Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
authorDavid S. Miller <davem@davemloft.net>
Mon, 13 Aug 2018 17:07:23 +0000 (10:07 -0700)
committerDavid S. Miller <davem@davemloft.net>
Mon, 13 Aug 2018 17:07:23 +0000 (10:07 -0700)
Daniel Borkmann says:

====================
pull-request: bpf-next 2018-08-13

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) Add driver XDP support for veth. This can be used in conjunction with
   a redirect from another XDP program, e.g. one sitting on the NIC, so
   that the xdp_frame can be forwarded to the peer veth directly without
   modification, from Toshiaki.

2) Add a new BPF map type REUSEPORT_SOCKARRAY and prog type SK_REUSEPORT
   in order to provide more control and visibility over where a SO_REUSEPORT
   sk should be located; the latter allows a sk to be selected directly
   from the bpf map. This also enables map-in-map for application migration
   use cases, from Martin.

3) Add a new BPF helper bpf_skb_ancestor_cgroup_id() that returns the id
   of the cgroup v2 ancestor, at the given ancestor_level, of the cgroup
   associated with the skb, from Andrey. (An illustrative usage sketch
   follows after the sign-off below.)

4) Implement BPF fs map pretty-print support based on BTF data for regular
   hash table and LRU map, from Yonghong.

5) Decouple the ability to attach BTF for a map from the key and value
   pretty-printer in BPF fs, and extend BTF support to percpu and LPM trie
   maps, from Daniel.

6) Implement a better BPF sample of using XDP's CPU redirect feature for
   load balancing SKB processing to remote CPUs. The sample implements the
   same XDP load balancing as Suricata does, a symmetric hash based on
   IP addresses and L4 protocol, from Jesper.

7) Revert adding NULL pointer check with WARN_ON_ONCE() in __xdp_return()'s
   critical path as it is ensured that the allocator is present, from Björn.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
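
As an editor's illustration of item 3 above (not part of this series), here
is a minimal sketch of how the new bpf_skb_ancestor_cgroup_id() helper might
be used from a tc classifier attached in direct-action mode. The section
name, the placeholder cgroup id, and the assumption that bpf_helpers.h
declares the new helper are illustrative, not taken from the patches:

    // SPDX-License-Identifier: GPL-2.0
    /* Hypothetical sketch: drop egress skbs whose cgroup v2 ancestor at
     * level 2 does not match a known id.
     */
    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include "bpf_helpers.h"            /* assumed to declare the helper */

    #define ALLOWED_ANCESTOR_ID 1234ULL /* placeholder cgroup v2 id */

    SEC("classifier")
    int match_ancestor(struct __sk_buff *skb)
    {
            /* id of the ancestor two levels up from the skb's cgroup */
            __u64 id = bpf_skb_ancestor_cgroup_id(skb, 2);

            return id == ALLOWED_ANCESTOR_ID ? TC_ACT_OK : TC_ACT_SHOT;
    }

    char _license[] SEC("license") = "GPL";

Such a program would be attached on a clsact qdisc (e.g. "tc filter add dev
eth0 egress bpf da obj prog.o sec classifier"); the selftests added in this
series exercise the helper via tc in a comparable way.
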
225 files changed:
Documentation/devicetree/bindings/net/mediatek-bluetooth.txt [new file with mode: 0644]
Documentation/networking/ip-sysctl.txt
MAINTAINERS
arch/mips/configs/bcm47xx_defconfig
arch/parisc/Kconfig
arch/powerpc/configs/wii_defconfig
arch/s390/Kconfig
drivers/block/zram/zram_drv.c
drivers/bluetooth/Kconfig
drivers/bluetooth/Makefile
drivers/bluetooth/btmtkuart.c [new file with mode: 0644]
drivers/bluetooth/btqca.c
drivers/bluetooth/hci_h5.c
drivers/crypto/virtio/virtio_crypto_core.c
drivers/i2c/busses/i2c-xlp9xx.c
drivers/net/dsa/rtl8366rb.c
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
drivers/net/ethernet/cavium/liquidio/lio_ethtool.c
drivers/net/ethernet/cavium/liquidio/liquidio_common.h
drivers/net/ethernet/chelsio/cxgb/cxgb2.c
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
drivers/net/ethernet/chelsio/cxgb4/t4fw_version.h
drivers/net/ethernet/emulex/benet/Kconfig
drivers/net/ethernet/emulex/benet/be.h
drivers/net/ethernet/emulex/benet/be_main.c
drivers/net/ethernet/marvell/mvneta.c
drivers/net/ethernet/mellanox/mlxsw/core.c
drivers/net/ethernet/mellanox/mlxsw/i2c.c
drivers/net/ethernet/mellanox/mlxsw/pci.c
drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
drivers/net/ethernet/microchip/Makefile
drivers/net/ethernet/microchip/lan743x_ethtool.c
drivers/net/ethernet/microchip/lan743x_main.c
drivers/net/ethernet/microchip/lan743x_main.h
drivers/net/ethernet/microchip/lan743x_ptp.c [new file with mode: 0644]
drivers/net/ethernet/microchip/lan743x_ptp.h [new file with mode: 0644]
drivers/net/ethernet/neterion/vxge/vxge-config.c
drivers/net/ethernet/ni/nixge.c
drivers/net/ethernet/qlogic/qed/qed_l2.c
drivers/net/ethernet/qlogic/qed/qed_main.c
drivers/net/ethernet/qlogic/qede/qede.h
drivers/net/ethernet/qlogic/qede/qede_ethtool.c
drivers/net/ethernet/qlogic/qede/qede_filter.c
drivers/net/ethernet/qlogic/qede/qede_fp.c
drivers/net/ethernet/qlogic/qede/qede_main.c
drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c
drivers/net/ethernet/realtek/r8169.c
drivers/net/ethernet/socionext/netsec.c
drivers/net/ethernet/ti/cpsw.c
drivers/net/ethernet/ti/cpsw_ale.c
drivers/net/phy/dp83640.c
drivers/net/phy/sfp.c
drivers/net/usb/r8152.c
drivers/net/virtio_net.c
drivers/net/wimax/i2400m/usb-fw.c
drivers/net/wimax/i2400m/usb-tx.c
drivers/net/wireless/ath/ath10k/ahb.c
drivers/net/wireless/ath/ath10k/core.c
drivers/net/wireless/ath/ath10k/core.h
drivers/net/wireless/ath/ath10k/debug.c
drivers/net/wireless/ath/ath10k/htt_tx.c
drivers/net/wireless/ath/ath10k/hw.h
drivers/net/wireless/ath/ath10k/mac.c
drivers/net/wireless/ath/ath10k/wmi-ops.h
drivers/net/wireless/ath/ath10k/wmi-tlv.c
drivers/net/wireless/ath/ath10k/wmi-tlv.h
drivers/net/wireless/ath/ath10k/wmi.c
drivers/net/wireless/ath/ath10k/wmi.h
drivers/net/wireless/ath/ath5k/phy.c
drivers/net/wireless/ath/ath6kl/bmi.c
drivers/net/wireless/ath/ath6kl/htc_pipe.c
drivers/net/wireless/ath/ath6kl/main.c
drivers/net/wireless/ath/ath6kl/txrx.c
drivers/net/wireless/ath/ath9k/ar9002_calib.c
drivers/net/wireless/ath/ath9k/ar9003_phy.c
drivers/net/wireless/ath/ath9k/hw.c
drivers/net/wireless/ath/ath9k/xmit.c
drivers/net/wireless/ath/wil6210/cfg80211.c
drivers/net/wireless/ath/wil6210/debugfs.c
drivers/net/wireless/ath/wil6210/fw.c
drivers/net/wireless/ath/wil6210/fw_inc.c
drivers/net/wireless/ath/wil6210/interrupt.c
drivers/net/wireless/ath/wil6210/main.c
drivers/net/wireless/ath/wil6210/pcie_bus.c
drivers/net/wireless/ath/wil6210/rx_reorder.c
drivers/net/wireless/ath/wil6210/txrx.c
drivers/net/wireless/ath/wil6210/txrx.h
drivers/net/wireless/ath/wil6210/txrx_edma.c
drivers/net/wireless/ath/wil6210/txrx_edma.h
drivers/net/wireless/ath/wil6210/wil6210.h
drivers/net/wireless/ath/wil6210/wil_platform.h
drivers/net/wireless/ath/wil6210/wmi.c
drivers/net/wireless/ath/wil6210/wmi.h
drivers/net/wireless/broadcom/b43/leds.c
drivers/net/wireless/broadcom/b43legacy/leds.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
drivers/net/wireless/intel/iwlegacy/3945-debug.c
drivers/net/wireless/mediatek/mt76/Kconfig
drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c
drivers/net/wireless/mediatek/mt76/mt76x0/mac.c
drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtc8723b2ant.c
drivers/net/wireless/rsi/rsi_91x_hal.c
drivers/net/xen-netfront.c
drivers/s390/net/qeth_core.h
drivers/s390/net/qeth_core_main.c
drivers/s390/net/qeth_core_mpc.c
drivers/s390/net/qeth_core_mpc.h
drivers/s390/net/qeth_l2_main.c
drivers/s390/net/qeth_l3_main.c
drivers/ssb/Kconfig
drivers/ssb/b43_pci_bridge.c
drivers/ssb/bridge_pcmcia_80211.c
drivers/ssb/driver_chipcommon.c
drivers/ssb/driver_chipcommon_pmu.c
drivers/ssb/driver_chipcommon_sflash.c
drivers/ssb/driver_extif.c
drivers/ssb/driver_gige.c
drivers/ssb/driver_gpio.c
drivers/ssb/driver_mipscore.c
drivers/ssb/driver_pcicore.c
drivers/ssb/embedded.c
drivers/ssb/host_soc.c
drivers/ssb/main.c
drivers/ssb/pci.c
drivers/ssb/pcmcia.c
drivers/ssb/scan.c
drivers/ssb/sdio.c
drivers/ssb/sprom.c
drivers/ssb/ssb_private.h
drivers/virtio/virtio_pci_common.c
drivers/virtio/virtio_pci_common.h
include/linux/cpumask.h
include/linux/netdevice.h
include/linux/netfilter/nfnetlink_osf.h
include/linux/qed/qed_eth_if.h
include/linux/skbuff.h
include/linux/ssb/ssb.h
include/linux/virtio_config.h
include/net/act_api.h
include/net/bluetooth/hci.h
include/net/gen_stats.h
include/net/inet_connection_sock.h
include/net/inet_frag.h
include/net/netfilter/nf_conntrack_l4proto.h
include/net/netfilter/nf_conntrack_timeout.h
include/net/netns/ipv6.h
include/net/sctp/structs.h
include/net/tls.h
include/trace/events/fib.h
include/uapi/linux/netfilter/nf_tables.h
include/uapi/linux/netfilter/nfnetlink_osf.h
include/uapi/linux/netfilter/xt_osf.h
include/uapi/linux/ppp-ioctl.h
include/uapi/linux/sysctl.h
kernel/bpf/cpumap.c
kernel/bpf/devmap.c
kernel/bpf/sockmap.c
lib/Kconfig.ubsan
mm/memory.c
net/bluetooth/hci_event.c
net/bluetooth/mgmt.c
net/core/dev.c
net/core/gen_estimator.c
net/core/net-sysfs.c
net/decnet/dn_route.c
net/ipv4/inet_fragment.c
net/ipv4/ip_fragment.c
net/ipv4/tcp_dctcp.c
net/ipv4/tcp_input.c
net/ipv6/af_inet6.c
net/ipv6/icmp.c
net/ipv6/ip6_gre.c
net/l2tp/l2tp_core.c
net/l2tp/l2tp_core.h
net/l2tp/l2tp_ip.c
net/l2tp/l2tp_ip6.c
net/l2tp/l2tp_netlink.c
net/l2tp/l2tp_ppp.c
net/netfilter/nf_conntrack_timeout.c
net/netfilter/nfnetlink_cttimeout.c
net/netfilter/nfnetlink_osf.c
net/netfilter/nft_ct.c
net/netfilter/nft_osf.c
net/netfilter/xt_CT.c
net/packet/af_packet.c
net/packet/internal.h
net/rxrpc/sysctl.c
net/sched/act_bpf.c
net/sched/act_connmark.c
net/sched/act_csum.c
net/sched/act_gact.c
net/sched/act_ife.c
net/sched/act_ipt.c
net/sched/act_mirred.c
net/sched/act_nat.c
net/sched/act_pedit.c
net/sched/act_police.c
net/sched/act_sample.c
net/sched/act_simple.c
net/sched/act_skbedit.c
net/sched/act_skbmod.c
net/sched/act_tunnel_key.c
net/sched/act_vlan.c
net/sched/cls_api.c
net/sctp/chunk.c
net/sctp/outqueue.c
net/sctp/socket.c
net/sctp/stream.c
net/sctp/stream_interleave.c
net/sctp/stream_sched.c
net/sctp/stream_sched_prio.c
net/sctp/stream_sched_rr.c
net/smc/smc_ism.c
net/tls/tls_sw.c
samples/bpf/xdp_redirect_cpu_kern.c
samples/bpf/xdp_redirect_cpu_user.c
scripts/Makefile.ubsan
tools/lib/bpf/btf.c
tools/lib/bpf/btf.h
tools/testing/selftests/bpf/test_sockmap.c
tools/testing/selftests/tc-testing/README
tools/testing/selftests/tc-testing/config [new file with mode: 0644]

diff --git a/Documentation/devicetree/bindings/net/mediatek-bluetooth.txt b/Documentation/devicetree/bindings/net/mediatek-bluetooth.txt
new file mode 100644
index 0000000..14ceb2a
--- /dev/null
@@ -0,0 +1,35 @@
+MediaTek SoC built-in Bluetooth Devices
+==================================
+
+This device is attached as a serial device to the BTIF device and thus must
+be a child node of the BTIF serial node. The dt-binding details for the BTIF
+device can be found in Documentation/devicetree/bindings/serial/8250.txt.
+
+Required properties:
+
+- compatible:  Must be
+                 "mediatek,mt7622-bluetooth": for MT7622 SoC
+- clocks:      Should be the clock specifiers corresponding to the entry in
+               clock-names property.
+- clock-names: Should contain "ref" entries.
+- power-domains: Phandle to the power domain that the device is part of
+
+Example:
+
+       btif: serial@1100c000 {
+               compatible = "mediatek,mt7622-btif",
+                            "mediatek,mtk-btif";
+               reg = <0 0x1100c000 0 0x1000>;
+               interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_LOW>;
+               clocks = <&pericfg CLK_PERI_BTIF_PD>;
+               clock-names = "main";
+               reg-shift = <2>;
+               reg-io-width = <4>;
+
+               bluetooth {
+                       compatible = "mediatek,mt7622-bluetooth";
+                       power-domains = <&scpsys MT7622_POWER_DOMAIN_WB>;
+                       clocks = <&clk25m>;
+                       clock-names = "ref";
+               };
+       };
diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt
index e74515ecaa9c4b8f66e2fe1a59b584f340ffa718..8313a636dd533540172859653bcfa173c1e03864 100644
@@ -1882,6 +1882,11 @@ ratelimit - INTEGER
        otherwise the minimal space between responses in milliseconds.
        Default: 1000
 
+echo_ignore_all - BOOLEAN
+       If set non-zero, then the kernel will ignore all ICMP ECHO
+       requests sent to it over the IPv6 protocol.
+       Default: 0
+
 xfrm6_gc_thresh - INTEGER
        The threshold at which we will start garbage collecting for IPv6
        destination cache entries.  At twice this value the system will
diff --git a/MAINTAINERS b/MAINTAINERS
index 82f277462349c3b62eb0f0ba1a9a72283e3e037c..4c05e18ced505d4486902d1a94c7fa4d31136ebf 100644
@@ -5930,7 +5930,7 @@ F:        Documentation/dev-tools/gcov.rst
 
 GDB KERNEL DEBUGGING HELPER SCRIPTS
 M:     Jan Kiszka <jan.kiszka@siemens.com>
-M:     Kieran Bingham <kieran@bingham.xyz>
+M:     Kieran Bingham <kbingham@kernel.org>
 S:     Supported
 F:     scripts/gdb/
 
@@ -8988,6 +8988,14 @@ F:       include/uapi/linux/meye.h
 F:     include/uapi/linux/ivtv*
 F:     include/uapi/linux/uvcvideo.h
 
+MEDIATEK BLUETOOTH DRIVER
+M:     Sean Wang <sean.wang@mediatek.com>
+L:     linux-bluetooth@vger.kernel.org
+L:     linux-mediatek@lists.infradead.org (moderated for non-subscribers)
+S:     Maintained
+F:     Documentation/devicetree/bindings/net/mediatek-bluetooth.txt
+F:     drivers/bluetooth/btmtkuart.c
+
 MEDIATEK CIR DRIVER
 M:     Sean Wang <sean.wang@mediatek.com>
 S:     Maintained
diff --git a/arch/mips/configs/bcm47xx_defconfig b/arch/mips/configs/bcm47xx_defconfig
index fad8e964f14ca4cf0d7de518471c35faeceb7d6e..ba800a892384c2568537bd2a2539b3aaa9fcc7a9 100644
@@ -66,7 +66,6 @@ CONFIG_HW_RANDOM=y
 CONFIG_GPIO_SYSFS=y
 CONFIG_WATCHDOG=y
 CONFIG_BCM47XX_WDT=y
-CONFIG_SSB_DEBUG=y
 CONFIG_SSB_DRIVER_GIGE=y
 CONFIG_BCMA_DRIVER_GMAC_CMN=y
 CONFIG_USB=y
diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index 46f656b8fc233af2984c3d7e9e7074a1aed80544..e7705dde953fc72d943154d456edc81b81fab0d4 100644
@@ -11,7 +11,6 @@ config PARISC
        select ARCH_HAS_ELF_RANDOMIZE
        select ARCH_HAS_STRICT_KERNEL_RWX
        select ARCH_HAS_UBSAN_SANITIZE_ALL
-       select ARCH_WANTS_UBSAN_NO_NULL
        select ARCH_SUPPORTS_MEMORY_FAILURE
        select RTC_CLASS
        select RTC_DRV_GENERIC
diff --git a/arch/powerpc/configs/wii_defconfig b/arch/powerpc/configs/wii_defconfig
index 10940533da7166ff656633ce897bc6b8d84ad362..f5c366b028289ecb411285636020256a6be3a9eb 100644
@@ -78,7 +78,6 @@ CONFIG_GPIO_HLWD=y
 CONFIG_POWER_RESET=y
 CONFIG_POWER_RESET_GPIO=y
 # CONFIG_HWMON is not set
-CONFIG_SSB_DEBUG=y
 CONFIG_FB=y
 # CONFIG_VGA_CONSOLE is not set
 CONFIG_FRAMEBUFFER_CONSOLE=y
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 8a1863d9ed53385cd5e72813e962bb4310c38abc..4fe5b2affa23306d5d7c07086e05d5baab293d62 100644
@@ -106,7 +106,6 @@ config S390
        select ARCH_USE_BUILTIN_BSWAP
        select ARCH_USE_CMPXCHG_LOCKREF
        select ARCH_WANTS_DYNAMIC_TASK_STRUCT
-       select ARCH_WANTS_UBSAN_NO_NULL
        select ARCH_WANT_IPC_PARSE_VERSION
        select BUILDTIME_EXTABLE_SORT
        select CLONE_BACKWARDS2
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 7436b2d27fa38513602207c6b4bc9213c97af436..a390c6d4f72df976e0880150207fef26acc3db97 100644
@@ -298,7 +298,8 @@ static void reset_bdev(struct zram *zram)
        zram->backing_dev = NULL;
        zram->old_block_size = 0;
        zram->bdev = NULL;
-
+       zram->disk->queue->backing_dev_info->capabilities |=
+                               BDI_CAP_SYNCHRONOUS_IO;
        kvfree(zram->bitmap);
        zram->bitmap = NULL;
 }
@@ -400,6 +401,18 @@ static ssize_t backing_dev_store(struct device *dev,
        zram->backing_dev = backing_dev;
        zram->bitmap = bitmap;
        zram->nr_pages = nr_pages;
+       /*
+        * With the writeback feature, zram does asynchronous IO, so it is no
+        * longer a synchronous device and the synchronous io flag must be
+        * removed. Otherwise, the upper layer (e.g., swap) could wait for IO
+        * completion rather than submit and return, which would make the
+        * system sluggish. Furthermore, when the IO function returns (e.g.,
+        * swap_readpage), the upper layer expects the IO to be done and may
+        * free the page, while the IO is in fact still in flight, which can
+        * lead to a use-after-free once the IO really completes.
+        */
+       zram->disk->queue->backing_dev_info->capabilities &=
+                       ~BDI_CAP_SYNCHRONOUS_IO;
        up_write(&zram->init_lock);
 
        pr_info("setup backing device %s\n", file_name);
diff --git a/drivers/bluetooth/Kconfig b/drivers/bluetooth/Kconfig
index 5f953ca8ac5bb11defd0de48f893f93092c9146a..2df11cc08a460a893e2ebcb7af510f70bd6e792a 100644
@@ -195,6 +195,19 @@ config BT_HCIUART_BCM
 
          Say Y here to compile support for Broadcom protocol.
 
+config BT_HCIUART_RTL
+       bool "Realtek protocol support"
+       depends on BT_HCIUART
+       depends on BT_HCIUART_SERDEV
+       depends on GPIOLIB
+       select BT_HCIUART_3WIRE
+       select BT_RTL
+       help
+         The Realtek protocol support enables Bluetooth HCI over 3-Wire
+         serial port interface for Realtek Bluetooth controllers.
+
+         Say Y here to compile support for Realtek protocol.
+
 config BT_HCIUART_QCA
        bool "Qualcomm Atheros protocol support"
        depends on BT_HCIUART
@@ -365,6 +378,17 @@ config BT_WILINK
          Say Y here to compile support for Texas Instrument's WiLink7 driver
          into the kernel or say M to compile it as module (btwilink).
 
+config BT_MTKUART
+       tristate "MediaTek HCI UART driver"
+       depends on SERIAL_DEV_BUS
+       help
+         MediaTek Bluetooth HCI UART driver.
+         This driver is required if you want to use MediaTek Bluetooth
+         with serial interface.
+
+         Say Y here to compile support for MediaTek Bluetooth UART devices
+         into the kernel or say M to compile it as module (btmtkuart).
+
 config BT_QCOMSMD
        tristate "Qualcomm SMD based HCI support"
        depends on RPMSG || (COMPILE_TEST && RPMSG=n)
diff --git a/drivers/bluetooth/Makefile b/drivers/bluetooth/Makefile
index ec16c55eb6e9bdf881740ecc2f639187a38a03f8..b7e393cfc1e31470a05978361015788edae3807d 100644
@@ -20,6 +20,7 @@ obj-$(CONFIG_BT_ATH3K)                += ath3k.o
 obj-$(CONFIG_BT_MRVL)          += btmrvl.o
 obj-$(CONFIG_BT_MRVL_SDIO)     += btmrvl_sdio.o
 obj-$(CONFIG_BT_WILINK)                += btwilink.o
+obj-$(CONFIG_BT_MTKUART)       += btmtkuart.o
 obj-$(CONFIG_BT_QCOMSMD)       += btqcomsmd.o
 obj-$(CONFIG_BT_BCM)           += btbcm.o
 obj-$(CONFIG_BT_RTL)           += btrtl.o
diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c
new file mode 100644
index 0000000..e0571fe
--- /dev/null
@@ -0,0 +1,629 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2018 MediaTek Inc.
+
+/*
+ * Bluetooth support for MediaTek serial devices
+ *
+ * Author: Sean Wang <sean.wang@mediatek.com>
+ *
+ */
+
+#include <asm/unaligned.h>
+#include <linux/atomic.h>
+#include <linux/clk.h>
+#include <linux/firmware.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/pm_runtime.h>
+#include <linux/serdev.h>
+#include <linux/skbuff.h>
+
+#include <net/bluetooth/bluetooth.h>
+#include <net/bluetooth/hci_core.h>
+
+#include "h4_recv.h"
+
+#define VERSION "0.1"
+
+#define FIRMWARE_MT7622                "mediatek/mt7622pr2h.bin"
+
+#define MTK_STP_TLR_SIZE       2
+
+#define BTMTKUART_TX_STATE_ACTIVE      1
+#define BTMTKUART_TX_STATE_WAKEUP      2
+#define BTMTKUART_TX_WAIT_VND_EVT      3
+
+enum {
+       MTK_WMT_PATCH_DWNLD = 0x1,
+       MTK_WMT_FUNC_CTRL = 0x6,
+       MTK_WMT_RST = 0x7
+};
+
+struct mtk_stp_hdr {
+       u8      prefix;
+       __be16  dlen;
+       u8      cs;
+} __packed;
+
+struct mtk_wmt_hdr {
+       u8      dir;
+       u8      op;
+       __le16  dlen;
+       u8      flag;
+} __packed;
+
+struct mtk_hci_wmt_cmd {
+       struct mtk_wmt_hdr hdr;
+       u8 data[256];
+} __packed;
+
+struct btmtkuart_dev {
+       struct hci_dev *hdev;
+       struct serdev_device *serdev;
+       struct clk *clk;
+
+       struct work_struct tx_work;
+       unsigned long tx_state;
+       struct sk_buff_head txq;
+
+       struct sk_buff *rx_skb;
+
+       u8      stp_pad[6];
+       u8      stp_cursor;
+       u16     stp_dlen;
+};
+
+static int mtk_hci_wmt_sync(struct hci_dev *hdev, u8 op, u8 flag, u16 plen,
+                           const void *param)
+{
+       struct btmtkuart_dev *bdev = hci_get_drvdata(hdev);
+       struct mtk_hci_wmt_cmd wc;
+       struct mtk_wmt_hdr *hdr;
+       u32 hlen;
+       int err;
+
+       hlen = sizeof(*hdr) + plen;
+       if (hlen > 255)
+               return -EINVAL;
+
+       hdr = (struct mtk_wmt_hdr *)&wc;
+       hdr->dir = 1;
+       hdr->op = op;
+       hdr->dlen = cpu_to_le16(plen + 1);
+       hdr->flag = flag;
+       memcpy(wc.data, param, plen);
+
+       set_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
+
+       err = __hci_cmd_send(hdev, 0xfc6f, hlen, &wc);
+       if (err < 0) {
+               clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
+               return err;
+       }
+
+       /* The vendor specific WMT commands are all answered by a vendor
+        * specific event and will not have the Command Status or Command
+        * Complete as with usual HCI command flow control.
+        *
+        * After sending the command, wait for BTMTKUART_TX_WAIT_VND_EVT
+        * state to be cleared. The driver specific event receive routine
+        * will clear that state and with that indicate completion of the
+        * WMT command.
+        */
+       err = wait_on_bit_timeout(&bdev->tx_state, BTMTKUART_TX_WAIT_VND_EVT,
+                                 TASK_INTERRUPTIBLE, HCI_INIT_TIMEOUT);
+       if (err == -EINTR) {
+               bt_dev_err(hdev, "Execution of wmt command interrupted");
+               return err;
+       }
+
+       if (err) {
+               bt_dev_err(hdev, "Execution of wmt command timed out");
+               return -ETIMEDOUT;
+       }
+
+       return 0;
+}
+
+static int mtk_setup_fw(struct hci_dev *hdev)
+{
+       const struct firmware *fw;
+       const u8 *fw_ptr;
+       size_t fw_size;
+       int err, dlen;
+       u8 flag;
+
+       err = request_firmware(&fw, FIRMWARE_MT7622, &hdev->dev);
+       if (err < 0) {
+               bt_dev_err(hdev, "Failed to load firmware file (%d)", err);
+               return err;
+       }
+
+       fw_ptr = fw->data;
+       fw_size = fw->size;
+
+       /* The size of the patch header is 30 bytes, which should be skipped */
+       if (fw_size < 30)
+               return -EINVAL;
+
+       fw_size -= 30;
+       fw_ptr += 30;
+       flag = 1;
+
+       while (fw_size > 0) {
+               dlen = min_t(int, 250, fw_size);
+
+               /* Tell device the position in sequence */
+               if (fw_size - dlen <= 0)
+                       flag = 3;
+               else if (fw_size < fw->size - 30)
+                       flag = 2;
+
+               err = mtk_hci_wmt_sync(hdev, MTK_WMT_PATCH_DWNLD, flag, dlen,
+                                      fw_ptr);
+               if (err < 0) {
+                       bt_dev_err(hdev, "Failed to send wmt patch dwnld (%d)",
+                                  err);
+                       break;
+               }
+
+               fw_size -= dlen;
+               fw_ptr += dlen;
+       }
+
+       release_firmware(fw);
+
+       return err;
+}
+
+static int btmtkuart_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+{
+       struct btmtkuart_dev *bdev = hci_get_drvdata(hdev);
+       struct hci_event_hdr *hdr = (void *)skb->data;
+       int err;
+
+       /* Fix up the vendor event id with 0xff for vendor specific instead
+        * of 0xe4 so that events sent via the monitoring socket can be parsed
+        * properly.
+        */
+       if (hdr->evt == 0xe4)
+               hdr->evt = HCI_EV_VENDOR;
+
+       err = hci_recv_frame(hdev, skb);
+
+       if (hdr->evt == HCI_EV_VENDOR) {
+               if (test_and_clear_bit(BTMTKUART_TX_WAIT_VND_EVT,
+                                      &bdev->tx_state)) {
+                       /* Barrier to sync with other CPUs */
+                       smp_mb__after_atomic();
+                       wake_up_bit(&bdev->tx_state, BTMTKUART_TX_WAIT_VND_EVT);
+               }
+       }
+
+       return err;
+}
+
+static const struct h4_recv_pkt mtk_recv_pkts[] = {
+       { H4_RECV_ACL,      .recv = hci_recv_frame },
+       { H4_RECV_SCO,      .recv = hci_recv_frame },
+       { H4_RECV_EVENT,    .recv = btmtkuart_recv_event },
+};
+
+static void btmtkuart_tx_work(struct work_struct *work)
+{
+       struct btmtkuart_dev *bdev = container_of(work, struct btmtkuart_dev,
+                                                  tx_work);
+       struct serdev_device *serdev = bdev->serdev;
+       struct hci_dev *hdev = bdev->hdev;
+
+       while (1) {
+               clear_bit(BTMTKUART_TX_STATE_WAKEUP, &bdev->tx_state);
+
+               while (1) {
+                       struct sk_buff *skb = skb_dequeue(&bdev->txq);
+                       int len;
+
+                       if (!skb)
+                               break;
+
+                       len = serdev_device_write_buf(serdev, skb->data,
+                                                     skb->len);
+                       hdev->stat.byte_tx += len;
+
+                       skb_pull(skb, len);
+                       if (skb->len > 0) {
+                               skb_queue_head(&bdev->txq, skb);
+                               break;
+                       }
+
+                       switch (hci_skb_pkt_type(skb)) {
+                       case HCI_COMMAND_PKT:
+                               hdev->stat.cmd_tx++;
+                               break;
+                       case HCI_ACLDATA_PKT:
+                               hdev->stat.acl_tx++;
+                               break;
+                       case HCI_SCODATA_PKT:
+                               hdev->stat.sco_tx++;
+                               break;
+                       }
+
+                       kfree_skb(skb);
+               }
+
+               if (!test_bit(BTMTKUART_TX_STATE_WAKEUP, &bdev->tx_state))
+                       break;
+       }
+
+       clear_bit(BTMTKUART_TX_STATE_ACTIVE, &bdev->tx_state);
+}
+
+static void btmtkuart_tx_wakeup(struct btmtkuart_dev *bdev)
+{
+       if (test_and_set_bit(BTMTKUART_TX_STATE_ACTIVE, &bdev->tx_state))
+               set_bit(BTMTKUART_TX_STATE_WAKEUP, &bdev->tx_state);
+
+       schedule_work(&bdev->tx_work);
+}
+
+static const unsigned char *
+mtk_stp_split(struct btmtkuart_dev *bdev, const unsigned char *data, int count,
+             int *sz_h4)
+{
+       struct mtk_stp_hdr *shdr;
+
+       /* The cursor is reset when all the STP data has been consumed */
+       if (!bdev->stp_dlen && bdev->stp_cursor >= 6)
+               bdev->stp_cursor = 0;
+
+       /* Filling pad until all STP info is obtained */
+       while (bdev->stp_cursor < 6 && count > 0) {
+               bdev->stp_pad[bdev->stp_cursor] = *data;
+               bdev->stp_cursor++;
+               data++;
+               count--;
+       }
+
+       /* Retrieve STP info and have a sanity check */
+       if (!bdev->stp_dlen && bdev->stp_cursor >= 6) {
+               shdr = (struct mtk_stp_hdr *)&bdev->stp_pad[2];
+               bdev->stp_dlen = be16_to_cpu(shdr->dlen) & 0x0fff;
+
+               /* Resync STP when unexpected data is being read */
+               if (shdr->prefix != 0x80 || bdev->stp_dlen > 2048) {
+                       bt_dev_err(bdev->hdev, "stp format unexpect (%d, %d)",
+                                  shdr->prefix, bdev->stp_dlen);
+                       bdev->stp_cursor = 2;
+                       bdev->stp_dlen = 0;
+               }
+       }
+
+       /* Quit directly when there's no data left for H4 to process */
+       if (count <= 0)
+               return NULL;
+
+       /* Translate to the amount of data H4 can handle so far */
+       *sz_h4 = min_t(int, count, bdev->stp_dlen);
+
+       /* Update the remaining size of STP packet */
+       bdev->stp_dlen -= *sz_h4;
+
+       /* Data points to STP payload which can be handled by H4 */
+       return data;
+}
+
+static int btmtkuart_recv(struct hci_dev *hdev, const u8 *data, size_t count)
+{
+       struct btmtkuart_dev *bdev = hci_get_drvdata(hdev);
+       const unsigned char *p_left = data, *p_h4;
+       int sz_left = count, sz_h4, adv;
+       int err;
+
+       while (sz_left > 0) {
+               /*  The serial data received from the MT7622 BT controller is
+                *  always padded with an STP header and trailer.
+                *
+                *  A full STP packet looks like
+                *   ------------------------------------
+                *  | STP header  |  H:4   | STP trailer |
+                *   ------------------------------------
+                *  but it is not guaranteed to contain a full H:4 packet:
+                *  a single H:4 packet may be fragmented across multiple
+                *  STP packets, so an STP header and its length do not
+                *  necessarily delimit a full H:4 frame. The length recorded
+                *  in the STP header only indicates how much data the H:4
+                *  engine can handle at this point.
+                */
+
+               p_h4 = mtk_stp_split(bdev, p_left, sz_left, &sz_h4);
+               if (!p_h4)
+                       break;
+
+               adv = p_h4 - p_left;
+               sz_left -= adv;
+               p_left += adv;
+
+               bdev->rx_skb = h4_recv_buf(bdev->hdev, bdev->rx_skb, p_h4,
+                                          sz_h4, mtk_recv_pkts,
+                                          sizeof(mtk_recv_pkts));
+               if (IS_ERR(bdev->rx_skb)) {
+                       err = PTR_ERR(bdev->rx_skb);
+                       bt_dev_err(bdev->hdev,
+                                  "Frame reassembly failed (%d)", err);
+                       bdev->rx_skb = NULL;
+                       return err;
+               }
+
+               sz_left -= sz_h4;
+               p_left += sz_h4;
+       }
+
+       return 0;
+}
+
+static int btmtkuart_receive_buf(struct serdev_device *serdev, const u8 *data,
+                                size_t count)
+{
+       struct btmtkuart_dev *bdev = serdev_device_get_drvdata(serdev);
+       int err;
+
+       err = btmtkuart_recv(bdev->hdev, data, count);
+       if (err < 0)
+               return err;
+
+       bdev->hdev->stat.byte_rx += count;
+
+       return count;
+}
+
+static void btmtkuart_write_wakeup(struct serdev_device *serdev)
+{
+       struct btmtkuart_dev *bdev = serdev_device_get_drvdata(serdev);
+
+       btmtkuart_tx_wakeup(bdev);
+}
+
+static const struct serdev_device_ops btmtkuart_client_ops = {
+       .receive_buf = btmtkuart_receive_buf,
+       .write_wakeup = btmtkuart_write_wakeup,
+};
+
+static int btmtkuart_open(struct hci_dev *hdev)
+{
+       struct btmtkuart_dev *bdev = hci_get_drvdata(hdev);
+       struct device *dev;
+       int err;
+
+       err = serdev_device_open(bdev->serdev);
+       if (err) {
+               bt_dev_err(hdev, "Unable to open UART device %s",
+                          dev_name(&bdev->serdev->dev));
+               goto err_open;
+       }
+
+       bdev->stp_cursor = 2;
+       bdev->stp_dlen = 0;
+
+       dev = &bdev->serdev->dev;
+
+       /* Enable the power domain and clock the device requires */
+       pm_runtime_enable(dev);
+       err = pm_runtime_get_sync(dev);
+       if (err < 0) {
+               pm_runtime_put_noidle(dev);
+               goto err_disable_rpm;
+       }
+
+       err = clk_prepare_enable(bdev->clk);
+       if (err < 0)
+               goto err_put_rpm;
+
+       return 0;
+
+err_put_rpm:
+       pm_runtime_put_sync(dev);
+err_disable_rpm:
+       pm_runtime_disable(dev);
+err_open:
+       return err;
+}
+
+static int btmtkuart_close(struct hci_dev *hdev)
+{
+       struct btmtkuart_dev *bdev = hci_get_drvdata(hdev);
+       struct device *dev = &bdev->serdev->dev;
+
+       /* Shutdown the clock and power domain the device requires */
+       clk_disable_unprepare(bdev->clk);
+       pm_runtime_put_sync(dev);
+       pm_runtime_disable(dev);
+
+       serdev_device_close(bdev->serdev);
+
+       return 0;
+}
+
+static int btmtkuart_flush(struct hci_dev *hdev)
+{
+       struct btmtkuart_dev *bdev = hci_get_drvdata(hdev);
+
+       /* Flush any pending characters */
+       serdev_device_write_flush(bdev->serdev);
+       skb_queue_purge(&bdev->txq);
+
+       cancel_work_sync(&bdev->tx_work);
+
+       kfree_skb(bdev->rx_skb);
+       bdev->rx_skb = NULL;
+
+       bdev->stp_cursor = 2;
+       bdev->stp_dlen = 0;
+
+       return 0;
+}
+
+static int btmtkuart_setup(struct hci_dev *hdev)
+{
+       u8 param = 0x1;
+       int err = 0;
+
+       /* Setup a firmware which the device definitely requires */
+       err = mtk_setup_fw(hdev);
+       if (err < 0)
+               return err;
+
+       /* Activate the function provided by the firmware */
+       err = mtk_hci_wmt_sync(hdev, MTK_WMT_RST, 0x4, 0, 0);
+       if (err < 0) {
+               bt_dev_err(hdev, "Failed to send wmt rst (%d)", err);
+               return err;
+       }
+
+       /* Enable Bluetooth protocol */
+       err = mtk_hci_wmt_sync(hdev, MTK_WMT_FUNC_CTRL, 0x0, sizeof(param),
+                              &param);
+       if (err < 0) {
+               bt_dev_err(hdev, "Failed to send wmt func ctrl (%d)", err);
+               return err;
+       }
+
+       return 0;
+}
+
+static int btmtkuart_shutdown(struct hci_dev *hdev)
+{
+       u8 param = 0x0;
+       int err;
+
+       /* Disable the device */
+       err = mtk_hci_wmt_sync(hdev, MTK_WMT_FUNC_CTRL, 0x0, sizeof(param),
+                              &param);
+       if (err < 0) {
+               bt_dev_err(hdev, "Failed to send wmt func ctrl (%d)", err);
+               return err;
+       }
+
+       return 0;
+}
+
+static int btmtkuart_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
+{
+       struct btmtkuart_dev *bdev = hci_get_drvdata(hdev);
+       struct mtk_stp_hdr *shdr;
+       int err, dlen, type = 0;
+
+       /* Prepend skb with frame type */
+       memcpy(skb_push(skb, 1), &hci_skb_pkt_type(skb), 1);
+
+       /* Make sure that there is enough room for the STP header and trailer */
+       if (unlikely(skb_headroom(skb) < sizeof(*shdr)) ||
+           (skb_tailroom(skb) < MTK_STP_TLR_SIZE)) {
+               err = pskb_expand_head(skb, sizeof(*shdr), MTK_STP_TLR_SIZE,
+                                      GFP_ATOMIC);
+               if (err < 0)
+                       return err;
+       }
+
+       /* Add the STP header */
+       dlen = skb->len;
+       shdr = skb_push(skb, sizeof(*shdr));
+       shdr->prefix = 0x80;
+       shdr->dlen = cpu_to_be16((dlen & 0x0fff) | (type << 12));
+       shdr->cs = 0;           /* MT7622 doesn't care about checksum value */
+
+       /* Add the STP trailer */
+       skb_put_zero(skb, MTK_STP_TLR_SIZE);
+
+       skb_queue_tail(&bdev->txq, skb);
+
+       btmtkuart_tx_wakeup(bdev);
+       return 0;
+}
+
+static int btmtkuart_probe(struct serdev_device *serdev)
+{
+       struct btmtkuart_dev *bdev;
+       struct hci_dev *hdev;
+
+       bdev = devm_kzalloc(&serdev->dev, sizeof(*bdev), GFP_KERNEL);
+       if (!bdev)
+               return -ENOMEM;
+
+       bdev->clk = devm_clk_get(&serdev->dev, "ref");
+       if (IS_ERR(bdev->clk))
+               return PTR_ERR(bdev->clk);
+
+       bdev->serdev = serdev;
+       serdev_device_set_drvdata(serdev, bdev);
+
+       serdev_device_set_client_ops(serdev, &btmtkuart_client_ops);
+
+       INIT_WORK(&bdev->tx_work, btmtkuart_tx_work);
+       skb_queue_head_init(&bdev->txq);
+
+       /* Initialize and register HCI device */
+       hdev = hci_alloc_dev();
+       if (!hdev) {
+               dev_err(&serdev->dev, "Can't allocate HCI device\n");
+               return -ENOMEM;
+       }
+
+       bdev->hdev = hdev;
+
+       hdev->bus = HCI_UART;
+       hci_set_drvdata(hdev, bdev);
+
+       hdev->open     = btmtkuart_open;
+       hdev->close    = btmtkuart_close;
+       hdev->flush    = btmtkuart_flush;
+       hdev->setup    = btmtkuart_setup;
+       hdev->shutdown = btmtkuart_shutdown;
+       hdev->send     = btmtkuart_send_frame;
+       SET_HCIDEV_DEV(hdev, &serdev->dev);
+
+       hdev->manufacturer = 70;
+       set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+
+       if (hci_register_dev(hdev) < 0) {
+               dev_err(&serdev->dev, "Can't register HCI device\n");
+               hci_free_dev(hdev);
+               return -ENODEV;
+       }
+
+       return 0;
+}
+
+static void btmtkuart_remove(struct serdev_device *serdev)
+{
+       struct btmtkuart_dev *bdev = serdev_device_get_drvdata(serdev);
+       struct hci_dev *hdev = bdev->hdev;
+
+       hci_unregister_dev(hdev);
+       hci_free_dev(hdev);
+}
+
+#ifdef CONFIG_OF
+static const struct of_device_id mtk_of_match_table[] = {
+       { .compatible = "mediatek,mt7622-bluetooth"},
+       { }
+};
+MODULE_DEVICE_TABLE(of, mtk_of_match_table);
+#endif
+
+static struct serdev_device_driver btmtkuart_driver = {
+       .probe = btmtkuart_probe,
+       .remove = btmtkuart_remove,
+       .driver = {
+               .name = "btmtkuart",
+               .of_match_table = of_match_ptr(mtk_of_match_table),
+       },
+};
+
+module_serdev_device_driver(btmtkuart_driver);
+
+MODULE_AUTHOR("Sean Wang <sean.wang@mediatek.com>");
+MODULE_DESCRIPTION("MediaTek Bluetooth Serial driver ver " VERSION);
+MODULE_VERSION(VERSION);
+MODULE_LICENSE("GPL");
+MODULE_FIRMWARE(FIRMWARE_MT7622);
diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
index 488f5e7521dd0fa5bd8093ba81da38be883436db..ec9e03a6b7786084b634b73f24ee5cf0dfc2c70f 100644
@@ -39,7 +39,7 @@ int qca_read_soc_version(struct hci_dev *hdev, u32 *soc_version)
 
        cmd = EDL_PATCH_VER_REQ_CMD;
        skb = __hci_cmd_sync_ev(hdev, EDL_PATCH_CMD_OPCODE, EDL_PATCH_CMD_LEN,
-                               &cmd, HCI_VENDOR_PKT, HCI_INIT_TIMEOUT);
+                               &cmd, HCI_EV_VENDOR, HCI_INIT_TIMEOUT);
        if (IS_ERR(skb)) {
                err = PTR_ERR(skb);
                bt_dev_err(hdev, "Reading QCA version information failed (%d)",
@@ -229,7 +229,7 @@ static int qca_tlv_send_segment(struct hci_dev *hdev, int seg_size,
                                      cmd);
 
        skb = __hci_cmd_sync_ev(hdev, EDL_PATCH_CMD_OPCODE, seg_size + 2, cmd,
-                               HCI_VENDOR_PKT, HCI_INIT_TIMEOUT);
+                               HCI_EV_VENDOR, HCI_INIT_TIMEOUT);
        if (IS_ERR(skb)) {
                err = PTR_ERR(skb);
                bt_dev_err(hdev, "QCA Failed to send TLV segment (%d)", err);
@@ -318,7 +318,7 @@ int qca_set_bdaddr_rome(struct hci_dev *hdev, const bdaddr_t *bdaddr)
        cmd[2] = sizeof(bdaddr_t);      /* size */
        memcpy(cmd + 3, bdaddr, sizeof(bdaddr_t));
        skb = __hci_cmd_sync_ev(hdev, EDL_NVM_ACCESS_OPCODE, sizeof(cmd), cmd,
-                               HCI_VENDOR_PKT, HCI_INIT_TIMEOUT);
+                               HCI_EV_VENDOR, HCI_INIT_TIMEOUT);
        if (IS_ERR(skb)) {
                err = PTR_ERR(skb);
                bt_dev_err(hdev, "QCA Change address command failed (%d)", err);
diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
index 63c0dcbc49141c53e74b9158abbdff3e785e60a2..8eede1197cd2ee7f6e32042d29fff7a660a14e92 100644
@@ -841,6 +841,7 @@ static void h5_serdev_remove(struct serdev_device *serdev)
        hci_uart_unregister_device(&h5->serdev_hu);
 }
 
+#ifdef CONFIG_BT_HCIUART_RTL
 static int h5_btrtl_setup(struct h5 *h5)
 {
        struct btrtl_device_info *btrtl_dev;
@@ -922,10 +923,13 @@ static struct h5_vnd rtl_vnd = {
        .close          = h5_btrtl_close,
        .acpi_gpio_map  = acpi_btrtl_gpios,
 };
+#endif
 
 #ifdef CONFIG_ACPI
 static const struct acpi_device_id h5_acpi_match[] = {
+#ifdef CONFIG_BT_HCIUART_RTL
        { "OBDA8723", (kernel_ulong_t)&rtl_vnd },
+#endif
        { },
 };
 MODULE_DEVICE_TABLE(acpi, h5_acpi_match);
diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
index 83326986c113db43edb947d236d7fc8e1dfc55be..7c719855369989d4a90b4a1288d858c2db1b353a 100644
@@ -146,7 +146,7 @@ static void virtcrypto_clean_affinity(struct virtio_crypto *vi, long hcpu)
 
        if (vi->affinity_hint_set) {
                for (i = 0; i < vi->max_data_queues; i++)
-                       virtqueue_set_affinity(vi->data_vq[i].vq, -1);
+                       virtqueue_set_affinity(vi->data_vq[i].vq, NULL);
 
                vi->affinity_hint_set = false;
        }
@@ -173,7 +173,7 @@ static void virtcrypto_set_affinity(struct virtio_crypto *vcrypto)
         *
         */
        for_each_online_cpu(cpu) {
-               virtqueue_set_affinity(vcrypto->data_vq[i].vq, cpu);
+               virtqueue_set_affinity(vcrypto->data_vq[i].vq, cpumask_of(cpu));
                if (++i >= vcrypto->max_data_queues)
                        break;
        }
diff --git a/drivers/i2c/busses/i2c-xlp9xx.c b/drivers/i2c/busses/i2c-xlp9xx.c
index 1f41a4f89c08f4c4b2da8cf3f0feb2bd643e9b5e..8a873975cf1252006e91d437893b2aff2791e843 100644
@@ -191,28 +191,43 @@ static void xlp9xx_i2c_drain_rx_fifo(struct xlp9xx_i2c_dev *priv)
        if (priv->len_recv) {
                /* read length byte */
                rlen = xlp9xx_read_i2c_reg(priv, XLP9XX_I2C_MRXFIFO);
+
+               /*
+                * We expect at least 2 interrupts for I2C_M_RECV_LEN
+                * transactions. The length is updated during the first
+                * interrupt, and the buffer contents are only copied
+                * during subsequent interrupts. If the interrupts get
+                * merged, we would complete the transaction without copying
+                * out the bytes from the RX fifo. To avoid this, we now
+                * drain the fifo as and when data is available.
+                * We drained the rlen byte already, decrement total length
+                * by one.
+                */
+
+               len--;
                if (rlen > I2C_SMBUS_BLOCK_MAX || rlen == 0) {
                        rlen = 0;       /*abort transfer */
                        priv->msg_buf_remaining = 0;
                        priv->msg_len = 0;
-               } else {
-                       *buf++ = rlen;
-                       if (priv->client_pec)
-                               ++rlen; /* account for error check byte */
-                       /* update remaining bytes and message length */
-                       priv->msg_buf_remaining = rlen;
-                       priv->msg_len = rlen + 1;
+                       xlp9xx_i2c_update_rlen(priv);
+                       return;
                }
+
+               *buf++ = rlen;
+               if (priv->client_pec)
+                       ++rlen; /* account for error check byte */
+               /* update remaining bytes and message length */
+               priv->msg_buf_remaining = rlen;
+               priv->msg_len = rlen + 1;
                xlp9xx_i2c_update_rlen(priv);
                priv->len_recv = false;
-       } else {
-               len = min(priv->msg_buf_remaining, len);
-               for (i = 0; i < len; i++, buf++)
-                       *buf = xlp9xx_read_i2c_reg(priv, XLP9XX_I2C_MRXFIFO);
-
-               priv->msg_buf_remaining -= len;
        }
 
+       len = min(priv->msg_buf_remaining, len);
+       for (i = 0; i < len; i++, buf++)
+               *buf = xlp9xx_read_i2c_reg(priv, XLP9XX_I2C_MRXFIFO);
+
+       priv->msg_buf_remaining -= len;
        priv->msg_buf = buf;
 
        if (priv->msg_buf_remaining)
diff --git a/drivers/net/dsa/rtl8366rb.c b/drivers/net/dsa/rtl8366rb.c
index 1e55b9bf8b563692ea8bf25ee84bfe0f30cf097e..a4d5049df6928281edaddb198457c064989e3657 100644
 #define RTL8366RB_SSCR2                                0x0004
 #define RTL8366RB_SSCR2_DROP_UNKNOWN_DA                BIT(0)
 
+/* Port Mode Control registers */
+#define RTL8366RB_PMC0                         0x0005
+#define RTL8366RB_PMC0_SPI                     BIT(0)
+#define RTL8366RB_PMC0_EN_AUTOLOAD             BIT(1)
+#define RTL8366RB_PMC0_PROBE                   BIT(2)
+#define RTL8366RB_PMC0_DIS_BISR                        BIT(3)
+#define RTL8366RB_PMC0_ADCTEST                 BIT(4)
+#define RTL8366RB_PMC0_SRAM_DIAG               BIT(5)
+#define RTL8366RB_PMC0_EN_SCAN                 BIT(6)
+#define RTL8366RB_PMC0_P4_IOMODE_SHIFT         7
+#define RTL8366RB_PMC0_P4_IOMODE_MASK          GENMASK(9, 7)
+#define RTL8366RB_PMC0_P5_IOMODE_SHIFT         10
+#define RTL8366RB_PMC0_P5_IOMODE_MASK          GENMASK(12, 10)
+#define RTL8366RB_PMC0_SDSMODE_SHIFT           13
+#define RTL8366RB_PMC0_SDSMODE_MASK            GENMASK(15, 13)
+#define RTL8366RB_PMC1                         0x0006
+
 /* Port Mirror Control Register */
 #define RTL8366RB_PMCR                         0x0007
 #define RTL8366RB_PMCR_SOURCE_PORT(a)          (a)
@@ -860,6 +877,19 @@ static int rtl8366rb_setup(struct dsa_switch *ds)
        if (ret)
                return ret;
 
+       /* Port 4 setup: this enables Port 4, usually the WAN port. The
+        * common PHY IO mode is apparently mode 0, which is not what
+        * the port is initialized to. There is no explanation of the
+        * IO modes in the Realtek source code, if your WAN port is
+        * connected to something exotic such as fiber, then this might
+        * be worth experimenting with.
+        */
+       ret = regmap_update_bits(smi->map, RTL8366RB_PMC0,
+                                RTL8366RB_PMC0_P4_IOMODE_MASK,
+                                0 << RTL8366RB_PMC0_P4_IOMODE_SHIFT);
+       if (ret)
+               return ret;
+
        /* Discard VLAN tagged packets if the port is not a member of
         * the VLAN with which the packets is associated.
         */
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index b6dbc3f6d309ef984034fbf6eee212a9f67e5541..9c929cd90b86854df86894c7a063b0d17429e3cd 100644
@@ -2823,7 +2823,7 @@ bnxt_fill_coredump_seg_hdr(struct bnxt *bp,
                           int status, u32 duration, u32 instance)
 {
        memset(seg_hdr, 0, sizeof(*seg_hdr));
-       strcpy(seg_hdr->signature, "sEgM");
+       memcpy(seg_hdr->signature, "sEgM", 4);
        if (seg_rec) {
                seg_hdr->component_id = (__force __le32)seg_rec->component_id;
                seg_hdr->segment_id = (__force __le32)seg_rec->segment_id;
@@ -2855,7 +2855,7 @@ bnxt_fill_coredump_record(struct bnxt *bp, struct bnxt_coredump_record *record,
 
        time64_to_tm(start, 0, &tm);
        memset(record, 0, sizeof(*record));
-       strcpy(record->signature, "cOrE");
+       memcpy(record->signature, "cOrE", 4);
        record->flags = 0;
        record->low_version = 0;
        record->high_version = 1;
@@ -2876,7 +2876,7 @@ bnxt_fill_coredump_record(struct bnxt *bp, struct bnxt_coredump_record *record,
        record->os_ver_major = cpu_to_le32(os_ver_major);
        record->os_ver_minor = cpu_to_le32(os_ver_minor);
 
-       strcpy(record->os_name, utsname()->sysname);
+       strlcpy(record->os_name, utsname()->sysname, 32);
        time64_to_tm(end, 0, &tm);
        record->end_year = cpu_to_le16(tm.tm_year + 1900);
        record->end_month = cpu_to_le16(tm.tm_mon + 1);
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_ethtool.c b/drivers/net/ethernet/cavium/liquidio/lio_ethtool.c
index 06f7449c569d5cdc03051b1f8e5617643a910406..8e05afd5e39c028becb3daebf177ed52c8b3d4ad 100644
@@ -389,18 +389,14 @@ static int lio_set_link_ksettings(struct net_device *netdev,
        struct lio *lio = GET_LIO(netdev);
        struct oct_link_info *linfo;
        struct octeon_device *oct;
-       u32 is25G = 0;
 
        oct = lio->oct_dev;
 
        linfo = &lio->linfo;
 
-       if (oct->subsystem_id == OCTEON_CN2350_25GB_SUBSYS_ID ||
-           oct->subsystem_id == OCTEON_CN2360_25GB_SUBSYS_ID) {
-               is25G = 1;
-       } else {
+       if (!(oct->subsystem_id == OCTEON_CN2350_25GB_SUBSYS_ID ||
+             oct->subsystem_id == OCTEON_CN2360_25GB_SUBSYS_ID))
                return -EOPNOTSUPP;
-       }
 
        if (oct->no_speed_setting) {
                dev_err(&oct->pci_dev->dev, "%s: Changing speed is not supported\n",
@@ -857,7 +853,14 @@ static int lio_set_phys_id(struct net_device *netdev,
 {
        struct lio *lio = GET_LIO(netdev);
        struct octeon_device *oct = lio->oct_dev;
+       struct oct_link_info *linfo;
        int value, ret;
+       u32 cur_ver;
+
+       linfo = &lio->linfo;
+       cur_ver = OCT_FW_VER(oct->fw_info.ver.maj,
+                            oct->fw_info.ver.min,
+                            oct->fw_info.ver.rev);
 
        switch (state) {
        case ETHTOOL_ID_ACTIVE:
@@ -896,16 +899,22 @@ static int lio_set_phys_id(struct net_device *netdev,
                                return ret;
                } else if (oct->chip_id == OCTEON_CN23XX_PF_VID) {
                        octnet_id_active(netdev, LED_IDENTIFICATION_ON);
-
-                       /* returns 0 since updates are asynchronous */
-                       return 0;
+                       if (linfo->link.s.phy_type == LIO_PHY_PORT_TP &&
+                           cur_ver > OCT_FW_VER(1, 7, 2))
+                               return 2;
+                       else
+                               return 0;
                } else {
                        return -EINVAL;
                }
                break;
 
        case ETHTOOL_ID_ON:
-               if (oct->chip_id == OCTEON_CN66XX)
+               if (oct->chip_id == OCTEON_CN23XX_PF_VID &&
+                   linfo->link.s.phy_type == LIO_PHY_PORT_TP &&
+                   cur_ver > OCT_FW_VER(1, 7, 2))
+                       octnet_id_active(netdev, LED_IDENTIFICATION_ON);
+               else if (oct->chip_id == OCTEON_CN66XX)
                        octnet_gpio_access(netdev, VITESSE_PHY_GPIO_CFG,
                                           VITESSE_PHY_GPIO_HIGH);
                else
@@ -914,7 +923,11 @@ static int lio_set_phys_id(struct net_device *netdev,
                break;
 
        case ETHTOOL_ID_OFF:
-               if (oct->chip_id == OCTEON_CN66XX)
+               if (oct->chip_id == OCTEON_CN23XX_PF_VID &&
+                   linfo->link.s.phy_type == LIO_PHY_PORT_TP &&
+                   cur_ver > OCT_FW_VER(1, 7, 2))
+                       octnet_id_active(netdev, LED_IDENTIFICATION_OFF);
+               else if (oct->chip_id == OCTEON_CN66XX)
                        octnet_gpio_access(netdev, VITESSE_PHY_GPIO_CFG,
                                           VITESSE_PHY_GPIO_LOW);
                else
diff --git a/drivers/net/ethernet/cavium/liquidio/liquidio_common.h b/drivers/net/ethernet/cavium/liquidio/liquidio_common.h
index 690424b6781ad4378a6dfdeac73325b53ec9b5ee..7407fcd338e9fd716346a607542cb4cdb3fe56bb 100644
@@ -907,6 +907,7 @@ static inline int opcode_slow_path(union octeon_rh *rh)
 #define VITESSE_PHY_GPIO_LOW      0x3
 #define LED_IDENTIFICATION_ON     0x1
 #define LED_IDENTIFICATION_OFF    0x0
+#define LIO23XX_COPPERHEAD_LED_GPIO 0x2
 
 struct oct_mdio_cmd {
        u64 op;
diff --git a/drivers/net/ethernet/chelsio/cxgb/cxgb2.c b/drivers/net/ethernet/chelsio/cxgb/cxgb2.c
index 8623be13bf86f2cc82f579daadc8121dcfb8432d..0ccdde366ae171fa6ccb19c1522429b625deefb9 100644
@@ -109,10 +109,6 @@ static int disable_msi = 0;
 module_param(disable_msi, int, 0);
 MODULE_PARM_DESC(disable_msi, "Disable Message Signaled Interrupt (MSI)");
 
-static const char pci_speed[][4] = {
-       "33", "66", "100", "133"
-};
-
 /*
  * Setup MAC to receive the types of packets we want.
  */
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
index 3da9299cd786e399b31852bbbf9544a9fc57c8fb..76d16747f513c39f362640a20a4938f39e2cf9bb 100644
@@ -1853,4 +1853,5 @@ void cxgb4_write_sgl(const struct sk_buff *skb, struct sge_txq *q,
 void cxgb4_ring_tx_db(struct adapter *adap, struct sge_txq *q, int n);
 int t4_set_vlan_acl(struct adapter *adap, unsigned int mbox, unsigned int vf,
                    u16 vlan);
+int cxgb4_dcb_enabled(const struct net_device *dev);
 #endif /* __CXGB4_H__ */
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
index 4e7f72b17e825c847f631efbaf3ab684465c4ca0..b34f0f077a3107662ffb2799e09a40e05ec5848a 100644
@@ -22,7 +22,7 @@
 
 /* DCBx version control
  */
-static const char * const dcb_ver_array[] = {
+const char * const dcb_ver_array[] = {
        "Unknown",
        "DCBx-CIN",
        "DCBx-CEE 1.01",
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
index 6f312e03432fcadf7592757756e23728377e2cc1..0f72f9c4ec74c6f42722d3e731071b535d1bab33 100644
@@ -2414,6 +2414,196 @@ static const struct file_operations rss_vf_config_debugfs_fops = {
        .release = seq_release_private
 };
 
+#ifdef CONFIG_CHELSIO_T4_DCB
+extern char *dcb_ver_array[];
+
+/* Data Center Bridging information for each port.
+ */
+static int dcb_info_show(struct seq_file *seq, void *v)
+{
+       struct adapter *adap = seq->private;
+
+       if (v == SEQ_START_TOKEN) {
+               seq_puts(seq, "Data Center Bridging Information\n");
+       } else {
+               int port = (uintptr_t)v - 2;
+               struct net_device *dev = adap->port[port];
+               struct port_info *pi = netdev2pinfo(dev);
+               struct port_dcb_info *dcb = &pi->dcb;
+
+               seq_puts(seq, "\n");
+               seq_printf(seq, "Port: %d (DCB negotiated: %s)\n",
+                          port,
+                          cxgb4_dcb_enabled(dev) ? "yes" : "no");
+
+               if (cxgb4_dcb_enabled(dev))
+                       seq_printf(seq, "[ DCBx Version %s ]\n",
+                                  dcb_ver_array[dcb->dcb_version]);
+
+               if (dcb->msgs) {
+                       int i;
+
+                       seq_puts(seq, "\n  Index\t\t\t  :\t");
+                       for (i = 0; i < 8; i++)
+                               seq_printf(seq, " %3d", i);
+                       seq_puts(seq, "\n\n");
+               }
+
+               if (dcb->msgs & CXGB4_DCB_FW_PGID) {
+                       int prio, pgid;
+
+                       seq_puts(seq, "  Priority Group IDs\t  :\t");
+                       for (prio = 0; prio < 8; prio++) {
+                               pgid = (dcb->pgid >> 4 * (7 - prio)) & 0xf;
+                               seq_printf(seq, " %3d", pgid);
+                       }
+                       seq_puts(seq, "\n");
+               }
+
+               if (dcb->msgs & CXGB4_DCB_FW_PGRATE) {
+                       int pg;
+
+                       seq_puts(seq, "  Priority Group BW(%)\t  :\t");
+                       for (pg = 0; pg < 8; pg++)
+                               seq_printf(seq, " %3d", dcb->pgrate[pg]);
+                       seq_puts(seq, "\n");
+
+                       if (dcb->dcb_version == FW_PORT_DCB_VER_IEEE) {
+                               seq_puts(seq, "  TSA Algorithm\t\t  :\t");
+                               for (pg = 0; pg < 8; pg++)
+                                       seq_printf(seq, " %3d", dcb->tsa[pg]);
+                               seq_puts(seq, "\n");
+                       }
+
+                       seq_printf(seq, "  Max PG Traffic Classes  [%3d  ]\n",
+                                  dcb->pg_num_tcs_supported);
+
+                       seq_puts(seq, "\n");
+               }
+
+               if (dcb->msgs & CXGB4_DCB_FW_PRIORATE) {
+                       int prio;
+
+                       seq_puts(seq, "  Priority Rate\t:\t");
+                       for (prio = 0; prio < 8; prio++)
+                               seq_printf(seq, " %3d", dcb->priorate[prio]);
+                       seq_puts(seq, "\n");
+               }
+
+               if (dcb->msgs & CXGB4_DCB_FW_PFC) {
+                       int prio;
+
+                       seq_puts(seq, "  Priority Flow Control   :\t");
+                       for (prio = 0; prio < 8; prio++) {
+                               int pfcen = (dcb->pfcen >> 1 * (7 - prio))
+                                           & 0x1;
+                               seq_printf(seq, " %3d", pfcen);
+                       }
+                       seq_puts(seq, "\n");
+
+                       seq_printf(seq, "  Max PFC Traffic Classes [%3d  ]\n",
+                                  dcb->pfc_num_tcs_supported);
+
+                       seq_puts(seq, "\n");
+               }
+
+               if (dcb->msgs & CXGB4_DCB_FW_APP_ID) {
+                       int app, napps;
+
+                       seq_puts(seq, "  Application Information:\n");
+                       seq_puts(seq, "  App    Priority    Selection         Protocol\n");
+                       seq_puts(seq, "  Index  Map         Field             ID\n");
+                       for (app = 0, napps = 0;
+                            app < CXGB4_MAX_DCBX_APP_SUPPORTED; app++) {
+                               struct app_priority *ap;
+                               static const char * const sel_names[] = {
+                                       "Ethertype",
+                                       "Socket TCP",
+                                       "Socket UDP",
+                                       "Socket All",
+                               };
+                               const char *sel_name;
+
+                               ap = &dcb->app_priority[app];
+                               /* skip empty slots */
+                               if (ap->protocolid == 0)
+                                       continue;
+                               napps++;
+
+                               if (ap->sel_field < ARRAY_SIZE(sel_names))
+                                       sel_name = sel_names[ap->sel_field];
+                               else
+                                       sel_name = "UNKNOWN";
+
+                               seq_printf(seq, "  %3d    %#04x        %-10s (%d)    %#06x (%d)\n",
+                                          app,
+                                          ap->user_prio_map,
+                                          sel_name, ap->sel_field,
+                                          ap->protocolid, ap->protocolid);
+                       }
+                       if (napps == 0)
+                               seq_puts(seq, "    --- None ---\n");
+               }
+       }
+       return 0;
+}
+
+static inline void *dcb_info_get_idx(struct adapter *adap, loff_t pos)
+{
+       return (pos <= adap->params.nports
+               ? (void *)((uintptr_t)pos + 1)
+               : NULL);
+}
+
+static void *dcb_info_start(struct seq_file *seq, loff_t *pos)
+{
+       struct adapter *adap = seq->private;
+
+       return (*pos
+               ? dcb_info_get_idx(adap, *pos)
+               : SEQ_START_TOKEN);
+}
+
+static void dcb_info_stop(struct seq_file *seq, void *v)
+{
+}
+
+static void *dcb_info_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+       struct adapter *adap = seq->private;
+
+       (*pos)++;
+       return dcb_info_get_idx(adap, *pos);
+}
+
+static const struct seq_operations dcb_info_seq_ops = {
+       .start = dcb_info_start,
+       .next  = dcb_info_next,
+       .stop  = dcb_info_stop,
+       .show  = dcb_info_show
+};
+
+static int dcb_info_open(struct inode *inode, struct file *file)
+{
+       int res = seq_open(file, &dcb_info_seq_ops);
+
+       if (!res) {
+               struct seq_file *seq = file->private_data;
+
+               seq->private = inode->i_private;
+       }
+       return res;
+}
+
+static const struct file_operations dcb_info_debugfs_fops = {
+       .owner   = THIS_MODULE,
+       .open    = dcb_info_open,
+       .read    = seq_read,
+       .llseek  = seq_lseek,
+       .release = seq_release,
+};
+#endif /* CONFIG_CHELSIO_T4_DCB */
+
 static int resources_show(struct seq_file *seq, void *v)
 {
        struct adapter *adapter = seq->private;
@@ -3435,6 +3625,9 @@ int t4_setup_debugfs(struct adapter *adap)
                { "rss_pf_config", &rss_pf_config_debugfs_fops, 0400, 0 },
                { "rss_vf_config", &rss_vf_config_debugfs_fops, 0400, 0 },
                { "resources", &resources_debugfs_fops, 0400, 0 },
+#ifdef CONFIG_CHELSIO_T4_DCB
+               { "dcb_info", &dcb_info_debugfs_fops, 0400, 0 },
+#endif
                { "sge_qinfo", &sge_qinfo_debugfs_fops, 0400, 0 },
                { "ibq_tp0",  &cim_ibq_fops, 0400, 0 },
                { "ibq_tp1",  &cim_ibq_fops, 0400, 1 },
index 0f7ce71205e6846a29041ea5fdd26b54bc6264e8..961e3087d1d38c3294c5e49c0fab9f1922bf15b9 100644 (file)
@@ -267,7 +267,7 @@ static void dcb_tx_queue_prio_enable(struct net_device *dev, int enable)
        }
 }
 
-static int cxgb4_dcb_enabled(const struct net_device *dev)
+int cxgb4_dcb_enabled(const struct net_device *dev)
 {
        struct port_info *pi = netdev_priv(dev);
 
@@ -5144,17 +5144,9 @@ static void print_port_info(const struct net_device *dev)
 {
        char buf[80];
        char *bufp = buf;
-       const char *spd = "";
        const struct port_info *pi = netdev_priv(dev);
        const struct adapter *adap = pi->adapter;
 
-       if (adap->params.pci.speed == PCI_EXP_LNKSTA_CLS_2_5GB)
-               spd = " 2.5 GT/s";
-       else if (adap->params.pci.speed == PCI_EXP_LNKSTA_CLS_5_0GB)
-               spd = " 5 GT/s";
-       else if (adap->params.pci.speed == PCI_EXP_LNKSTA_CLS_8_0GB)
-               spd = " 8 GT/s";
-
        if (pi->link_cfg.pcaps & FW_PORT_CAP32_SPEED_100M)
                bufp += sprintf(bufp, "100M/");
        if (pi->link_cfg.pcaps & FW_PORT_CAP32_SPEED_1G)
@@ -5658,6 +5650,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 #ifdef CONFIG_CHELSIO_T4_DCB
                netdev->dcbnl_ops = &cxgb4_dcb_ops;
                cxgb4_dcb_state_init(netdev);
+               cxgb4_dcb_version_init(netdev);
 #endif
                cxgb4_set_ethtool_ops(netdev);
        }
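
Dropping static from cxgb4_dcb_enabled() is what lets the debugfs code above query per-port DCB state from another compilation unit; the prototype is expected to be exported through a shared driver header. A sketch of the kind of caller this enables (the helper below is hypothetical, not part of the patch):

    /* prototype assumed to be exported via a cxgb4 driver header */
    int cxgb4_dcb_enabled(const struct net_device *dev);

    /* hypothetical caller, shown only to illustrate the new linkage */
    static void example_note_dcb_state(struct seq_file *seq,
                                       struct net_device *dev)
    {
            seq_printf(seq, "%s: DCB %s\n", dev->name,
                       cxgb4_dcb_enabled(dev) ? "enabled" : "disabled");
    }
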
index 4eb15ceddca3889c1f57c6210a175f4d14875c19..a844296135b410c827e778a19506d2a1bd028aaa 100644 (file)
@@ -36,8 +36,8 @@
 #define __T4FW_VERSION_H__
 
 #define T4FW_VERSION_MAJOR 0x01
-#define T4FW_VERSION_MINOR 0x13
-#define T4FW_VERSION_MICRO 0x01
+#define T4FW_VERSION_MINOR 0x14
+#define T4FW_VERSION_MICRO 0x08
 #define T4FW_VERSION_BUILD 0x00
 
 #define T4FW_MIN_VERSION_MAJOR 0x01
@@ -45,8 +45,8 @@
 #define T4FW_MIN_VERSION_MICRO 0x00
 
 #define T5FW_VERSION_MAJOR 0x01
-#define T5FW_VERSION_MINOR 0x13
-#define T5FW_VERSION_MICRO 0x01
+#define T5FW_VERSION_MINOR 0x14
+#define T5FW_VERSION_MICRO 0x08
 #define T5FW_VERSION_BUILD 0x00
 
 #define T5FW_MIN_VERSION_MAJOR 0x00
@@ -54,8 +54,8 @@
 #define T5FW_MIN_VERSION_MICRO 0x00
 
 #define T6FW_VERSION_MAJOR 0x01
-#define T6FW_VERSION_MINOR 0x13
-#define T6FW_VERSION_MICRO 0x01
+#define T6FW_VERSION_MINOR 0x14
+#define T6FW_VERSION_MICRO 0x08
 #define T6FW_VERSION_BUILD 0x00
 
 #define T6FW_MIN_VERSION_MAJOR 0x00
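
The firmware version bump (minor 0x13 to 0x14, micro 0x01 to 0x08 for T4, T5 and T6 alike) only raises the version the driver treats as current; the *_MIN_VERSION_* floors stay as they were. As a rough illustration of how such component macros are typically packed for numeric comparison (the macro below is illustrative, not the driver's own, and the minor floor is assumed to be 0x00):

    /* illustrative packing, assuming each component fits in one byte */
    #define EXAMPLE_FW_VER(maj, min, mic, bld) \
            (((maj) << 24) | ((min) << 16) | ((mic) << 8) | (bld))

    /* e.g. the new T4 firmware version packs to 0x01140800 */
    static inline int example_t4fw_meets_minimum(unsigned int running)
    {
            return running >= EXAMPLE_FW_VER(T4FW_MIN_VERSION_MAJOR,
                                             0x00, /* minor floor, assumed */
                                             T4FW_MIN_VERSION_MICRO, 0x00);
    }
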
index b4853ec9de8d417aed37f5dfb09ede2581e42e2f..8cf794edd3c3116913679036ed9e427478b50bbf 100644 (file)
@@ -1,7 +1,7 @@
 config BE2NET
        tristate "ServerEngines' 10Gbps NIC - BladeEngine"
        depends on PCI
-       ---help---
+       help
          This driver implements the NIC functionality for ServerEngines'
          10Gbps network adapter - BladeEngine.
 
@@ -10,6 +10,42 @@ config BE2NET_HWMON
        depends on BE2NET && HWMON
        depends on !(BE2NET=y && HWMON=m)
        default y
-       ---help---
+       help
          Say Y here if you want to expose thermal sensor data on
          be2net network adapter.
+
+config BE2NET_BE2
+       bool "Support for BE2 chipsets"
+       depends on BE2NET
+       default y
+       help
+         Say Y here if you want to use devices based on BE2
+         chipsets. (e.g. OneConnect OCe10xxx)
+
+config BE2NET_BE3
+       bool "Support for BE3 chipsets"
+       depends on BE2NET
+       default y
+       help
+         Say Y here if you want to use devices based on BE3
+         chipsets. (e.g. OneConnect OCe11xxx)
+
+config BE2NET_LANCER
+       bool "Support for Lancer chipsets"
+       depends on BE2NET
+       default y
+       help
+         Say Y here if you want to use devices based on Lancer
+         chipsets. (e.g LightPulse LPe12xxx)
+
+config BE2NET_SKYHAWK
+       bool "Support for Skyhawk chipsets"
+       depends on BE2NET
+       default y
+       help
+         Say Y here if you want to use devices based on Skyhawk
+         chipsets. (e.g. OneConnect OCe14xxx)
+
+comment "WARNING: be2net is useless without any enabled chip"
+        depends on BE2NET_BE2=n && BE2NET_BE3=n && BE2NET_LANCER=n && \
+       BE2NET_SKYHAWK=n && BE2NET
index d80fe03d31070dcc56a53854698e42d3c04fd87a..58bcee8f0a58b700593d44bc3638e2ed9a2ebfb2 100644 (file)
@@ -752,17 +752,33 @@ static inline u16 be_max_any_irqs(struct be_adapter *adapter)
 /* Is BE in QNQ multi-channel mode */
 #define be_is_qnq_mode(adapter)                (adapter->function_mode & QNQ_MODE)
 
+#ifdef CONFIG_BE2NET_LANCER
 #define lancer_chip(adapter)   (adapter->pdev->device == OC_DEVICE_ID3 || \
                                 adapter->pdev->device == OC_DEVICE_ID4)
+#else
+#define lancer_chip(adapter)   (0)
+#endif /* CONFIG_BE2NET_LANCER */
 
+#ifdef CONFIG_BE2NET_SKYHAWK
 #define skyhawk_chip(adapter)  (adapter->pdev->device == OC_DEVICE_ID5 || \
                                 adapter->pdev->device == OC_DEVICE_ID6)
+#else
+#define skyhawk_chip(adapter)  (0)
+#endif /* CONFIG_BE2NET_SKYHAWK */
 
+#ifdef CONFIG_BE2NET_BE3
 #define BE3_chip(adapter)      (adapter->pdev->device == BE_DEVICE_ID2 || \
                                 adapter->pdev->device == OC_DEVICE_ID2)
+#else
+#define BE3_chip(adapter)      (0)
+#endif /* CONFIG_BE2NET_BE3 */
 
+#ifdef CONFIG_BE2NET_BE2
 #define BE2_chip(adapter)      (adapter->pdev->device == BE_DEVICE_ID1 || \
                                 adapter->pdev->device == OC_DEVICE_ID1)
+#else
+#define BE2_chip(adapter)      (0)
+#endif /* CONFIG_BE2NET_BE2 */
 
 #define BEx_chip(adapter)      (BE3_chip(adapter) || BE2_chip(adapter))
 
index d0b9415d9ae71615e06cf6b1a9e8dc1d49beee80..74d122616e76a2d876793a610dd6672e3e8b9910 100644 (file)
@@ -47,14 +47,22 @@ MODULE_PARM_DESC(rx_frag_size, "Size of a fragment that holds rcvd data.");
 static struct workqueue_struct *be_err_recovery_workq;
 
 static const struct pci_device_id be_dev_ids[] = {
+#ifdef CONFIG_BE2NET_BE2
        { PCI_DEVICE(BE_VENDOR_ID, BE_DEVICE_ID1) },
-       { PCI_DEVICE(BE_VENDOR_ID, BE_DEVICE_ID2) },
        { PCI_DEVICE(BE_VENDOR_ID, OC_DEVICE_ID1) },
+#endif /* CONFIG_BE2NET_BE2 */
+#ifdef CONFIG_BE2NET_BE3
+       { PCI_DEVICE(BE_VENDOR_ID, BE_DEVICE_ID2) },
        { PCI_DEVICE(BE_VENDOR_ID, OC_DEVICE_ID2) },
+#endif /* CONFIG_BE2NET_BE3 */
+#ifdef CONFIG_BE2NET_LANCER
        { PCI_DEVICE(EMULEX_VENDOR_ID, OC_DEVICE_ID3)},
        { PCI_DEVICE(EMULEX_VENDOR_ID, OC_DEVICE_ID4)},
+#endif /* CONFIG_BE2NET_LANCER */
+#ifdef CONFIG_BE2NET_SKYHAWK
        { PCI_DEVICE(EMULEX_VENDOR_ID, OC_DEVICE_ID5)},
        { PCI_DEVICE(EMULEX_VENDOR_ID, OC_DEVICE_ID6)},
+#endif /* CONFIG_BE2NET_SKYHAWK */
        { 0 }
 };
 MODULE_DEVICE_TABLE(pci, be_dev_ids);
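
The same configuration gates the PCI ID table, so a disabled family's devices are simply never matched and probe() is never entered for them. For context, the generic shape of how such a table is consumed (names below are placeholders, not be2net's actual registration):

    static int example_probe(struct pci_dev *pdev,
                             const struct pci_device_id *id)
    {
            /* reached only for IDs still compiled into be_dev_ids */
            return 0;
    }

    static struct pci_driver example_pci_driver = {
            .name     = "be2net_example",
            .id_table = be_dev_ids,
            .probe    = example_probe,
    };
    module_pci_driver(example_pci_driver);
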
index 55c2a56c5dae52178af62a0ab7d97f6ed054a3cf..bc80a678abc30e22910400c5c39f847317aac8ec 100644 (file)
@@ -4107,13 +4107,18 @@ static int  mvneta_config_rss(struct mvneta_port *pp)
 
        on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
 
-       /* We have to synchronise on the napi of each CPU */
-       for_each_online_cpu(cpu) {
-               struct mvneta_pcpu_port *pcpu_port =
-                       per_cpu_ptr(pp->ports, cpu);
+       if (!pp->neta_armada3700) {
+               /* We have to synchronise on the napi of each CPU */
+               for_each_online_cpu(cpu) {
+                       struct mvneta_pcpu_port *pcpu_port =
+                               per_cpu_ptr(pp->ports, cpu);
 
-               napi_synchronize(&pcpu_port->napi);
-               napi_disable(&pcpu_port->napi);
+                       napi_synchronize(&pcpu_port->napi);
+                       napi_disable(&pcpu_port->napi);
+               }
+       } else {
+               napi_synchronize(&pp->napi);
+               napi_disable(&pp->napi);
        }
 
        pp->rxq_def = pp->indir[0];
@@ -4130,12 +4135,16 @@ static int  mvneta_config_rss(struct mvneta_port *pp)
        mvneta_percpu_elect(pp);
        spin_unlock(&pp->lock);
 
-       /* We have to synchronise on the napi of each CPU */
-       for_each_online_cpu(cpu) {
-               struct mvneta_pcpu_port *pcpu_port =
-                       per_cpu_ptr(pp->ports, cpu);
+       if (!pp->neta_armada3700) {
+               /* We have to synchronise on the napi of each CPU */
+               for_each_online_cpu(cpu) {
+                       struct mvneta_pcpu_port *pcpu_port =
+                               per_cpu_ptr(pp->ports, cpu);
 
-               napi_enable(&pcpu_port->napi);
+                       napi_enable(&pcpu_port->napi);
+               }
+       } else {
+               napi_enable(&pp->napi);
        }
 
        netif_tx_start_all_queues(pp->dev);
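
Armada 3700 based ports use a single shared NAPI context (pp->napi) rather than the per-CPU instances in pp->ports, so mvneta_config_rss() now selects the right set both when quiescing and when re-enabling. The same selection, factored into one helper purely for illustration:

    /* illustrative helper mirroring the pattern used twice above */
    static void example_mvneta_napi_disable(struct mvneta_port *pp)
    {
            int cpu;

            if (pp->neta_armada3700) {
                    napi_synchronize(&pp->napi);
                    napi_disable(&pp->napi);
                    return;
            }

            for_each_online_cpu(cpu) {
                    struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);

                    napi_synchronize(&port->napi);
                    napi_disable(&port->napi);
            }
    }
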
index b5d5e57f1583651fcf0ada0d632fa30c0706feba..81533d7f395c14b4e200df93785acdd45ed89c90 100644 (file)
@@ -726,15 +726,6 @@ static struct mlxsw_driver *mlxsw_core_driver_get(const char *kind)
        return mlxsw_driver;
 }
 
-static void mlxsw_core_driver_put(const char *kind)
-{
-       struct mlxsw_driver *mlxsw_driver;
-
-       spin_lock(&mlxsw_core_driver_list_lock);
-       mlxsw_driver = __driver_find(kind);
-       spin_unlock(&mlxsw_core_driver_list_lock);
-}
-
 static int mlxsw_devlink_port_split(struct devlink *devlink,
                                    unsigned int port_index,
                                    unsigned int count,
@@ -1082,7 +1073,6 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
        if (!reload)
                devlink_free(devlink);
 err_devlink_alloc:
-       mlxsw_core_driver_put(device_kind);
        return err;
 }
 EXPORT_SYMBOL(mlxsw_core_bus_device_register);
@@ -1090,7 +1080,6 @@ EXPORT_SYMBOL(mlxsw_core_bus_device_register);
 void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
                                      bool reload)
 {
-       const char *device_kind = mlxsw_core->bus_info->device_kind;
        struct devlink *devlink = priv_to_devlink(mlxsw_core);
 
        if (mlxsw_core->reload_fail)
@@ -1111,7 +1100,6 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
                return;
 reload_fail:
        devlink_free(devlink);
-       mlxsw_core_driver_put(device_kind);
 }
 EXPORT_SYMBOL(mlxsw_core_bus_device_unregister);
 
index bd255417751b134427cd69f41354a52cfba0f988..798bd5aca38470073d7b47f70942fc047c861f2b 100644 (file)
@@ -15,8 +15,6 @@
 #include "core.h"
 #include "i2c.h"
 
-static const char mlxsw_i2c_driver_name[] = "mlxsw_i2c";
-
 #define MLXSW_I2C_CIR2_BASE            0x72000
 #define MLXSW_I2C_CIR_STATUS_OFF       0x18
 #define MLXSW_I2C_CIR2_OFF_STATUS      (MLXSW_I2C_CIR2_BASE + \
index 4bec4f6b80ca8785167e0b57bb279e956c964fab..4d271fb3de3d2ba6f556f9869cb3720ce7a8f43c 100644 (file)
@@ -22,8 +22,6 @@
 #include "port.h"
 #include "resources.h"
 
-static const char mlxsw_pci_driver_name[] = "mlxsw_pci";
-
 #define mlxsw_pci_write32(mlxsw_pci, reg, val) \
        iowrite32be(val, (mlxsw_pci)->hw_addr + (MLXSW_PCI_ ## reg))
 #define mlxsw_pci_read32(mlxsw_pci, reg) \
index 75e5316ffebd7f7b2afaa19c778659c9bb02d330..ebd1b24ebaa5dd48df775d9a0704bd81b8ed09df 100644 (file)
@@ -514,10 +514,9 @@ int mlxsw_sp_flower_tmplt_create(struct mlxsw_sp *mlxsw_sp,
                                           f->common.chain_index,
                                           MLXSW_SP_ACL_PROFILE_FLOWER,
                                           &rulei.values.elusage);
-       if (IS_ERR(ruleset))
-               return PTR_ERR(ruleset);
+
        /* keep the reference to the ruleset */
-       return 0;
+       return PTR_ERR_OR_ZERO(ruleset);
 }
 
 void mlxsw_sp_flower_tmplt_destroy(struct mlxsw_sp *mlxsw_sp,
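
PTR_ERR_OR_ZERO() collapses the IS_ERR()/PTR_ERR() check and the success return into one expression without changing behaviour; the ruleset reference is still intentionally kept on success. Its behaviour is roughly:

    /* roughly what PTR_ERR_OR_ZERO() evaluates to, shown for clarity */
    static inline int example_ptr_err_or_zero(const void *ptr)
    {
            if (IS_ERR(ptr))
                    return PTR_ERR(ptr);
            return 0;
    }
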
index 43f47cb45fe2cee55e372d2381cd5be8a8767284..538926d2b43fb88ba8a09ae14f9bc32b24985e03 100644 (file)
@@ -6,4 +6,4 @@ obj-$(CONFIG_ENC28J60) += enc28j60.o
 obj-$(CONFIG_ENCX24J600) += encx24j600.o encx24j600-regmap.o
 obj-$(CONFIG_LAN743X) += lan743x.o
 
-lan743x-objs := lan743x_main.o lan743x_ethtool.o
+lan743x-objs := lan743x_main.o lan743x_ethtool.o lan743x_ptp.o
index c25b3e97ae2652584f9c19109a9a0af9ffcf2d58..07c1eb63415a343e08b58892d4196c76fb75d2fc 100644 (file)
@@ -4,6 +4,7 @@
 #include <linux/netdevice.h>
 #include "lan743x_main.h"
 #include "lan743x_ethtool.h"
+#include <linux/net_tstamp.h>
 #include <linux/pci.h>
 #include <linux/phy.h>
 
@@ -542,6 +543,31 @@ static int lan743x_ethtool_set_rxfh(struct net_device *netdev,
        return 0;
 }
 
+static int lan743x_ethtool_get_ts_info(struct net_device *netdev,
+                                      struct ethtool_ts_info *ts_info)
+{
+       struct lan743x_adapter *adapter = netdev_priv(netdev);
+
+       ts_info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE |
+                                  SOF_TIMESTAMPING_RX_SOFTWARE |
+                                  SOF_TIMESTAMPING_SOFTWARE |
+                                  SOF_TIMESTAMPING_TX_HARDWARE |
+                                  SOF_TIMESTAMPING_RX_HARDWARE |
+                                  SOF_TIMESTAMPING_RAW_HARDWARE;
+
+       if (adapter->ptp.ptp_clock)
+               ts_info->phc_index = ptp_clock_index(adapter->ptp.ptp_clock);
+       else
+               ts_info->phc_index = -1;
+
+       ts_info->tx_types = BIT(HWTSTAMP_TX_OFF) |
+                           BIT(HWTSTAMP_TX_ON) |
+                           BIT(HWTSTAMP_TX_ONESTEP_SYNC);
+       ts_info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) |
+                             BIT(HWTSTAMP_FILTER_ALL);
+       return 0;
+}
+
 static int lan743x_ethtool_get_eee(struct net_device *netdev,
                                   struct ethtool_eee *eee)
 {
@@ -685,6 +711,7 @@ const struct ethtool_ops lan743x_ethtool_ops = {
        .get_rxfh_indir_size = lan743x_ethtool_get_rxfh_indir_size,
        .get_rxfh = lan743x_ethtool_get_rxfh,
        .set_rxfh = lan743x_ethtool_set_rxfh,
+       .get_ts_info = lan743x_ethtool_get_ts_info,
        .get_eee = lan743x_ethtool_get_eee,
        .set_eee = lan743x_ethtool_set_eee,
        .get_link_ksettings = phy_ethtool_get_link_ksettings,
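
The new get_ts_info hook is what ethtool -T and PTP tooling query to discover timestamping capabilities and the PHC index backing the interface. A minimal userspace sketch of the same query through the ethtool ioctl (the interface name is an example, error handling kept short):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
            struct ethtool_ts_info info = { .cmd = ETHTOOL_GET_TS_INFO };
            struct ifreq ifr = { 0 };
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* example name */
            ifr.ifr_data = (char *)&info;
            if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
                    perror("ETHTOOL_GET_TS_INFO");
                    return 1;
            }
            printf("phc index: %d, tx types: 0x%x, rx filters: 0x%x\n",
                   info.phc_index, info.tx_types, info.rx_filters);
            close(fd);
            return 0;
    }
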
index bb323f2698392d2af220d4a6b29b9e37d1a38939..e7dce79ff2c910a090de7e55bf3e3efae3e0f2b7 100644 (file)
@@ -267,6 +267,10 @@ static void lan743x_intr_shared_isr(void *context, u32 int_sts, u32 flags)
                        lan743x_intr_software_isr(adapter);
                        int_sts &= ~INT_BIT_SW_GP_;
                }
+               if (int_sts & INT_BIT_1588_) {
+                       lan743x_ptp_isr(adapter);
+                       int_sts &= ~INT_BIT_1588_;
+               }
        }
        if (int_sts)
                lan743x_csr_write(adapter, INT_EN_CLR, int_sts);
@@ -976,6 +980,7 @@ static void lan743x_phy_link_status_change(struct net_device *netdev)
                                               ksettings.base.duplex,
                                               local_advertisement,
                                               remote_advertisement);
+               lan743x_ptp_update_latency(adapter, ksettings.base.speed);
        }
 }
 
@@ -1226,6 +1231,7 @@ static void lan743x_tx_release_desc(struct lan743x_tx *tx,
        struct lan743x_tx_buffer_info *buffer_info = NULL;
        struct lan743x_tx_descriptor *descriptor = NULL;
        u32 descriptor_type = 0;
+       bool ignore_sync;
 
        descriptor = &tx->ring_cpu_ptr[descriptor_index];
        buffer_info = &tx->buffer_info[descriptor_index];
@@ -1256,11 +1262,27 @@ static void lan743x_tx_release_desc(struct lan743x_tx *tx,
                buffer_info->dma_ptr = 0;
                buffer_info->buffer_length = 0;
        }
-       if (buffer_info->skb) {
+       if (!buffer_info->skb)
+               goto clear_active;
+
+       if (!(buffer_info->flags & TX_BUFFER_INFO_FLAG_TIMESTAMP_REQUESTED)) {
                dev_kfree_skb(buffer_info->skb);
-               buffer_info->skb = NULL;
+               goto clear_skb;
+       }
+
+       if (cleanup) {
+               lan743x_ptp_unrequest_tx_timestamp(tx->adapter);
+               dev_kfree_skb(buffer_info->skb);
+       } else {
+               ignore_sync = (buffer_info->flags &
+                              TX_BUFFER_INFO_FLAG_IGNORE_SYNC) != 0;
+               lan743x_ptp_tx_timestamp_skb(tx->adapter,
+                                            buffer_info->skb, ignore_sync);
        }
 
+clear_skb:
+       buffer_info->skb = NULL;
+
 clear_active:
        buffer_info->flags &= ~TX_BUFFER_INFO_FLAG_ACTIVE;
 
@@ -1321,10 +1343,25 @@ static int lan743x_tx_get_avail_desc(struct lan743x_tx *tx)
                return last_head - last_tail - 1;
 }
 
+void lan743x_tx_set_timestamping_mode(struct lan743x_tx *tx,
+                                     bool enable_timestamping,
+                                     bool enable_onestep_sync)
+{
+       if (enable_timestamping)
+               tx->ts_flags |= TX_TS_FLAG_TIMESTAMPING_ENABLED;
+       else
+               tx->ts_flags &= ~TX_TS_FLAG_TIMESTAMPING_ENABLED;
+       if (enable_onestep_sync)
+               tx->ts_flags |= TX_TS_FLAG_ONE_STEP_SYNC;
+       else
+               tx->ts_flags &= ~TX_TS_FLAG_ONE_STEP_SYNC;
+}
+
 static int lan743x_tx_frame_start(struct lan743x_tx *tx,
                                  unsigned char *first_buffer,
                                  unsigned int first_buffer_length,
                                  unsigned int frame_length,
+                                 bool time_stamp,
                                  bool check_sum)
 {
        /* called only from within lan743x_tx_xmit_frame.
@@ -1362,6 +1399,8 @@ static int lan743x_tx_frame_start(struct lan743x_tx *tx,
                TX_DESC_DATA0_DTYPE_DATA_ |
                TX_DESC_DATA0_FS_ |
                TX_DESC_DATA0_FCS_;
+       if (time_stamp)
+               tx->frame_data0 |= TX_DESC_DATA0_TSE_;
 
        if (check_sum)
                tx->frame_data0 |= TX_DESC_DATA0_ICE_ |
@@ -1475,6 +1514,7 @@ static int lan743x_tx_frame_add_fragment(struct lan743x_tx *tx,
 
 static void lan743x_tx_frame_end(struct lan743x_tx *tx,
                                 struct sk_buff *skb,
+                                bool time_stamp,
                                 bool ignore_sync)
 {
        /* called only from within lan743x_tx_xmit_frame
@@ -1492,6 +1532,8 @@ static void lan743x_tx_frame_end(struct lan743x_tx *tx,
        tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail];
        buffer_info = &tx->buffer_info[tx->frame_tail];
        buffer_info->skb = skb;
+       if (time_stamp)
+               buffer_info->flags |= TX_BUFFER_INFO_FLAG_TIMESTAMP_REQUESTED;
        if (ignore_sync)
                buffer_info->flags |= TX_BUFFER_INFO_FLAG_IGNORE_SYNC;
 
@@ -1520,6 +1562,7 @@ static netdev_tx_t lan743x_tx_xmit_frame(struct lan743x_tx *tx,
        unsigned int frame_length = 0;
        unsigned int head_length = 0;
        unsigned long irq_flags = 0;
+       bool do_timestamp = false;
        bool ignore_sync = false;
        int nr_frags = 0;
        bool gso = false;
@@ -1541,6 +1584,14 @@ static netdev_tx_t lan743x_tx_xmit_frame(struct lan743x_tx *tx,
        }
 
        /* space available, transmit skb  */
+       if ((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+           (tx->ts_flags & TX_TS_FLAG_TIMESTAMPING_ENABLED) &&
+           (lan743x_ptp_request_tx_timestamp(tx->adapter))) {
+               skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+               do_timestamp = true;
+               if (tx->ts_flags & TX_TS_FLAG_ONE_STEP_SYNC)
+                       ignore_sync = true;
+       }
        head_length = skb_headlen(skb);
        frame_length = skb_pagelen(skb);
        nr_frags = skb_shinfo(skb)->nr_frags;
@@ -1554,6 +1605,7 @@ static netdev_tx_t lan743x_tx_xmit_frame(struct lan743x_tx *tx,
        if (lan743x_tx_frame_start(tx,
                                   skb->data, head_length,
                                   start_frame_length,
+                                  do_timestamp,
                                   skb->ip_summed == CHECKSUM_PARTIAL)) {
                dev_kfree_skb(skb);
                goto unlock;
@@ -1581,7 +1633,7 @@ static netdev_tx_t lan743x_tx_xmit_frame(struct lan743x_tx *tx,
        }
 
 finish:
-       lan743x_tx_frame_end(tx, skb, ignore_sync);
+       lan743x_tx_frame_end(tx, skb, do_timestamp, ignore_sync);
 
 unlock:
        spin_unlock_irqrestore(&tx->ring_lock, irq_flags);
@@ -2410,6 +2462,8 @@ static int lan743x_netdev_close(struct net_device *netdev)
        for (index = 0; index < LAN743X_USED_RX_CHANNELS; index++)
                lan743x_rx_close(&adapter->rx[index]);
 
+       lan743x_ptp_close(adapter);
+
        lan743x_phy_close(adapter);
 
        lan743x_mac_close(adapter);
@@ -2437,6 +2491,10 @@ static int lan743x_netdev_open(struct net_device *netdev)
        if (ret)
                goto close_mac;
 
+       ret = lan743x_ptp_open(adapter);
+       if (ret)
+               goto close_phy;
+
        lan743x_rfe_open(adapter);
 
        for (index = 0; index < LAN743X_USED_RX_CHANNELS; index++) {
@@ -2456,6 +2514,9 @@ static int lan743x_netdev_open(struct net_device *netdev)
                if (adapter->rx[index].ring_cpu_ptr)
                        lan743x_rx_close(&adapter->rx[index]);
        }
+       lan743x_ptp_close(adapter);
+
+close_phy:
        lan743x_phy_close(adapter);
 
 close_mac:
@@ -2483,6 +2544,8 @@ static int lan743x_netdev_ioctl(struct net_device *netdev,
 {
        if (!netif_running(netdev))
                return -EINVAL;
+       if (cmd == SIOCSHWTSTAMP)
+               return lan743x_ptp_ioctl(netdev, ifr, cmd);
        return phy_mii_ioctl(netdev->phydev, ifr, cmd);
 }
 
@@ -2607,6 +2670,11 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter,
        adapter->intr.irq = adapter->pdev->irq;
        lan743x_csr_write(adapter, INT_EN_CLR, 0xFFFFFFFF);
        mutex_init(&adapter->dp_lock);
+
+       ret = lan743x_gpio_init(adapter);
+       if (ret)
+               return ret;
+
        ret = lan743x_mac_init(adapter);
        if (ret)
                return ret;
@@ -2615,6 +2683,10 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter,
        if (ret)
                return ret;
 
+       ret = lan743x_ptp_init(adapter);
+       if (ret)
+               return ret;
+
        lan743x_rfe_update_mac_address(adapter);
 
        ret = lan743x_dmac_init(adapter);
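
On transmit, a hardware timestamp slot is reserved per skb (lan743x_ptp_request_tx_timestamp()), the first descriptor gets TX_DESC_DATA0_TSE_ set, and on completion the skb is handed to lan743x_ptp_tx_timestamp_skb() instead of being freed directly; SIOCSHWTSTAMP requests are now routed to lan743x_ptp_ioctl(). A minimal sketch of how userspace turns this on (interface name is an example, error handling trimmed):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/net_tstamp.h>
    #include <linux/sockios.h>

    /* Sketch: request hardware TX timestamps and all-packet RX filtering
     * on a socket fd bound to the given interface.
     */
    static int example_enable_hwtstamp(int fd, const char *ifname)
    {
            struct hwtstamp_config cfg = {
                    .tx_type   = HWTSTAMP_TX_ON,
                    .rx_filter = HWTSTAMP_FILTER_ALL,
            };
            struct ifreq ifr = { 0 };

            strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
            ifr.ifr_data = (char *)&cfg;
            return ioctl(fd, SIOCSHWTSTAMP, &ifr);
    }
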
index 4fa7a5e027f44be6e60a60d46b198e7fcd9d61fe..0e82b6368798a2cf02cfef922b4feffa2ff779d1 100644 (file)
@@ -4,12 +4,17 @@
 #ifndef _LAN743X_H
 #define _LAN743X_H
 
+#include "lan743x_ptp.h"
+
 #define DRIVER_AUTHOR   "Bryan Whitehead <Bryan.Whitehead@microchip.com>"
 #define DRIVER_DESC "LAN743x PCIe Gigabit Ethernet Driver"
 #define DRIVER_NAME "lan743x"
 
 /* Register Definitions */
 #define ID_REV                         (0x00)
+#define ID_REV_ID_MASK_                        (0xFFFF0000)
+#define ID_REV_ID_LAN7430_             (0x74300000)
+#define ID_REV_ID_LAN7431_             (0x74310000)
 #define ID_REV_IS_VALID_CHIP_ID_(id_rev)       \
        (((id_rev) & 0xFFF00000) == 0x74300000)
 #define ID_REV_CHIP_REV_MASK_          (0x0000FFFF)
 
 #define E2P_DATA                       (0x044)
 
+#define GPIO_CFG0                      (0x050)
+#define GPIO_CFG0_GPIO_DIR_BIT_(bit)   BIT(16 + (bit))
+#define GPIO_CFG0_GPIO_DATA_BIT_(bit)  BIT(0 + (bit))
+
+#define GPIO_CFG1                      (0x054)
+#define GPIO_CFG1_GPIOEN_BIT_(bit)     BIT(16 + (bit))
+#define GPIO_CFG1_GPIOBUF_BIT_(bit)    BIT(0 + (bit))
+
+#define GPIO_CFG2                      (0x058)
+#define GPIO_CFG2_1588_POL_BIT_(bit)   BIT(0 + (bit))
+
+#define GPIO_CFG3                      (0x05C)
+#define GPIO_CFG3_1588_CH_SEL_BIT_(bit)        BIT(16 + (bit))
+#define GPIO_CFG3_1588_OE_BIT_(bit)    BIT(0 + (bit))
+
 #define FCT_RX_CTL                     (0xAC)
 #define FCT_RX_CTL_EN_(channel)                BIT(28 + (channel))
 #define FCT_RX_CTL_DIS_(channel)       BIT(24 + (channel))
 #define INT_BIT_DMA_TX_(channel)       BIT(16 + (channel))
 #define INT_BIT_ALL_TX_                        (0x000F0000)
 #define INT_BIT_SW_GP_                 BIT(9)
-#define INT_BIT_ALL_OTHER_             (0x00000280)
+#define INT_BIT_1588_                  BIT(7)
+#define INT_BIT_ALL_OTHER_             (INT_BIT_SW_GP_ | INT_BIT_1588_)
 #define INT_BIT_MAS_                   BIT(0)
 
 #define INT_SET                                (0x784)
 #define INT_MOD_CFG6                   (0x7D8)
 #define INT_MOD_CFG7                   (0x7DC)
 
+#define PTP_CMD_CTL                                    (0x0A00)
+#define PTP_CMD_CTL_PTP_CLK_STP_NSEC_                  BIT(6)
+#define PTP_CMD_CTL_PTP_CLOCK_STEP_SEC_                        BIT(5)
+#define PTP_CMD_CTL_PTP_CLOCK_LOAD_                    BIT(4)
+#define PTP_CMD_CTL_PTP_CLOCK_READ_                    BIT(3)
+#define PTP_CMD_CTL_PTP_ENABLE_                                BIT(2)
+#define PTP_CMD_CTL_PTP_DISABLE_                       BIT(1)
+#define PTP_CMD_CTL_PTP_RESET_                         BIT(0)
+#define PTP_GENERAL_CONFIG                             (0x0A04)
+#define PTP_GENERAL_CONFIG_CLOCK_EVENT_X_MASK_(channel) \
+       (0x7 << (1 + ((channel) << 2)))
+#define PTP_GENERAL_CONFIG_CLOCK_EVENT_100NS_  (0)
+#define PTP_GENERAL_CONFIG_CLOCK_EVENT_10US_   (1)
+#define PTP_GENERAL_CONFIG_CLOCK_EVENT_100US_  (2)
+#define PTP_GENERAL_CONFIG_CLOCK_EVENT_1MS_    (3)
+#define PTP_GENERAL_CONFIG_CLOCK_EVENT_10MS_   (4)
+#define PTP_GENERAL_CONFIG_CLOCK_EVENT_200MS_  (5)
+#define PTP_GENERAL_CONFIG_CLOCK_EVENT_X_SET_(channel, value) \
+       (((value) & 0x7) << (1 + ((channel) << 2)))
+#define PTP_GENERAL_CONFIG_RELOAD_ADD_X_(channel)      (BIT((channel) << 2))
+
+#define PTP_INT_STS                            (0x0A08)
+#define PTP_INT_EN_SET                         (0x0A0C)
+#define PTP_INT_EN_CLR                         (0x0A10)
+#define PTP_INT_BIT_TX_SWTS_ERR_               BIT(13)
+#define PTP_INT_BIT_TX_TS_                     BIT(12)
+#define PTP_INT_BIT_TIMER_B_                   BIT(1)
+#define PTP_INT_BIT_TIMER_A_                   BIT(0)
+
+#define PTP_CLOCK_SEC                          (0x0A14)
+#define PTP_CLOCK_NS                           (0x0A18)
+#define PTP_CLOCK_SUBNS                                (0x0A1C)
+#define PTP_CLOCK_RATE_ADJ                     (0x0A20)
+#define PTP_CLOCK_RATE_ADJ_DIR_                        BIT(31)
+#define PTP_CLOCK_STEP_ADJ                     (0x0A2C)
+#define PTP_CLOCK_STEP_ADJ_DIR_                        BIT(31)
+#define PTP_CLOCK_STEP_ADJ_VALUE_MASK_         (0x3FFFFFFF)
+#define PTP_CLOCK_TARGET_SEC_X(channel)                (0x0A30 + ((channel) << 4))
+#define PTP_CLOCK_TARGET_NS_X(channel)         (0x0A34 + ((channel) << 4))
+#define PTP_CLOCK_TARGET_RELOAD_SEC_X(channel) (0x0A38 + ((channel) << 4))
+#define PTP_CLOCK_TARGET_RELOAD_NS_X(channel)  (0x0A3C + ((channel) << 4))
+#define PTP_LATENCY                            (0x0A5C)
+#define PTP_LATENCY_TX_SET_(tx_latency)                (((u32)(tx_latency)) << 16)
+#define PTP_LATENCY_RX_SET_(rx_latency)                \
+       (((u32)(rx_latency)) & 0x0000FFFF)
+#define PTP_CAP_INFO                           (0x0A60)
+#define PTP_CAP_INFO_TX_TS_CNT_GET_(reg_val)   (((reg_val) & 0x00000070) >> 4)
+
+#define PTP_TX_MOD                             (0x0AA4)
+#define PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_      (0x10000000)
+
+#define PTP_TX_MOD2                            (0x0AA8)
+#define PTP_TX_MOD2_TX_PTP_CLR_UDPV4_CHKSUM_   (0x00000001)
+
+#define PTP_TX_EGRESS_SEC                      (0x0AAC)
+#define PTP_TX_EGRESS_NS                       (0x0AB0)
+#define PTP_TX_EGRESS_NS_CAPTURE_CAUSE_MASK_   (0xC0000000)
+#define PTP_TX_EGRESS_NS_CAPTURE_CAUSE_AUTO_   (0x00000000)
+#define PTP_TX_EGRESS_NS_CAPTURE_CAUSE_SW_     (0x40000000)
+#define PTP_TX_EGRESS_NS_TS_NS_MASK_           (0x3FFFFFFF)
+
+#define PTP_TX_MSG_HEADER                      (0x0AB4)
+#define PTP_TX_MSG_HEADER_MSG_TYPE_            (0x000F0000)
+#define PTP_TX_MSG_HEADER_MSG_TYPE_SYNC_       (0x00000000)
+
 #define DMAC_CFG                               (0xC00)
 #define DMAC_CFG_COAL_EN_                      BIT(16)
 #define DMAC_CFG_CH_ARB_SEL_RX_HIGH_           (0x00000000)
@@ -542,8 +628,12 @@ struct lan743x_tx_buffer_info;
 
 #define TX_FRAME_FLAG_IN_PROGRESS      BIT(0)
 
+#define TX_TS_FLAG_TIMESTAMPING_ENABLED        BIT(0)
+#define TX_TS_FLAG_ONE_STEP_SYNC       BIT(1)
+
 struct lan743x_tx {
        struct lan743x_adapter *adapter;
+       u32     ts_flags;
        u32     vector_flags;
        int     channel_number;
 
@@ -570,6 +660,10 @@ struct lan743x_tx {
        struct sk_buff *overflow_skb;
 };
 
+void lan743x_tx_set_timestamping_mode(struct lan743x_tx *tx,
+                                     bool enable_timestamping,
+                                     bool enable_onestep_sync);
+
 /* RX */
 struct lan743x_rx_descriptor;
 struct lan743x_rx_buffer_info;
@@ -610,6 +704,9 @@ struct lan743x_adapter {
        /* lock, used to prevent concurrent access to data port */
        struct mutex            dp_lock;
 
+       struct lan743x_gpio     gpio;
+       struct lan743x_ptp      ptp;
+
        u8                      mac_address[ETH_ALEN];
 
        struct lan743x_phy      phy;
@@ -660,6 +757,7 @@ struct lan743x_adapter {
 #define TX_DESC_DATA0_IPE_                     (0x00200000)
 #define TX_DESC_DATA0_TPE_                     (0x00100000)
 #define TX_DESC_DATA0_FCS_                     (0x00020000)
+#define TX_DESC_DATA0_TSE_                     (0x00010000)
 #define TX_DESC_DATA0_BUF_LENGTH_MASK_         (0x0000FFFF)
 #define TX_DESC_DATA0_EXT_LSO_                 (0x00200000)
 #define TX_DESC_DATA0_EXT_PAY_LENGTH_MASK_     (0x000FFFFF)
@@ -673,6 +771,7 @@ struct lan743x_tx_descriptor {
 } __aligned(DEFAULT_DMA_DESCRIPTOR_SPACING);
 
 #define TX_BUFFER_INFO_FLAG_ACTIVE             BIT(0)
+#define TX_BUFFER_INFO_FLAG_TIMESTAMP_REQUESTED        BIT(1)
 #define TX_BUFFER_INFO_FLAG_IGNORE_SYNC                BIT(2)
 #define TX_BUFFER_INFO_FLAG_SKB_FRAGMENT       BIT(3)
 struct lan743x_tx_buffer_info {
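
Worth noting for the register block above: PTP_CLOCK_RATE_ADJ uses bit 31 as the direction flag and the lower bits as the magnitude, and both frequency-adjustment paths in the new lan743x_ptp.c below reduce to the same scaling, |fractional frequency offset| * 2^35 (adjfine computes (scaled_ppm << 19) / 10^6, adjfreq computes (ppb << 35) / 10^9). A small worked sketch, inferred from the driver code rather than from a datasheet:

    #include <linux/kernel.h>
    #include <linux/math64.h>

    /* e.g. +1 ppm:
     *   (65536 << 19) / 1000000         == 34359
     *   ((u64)1000 << 35) / 1000000000  == 34359
     */
    static u32 example_rate_adj_from_ppb(s32 delta_ppb)
    {
            u64 magnitude = (u64)abs(delta_ppb) << 35;
            u32 value = div_u64(magnitude, 1000000000);

            if (delta_ppb > 0)
                    value |= PTP_CLOCK_RATE_ADJ_DIR_;
            return value;
    }
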
diff --git a/drivers/net/ethernet/microchip/lan743x_ptp.c b/drivers/net/ethernet/microchip/lan743x_ptp.c
new file mode 100644 (file)
index 0000000..029a2af
--- /dev/null
@@ -0,0 +1,1163 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/* Copyright (C) 2018 Microchip Technology Inc. */
+
+#include <linux/netdevice.h>
+#include "lan743x_main.h"
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/net_tstamp.h>
+
+#include "lan743x_ptp.h"
+
+#define LAN743X_NUMBER_OF_GPIO                 (12)
+#define LAN743X_PTP_MAX_FREQ_ADJ_IN_PPB                (31249999)
+#define LAN743X_PTP_MAX_FINE_ADJ_IN_SCALED_PPM (2047999934)
+
+static bool lan743x_ptp_is_enabled(struct lan743x_adapter *adapter);
+static void lan743x_ptp_enable(struct lan743x_adapter *adapter);
+static void lan743x_ptp_disable(struct lan743x_adapter *adapter);
+static void lan743x_ptp_reset(struct lan743x_adapter *adapter);
+static void lan743x_ptp_clock_set(struct lan743x_adapter *adapter,
+                                 u32 seconds, u32 nano_seconds,
+                                 u32 sub_nano_seconds);
+
+int lan743x_gpio_init(struct lan743x_adapter *adapter)
+{
+       struct lan743x_gpio *gpio = &adapter->gpio;
+
+       spin_lock_init(&gpio->gpio_lock);
+
+       gpio->gpio_cfg0 = 0; /* set all direction to input, data = 0 */
+       gpio->gpio_cfg1 = 0x0FFF0000;/* disable all gpio, set to open drain */
+       gpio->gpio_cfg2 = 0;/* set all to 1588 low polarity level */
+       gpio->gpio_cfg3 = 0;/* disable all 1588 output */
+       lan743x_csr_write(adapter, GPIO_CFG0, gpio->gpio_cfg0);
+       lan743x_csr_write(adapter, GPIO_CFG1, gpio->gpio_cfg1);
+       lan743x_csr_write(adapter, GPIO_CFG2, gpio->gpio_cfg2);
+       lan743x_csr_write(adapter, GPIO_CFG3, gpio->gpio_cfg3);
+
+       return 0;
+}
+
+static void lan743x_ptp_wait_till_cmd_done(struct lan743x_adapter *adapter,
+                                          u32 bit_mask)
+{
+       int timeout = 1000;
+       u32 data = 0;
+
+       while (timeout &&
+              (data = (lan743x_csr_read(adapter, PTP_CMD_CTL) &
+              bit_mask))) {
+               usleep_range(1000, 20000);
+               timeout--;
+       }
+       if (data) {
+               netif_err(adapter, drv, adapter->netdev,
+                         "timeout waiting for cmd to be done, cmd = 0x%08X\n",
+                         bit_mask);
+       }
+}
+
+static void lan743x_ptp_tx_ts_enqueue_ts(struct lan743x_adapter *adapter,
+                                        u32 seconds, u32 nano_seconds,
+                                        u32 header)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+
+       spin_lock_bh(&ptp->tx_ts_lock);
+       if (ptp->tx_ts_queue_size < LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS) {
+               ptp->tx_ts_seconds_queue[ptp->tx_ts_queue_size] = seconds;
+               ptp->tx_ts_nseconds_queue[ptp->tx_ts_queue_size] = nano_seconds;
+               ptp->tx_ts_header_queue[ptp->tx_ts_queue_size] = header;
+               ptp->tx_ts_queue_size++;
+       } else {
+               netif_err(adapter, drv, adapter->netdev,
+                         "tx ts queue overflow\n");
+       }
+       spin_unlock_bh(&ptp->tx_ts_lock);
+}
+
+static void lan743x_ptp_tx_ts_complete(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+       struct skb_shared_hwtstamps tstamps;
+       u32 header, nseconds, seconds;
+       bool ignore_sync = false;
+       struct sk_buff *skb;
+       int c, i;
+
+       spin_lock_bh(&ptp->tx_ts_lock);
+       c = ptp->tx_ts_skb_queue_size;
+
+       if (c > ptp->tx_ts_queue_size)
+               c = ptp->tx_ts_queue_size;
+       if (c <= 0)
+               goto done;
+
+       for (i = 0; i < c; i++) {
+               ignore_sync = ((ptp->tx_ts_ignore_sync_queue &
+                               BIT(i)) != 0);
+               skb = ptp->tx_ts_skb_queue[i];
+               nseconds = ptp->tx_ts_nseconds_queue[i];
+               seconds = ptp->tx_ts_seconds_queue[i];
+               header = ptp->tx_ts_header_queue[i];
+
+               memset(&tstamps, 0, sizeof(tstamps));
+               tstamps.hwtstamp = ktime_set(seconds, nseconds);
+               if (!ignore_sync ||
+                   ((header & PTP_TX_MSG_HEADER_MSG_TYPE_) !=
+                   PTP_TX_MSG_HEADER_MSG_TYPE_SYNC_))
+                       skb_tstamp_tx(skb, &tstamps);
+
+               dev_kfree_skb(skb);
+
+               ptp->tx_ts_skb_queue[i] = NULL;
+               ptp->tx_ts_seconds_queue[i] = 0;
+               ptp->tx_ts_nseconds_queue[i] = 0;
+               ptp->tx_ts_header_queue[i] = 0;
+       }
+
+       /* shift queue */
+       ptp->tx_ts_ignore_sync_queue >>= c;
+       for (i = c; i < LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS; i++) {
+               ptp->tx_ts_skb_queue[i - c] = ptp->tx_ts_skb_queue[i];
+               ptp->tx_ts_seconds_queue[i - c] = ptp->tx_ts_seconds_queue[i];
+               ptp->tx_ts_nseconds_queue[i - c] = ptp->tx_ts_nseconds_queue[i];
+               ptp->tx_ts_header_queue[i - c] = ptp->tx_ts_header_queue[i];
+
+               ptp->tx_ts_skb_queue[i] = NULL;
+               ptp->tx_ts_seconds_queue[i] = 0;
+               ptp->tx_ts_nseconds_queue[i] = 0;
+               ptp->tx_ts_header_queue[i] = 0;
+       }
+       ptp->tx_ts_skb_queue_size -= c;
+       ptp->tx_ts_queue_size -= c;
+done:
+       ptp->pending_tx_timestamps -= c;
+       spin_unlock_bh(&ptp->tx_ts_lock);
+}
+
+#ifdef CONFIG_PTP_1588_CLOCK
+static int lan743x_ptp_reserve_event_ch(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+       int result = -ENODEV;
+       int index = 0;
+
+       mutex_lock(&ptp->command_lock);
+       for (index = 0; index < LAN743X_PTP_NUMBER_OF_EVENT_CHANNELS; index++) {
+               if (!(test_bit(index, &ptp->used_event_ch))) {
+                       ptp->used_event_ch |= BIT(index);
+                       result = index;
+                       break;
+               }
+       }
+       mutex_unlock(&ptp->command_lock);
+       return result;
+}
+
+static void lan743x_ptp_release_event_ch(struct lan743x_adapter *adapter,
+                                        int event_channel)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+
+       mutex_lock(&ptp->command_lock);
+       if (test_bit(event_channel, &ptp->used_event_ch)) {
+               ptp->used_event_ch &= ~BIT(event_channel);
+       } else {
+               netif_warn(adapter, drv, adapter->netdev,
+                          "attempted release on a not used event_channel = %d\n",
+                          event_channel);
+       }
+       mutex_unlock(&ptp->command_lock);
+}
+
+static void lan743x_ptp_clock_get(struct lan743x_adapter *adapter,
+                                 u32 *seconds, u32 *nano_seconds,
+                                 u32 *sub_nano_seconds);
+static void lan743x_ptp_clock_step(struct lan743x_adapter *adapter,
+                                  s64 time_step_ns);
+
+static int lan743x_gpio_rsrv_ptp_out(struct lan743x_adapter *adapter,
+                                    int bit, int ptp_channel)
+{
+       struct lan743x_gpio *gpio = &adapter->gpio;
+       unsigned long irq_flags = 0;
+       int bit_mask = BIT(bit);
+       int ret = -EBUSY;
+
+       spin_lock_irqsave(&gpio->gpio_lock, irq_flags);
+
+       if (!(gpio->used_bits & bit_mask)) {
+               gpio->used_bits |= bit_mask;
+               gpio->output_bits |= bit_mask;
+               gpio->ptp_bits |= bit_mask;
+
+               /* set as output, and zero initial value */
+               gpio->gpio_cfg0 |= GPIO_CFG0_GPIO_DIR_BIT_(bit);
+               gpio->gpio_cfg0 &= ~GPIO_CFG0_GPIO_DATA_BIT_(bit);
+               lan743x_csr_write(adapter, GPIO_CFG0, gpio->gpio_cfg0);
+
+               /* enable gpio, and set buffer type to push pull */
+               gpio->gpio_cfg1 &= ~GPIO_CFG1_GPIOEN_BIT_(bit);
+               gpio->gpio_cfg1 |= GPIO_CFG1_GPIOBUF_BIT_(bit);
+               lan743x_csr_write(adapter, GPIO_CFG1, gpio->gpio_cfg1);
+
+               /* set 1588 polarity to high */
+               gpio->gpio_cfg2 |= GPIO_CFG2_1588_POL_BIT_(bit);
+               lan743x_csr_write(adapter, GPIO_CFG2, gpio->gpio_cfg2);
+
+               if (!ptp_channel) {
+                       /* use channel A */
+                       gpio->gpio_cfg3 &= ~GPIO_CFG3_1588_CH_SEL_BIT_(bit);
+               } else {
+                       /* use channel B */
+                       gpio->gpio_cfg3 |= GPIO_CFG3_1588_CH_SEL_BIT_(bit);
+               }
+               gpio->gpio_cfg3 |= GPIO_CFG3_1588_OE_BIT_(bit);
+               lan743x_csr_write(adapter, GPIO_CFG3, gpio->gpio_cfg3);
+
+               ret = bit;
+       }
+       spin_unlock_irqrestore(&gpio->gpio_lock, irq_flags);
+       return ret;
+}
+
+static void lan743x_gpio_release(struct lan743x_adapter *adapter, int bit)
+{
+       struct lan743x_gpio *gpio = &adapter->gpio;
+       unsigned long irq_flags = 0;
+       int bit_mask = BIT(bit);
+
+       spin_lock_irqsave(&gpio->gpio_lock, irq_flags);
+       if (gpio->used_bits & bit_mask) {
+               gpio->used_bits &= ~bit_mask;
+               if (gpio->output_bits & bit_mask) {
+                       gpio->output_bits &= ~bit_mask;
+
+                       if (gpio->ptp_bits & bit_mask) {
+                               gpio->ptp_bits &= ~bit_mask;
+                               /* disable ptp output */
+                               gpio->gpio_cfg3 &= ~GPIO_CFG3_1588_OE_BIT_(bit);
+                               lan743x_csr_write(adapter, GPIO_CFG3,
+                                                 gpio->gpio_cfg3);
+                       }
+                       /* release gpio output */
+
+                       /* disable gpio */
+                       gpio->gpio_cfg1 |= GPIO_CFG1_GPIOEN_BIT_(bit);
+                       gpio->gpio_cfg1 &= ~GPIO_CFG1_GPIOBUF_BIT_(bit);
+                       lan743x_csr_write(adapter, GPIO_CFG1, gpio->gpio_cfg1);
+
+                       /* reset back to input */
+                       gpio->gpio_cfg0 &= ~GPIO_CFG0_GPIO_DIR_BIT_(bit);
+                       gpio->gpio_cfg0 &= ~GPIO_CFG0_GPIO_DATA_BIT_(bit);
+                       lan743x_csr_write(adapter, GPIO_CFG0, gpio->gpio_cfg0);
+               }
+       }
+       spin_unlock_irqrestore(&gpio->gpio_lock, irq_flags);
+}
+
+static int lan743x_ptpci_adjfine(struct ptp_clock_info *ptpci, long scaled_ppm)
+{
+       struct lan743x_ptp *ptp =
+               container_of(ptpci, struct lan743x_ptp, ptp_clock_info);
+       struct lan743x_adapter *adapter =
+               container_of(ptp, struct lan743x_adapter, ptp);
+       u32 lan743x_rate_adj = 0;
+       bool positive = true;
+       u64 u64_delta = 0;
+
+       if ((scaled_ppm < (-LAN743X_PTP_MAX_FINE_ADJ_IN_SCALED_PPM)) ||
+           scaled_ppm > LAN743X_PTP_MAX_FINE_ADJ_IN_SCALED_PPM) {
+               return -EINVAL;
+       }
+       if (scaled_ppm > 0) {
+               u64_delta = (u64)scaled_ppm;
+               positive = true;
+       } else {
+               u64_delta = (u64)(-scaled_ppm);
+               positive = false;
+       }
+       u64_delta = (u64_delta << 19);
+       lan743x_rate_adj = div_u64(u64_delta, 1000000);
+
+       if (positive)
+               lan743x_rate_adj |= PTP_CLOCK_RATE_ADJ_DIR_;
+
+       lan743x_csr_write(adapter, PTP_CLOCK_RATE_ADJ,
+                         lan743x_rate_adj);
+
+       return 0;
+}
+
+static int lan743x_ptpci_adjfreq(struct ptp_clock_info *ptpci, s32 delta_ppb)
+{
+       struct lan743x_ptp *ptp =
+               container_of(ptpci, struct lan743x_ptp, ptp_clock_info);
+       struct lan743x_adapter *adapter =
+               container_of(ptp, struct lan743x_adapter, ptp);
+       u32 lan743x_rate_adj = 0;
+       bool positive = true;
+       u32 u32_delta = 0;
+       u64 u64_delta = 0;
+
+       if ((delta_ppb < (-LAN743X_PTP_MAX_FREQ_ADJ_IN_PPB)) ||
+           delta_ppb > LAN743X_PTP_MAX_FREQ_ADJ_IN_PPB) {
+               return -EINVAL;
+       }
+       if (delta_ppb > 0) {
+               u32_delta = (u32)delta_ppb;
+               positive = true;
+       } else {
+               u32_delta = (u32)(-delta_ppb);
+               positive = false;
+       }
+       u64_delta = (((u64)u32_delta) << 35);
+       lan743x_rate_adj = div_u64(u64_delta, 1000000000);
+
+       if (positive)
+               lan743x_rate_adj |= PTP_CLOCK_RATE_ADJ_DIR_;
+
+       lan743x_csr_write(adapter, PTP_CLOCK_RATE_ADJ,
+                         lan743x_rate_adj);
+
+       return 0;
+}
+
+static int lan743x_ptpci_adjtime(struct ptp_clock_info *ptpci, s64 delta)
+{
+       struct lan743x_ptp *ptp =
+               container_of(ptpci, struct lan743x_ptp, ptp_clock_info);
+       struct lan743x_adapter *adapter =
+               container_of(ptp, struct lan743x_adapter, ptp);
+
+       lan743x_ptp_clock_step(adapter, delta);
+
+       return 0;
+}
+
+static int lan743x_ptpci_gettime64(struct ptp_clock_info *ptpci,
+                                  struct timespec64 *ts)
+{
+       struct lan743x_ptp *ptp =
+               container_of(ptpci, struct lan743x_ptp, ptp_clock_info);
+       struct lan743x_adapter *adapter =
+               container_of(ptp, struct lan743x_adapter, ptp);
+       u32 nano_seconds = 0;
+       u32 seconds = 0;
+
+       lan743x_ptp_clock_get(adapter, &seconds, &nano_seconds, NULL);
+       ts->tv_sec = seconds;
+       ts->tv_nsec = nano_seconds;
+
+       return 0;
+}
+
+static int lan743x_ptpci_settime64(struct ptp_clock_info *ptpci,
+                                  const struct timespec64 *ts)
+{
+       struct lan743x_ptp *ptp =
+               container_of(ptpci, struct lan743x_ptp, ptp_clock_info);
+       struct lan743x_adapter *adapter =
+               container_of(ptp, struct lan743x_adapter, ptp);
+       u32 nano_seconds = 0;
+       u32 seconds = 0;
+
+       if (ts) {
+               if (ts->tv_sec > 0xFFFFFFFFLL ||
+                   ts->tv_sec < 0) {
+                       netif_warn(adapter, drv, adapter->netdev,
+                                  "ts->tv_sec out of range, %lld\n",
+                                  ts->tv_sec);
+                       return -ERANGE;
+               }
+               if (ts->tv_nsec >= 1000000000L ||
+                   ts->tv_nsec < 0) {
+                       netif_warn(adapter, drv, adapter->netdev,
+                                  "ts->tv_nsec out of range, %ld\n",
+                                  ts->tv_nsec);
+                       return -ERANGE;
+               }
+               seconds = ts->tv_sec;
+               nano_seconds = ts->tv_nsec;
+               lan743x_ptp_clock_set(adapter, seconds, nano_seconds, 0);
+       } else {
+               netif_warn(adapter, drv, adapter->netdev, "ts == NULL\n");
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
+static void lan743x_ptp_perout_off(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+       u32 general_config = 0;
+
+       if (ptp->perout_gpio_bit >= 0) {
+               lan743x_gpio_release(adapter, ptp->perout_gpio_bit);
+               ptp->perout_gpio_bit = -1;
+       }
+
+       if (ptp->perout_event_ch >= 0) {
+               /* set target to far in the future, effectively disabling it */
+               lan743x_csr_write(adapter,
+                                 PTP_CLOCK_TARGET_SEC_X(ptp->perout_event_ch),
+                                 0xFFFF0000);
+               lan743x_csr_write(adapter,
+                                 PTP_CLOCK_TARGET_NS_X(ptp->perout_event_ch),
+                                 0);
+
+               general_config = lan743x_csr_read(adapter, PTP_GENERAL_CONFIG);
+               general_config |= PTP_GENERAL_CONFIG_RELOAD_ADD_X_
+                                 (ptp->perout_event_ch);
+               lan743x_csr_write(adapter, PTP_GENERAL_CONFIG, general_config);
+               lan743x_ptp_release_event_ch(adapter, ptp->perout_event_ch);
+               ptp->perout_event_ch = -1;
+       }
+}
+
+static int lan743x_ptp_perout(struct lan743x_adapter *adapter, int on,
+                             struct ptp_perout_request *perout)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+       u32 period_sec = 0, period_nsec = 0;
+       u32 start_sec = 0, start_nsec = 0;
+       u32 general_config = 0;
+       int pulse_width = 0;
+       int perout_bit = 0;
+
+       if (!on) {
+               lan743x_ptp_perout_off(adapter);
+               return 0;
+       }
+
+       if (ptp->perout_event_ch >= 0 ||
+           ptp->perout_gpio_bit >= 0) {
+               /* already on, turn off first */
+               lan743x_ptp_perout_off(adapter);
+       }
+
+       ptp->perout_event_ch = lan743x_ptp_reserve_event_ch(adapter);
+       if (ptp->perout_event_ch < 0) {
+               netif_warn(adapter, drv, adapter->netdev,
+                          "Failed to reserve event channel for PEROUT\n");
+               goto failed;
+       }
+
+       switch (adapter->csr.id_rev & ID_REV_ID_MASK_) {
+       case ID_REV_ID_LAN7430_:
+               perout_bit = 2;/* GPIO 2 is preferred on EVB LAN7430 */
+               break;
+       case ID_REV_ID_LAN7431_:
+               perout_bit = 4;/* GPIO 4 is preferred on EVB LAN7431 */
+               break;
+       }
+
+       ptp->perout_gpio_bit = lan743x_gpio_rsrv_ptp_out(adapter,
+                                                        perout_bit,
+                                                        ptp->perout_event_ch);
+
+       if (ptp->perout_gpio_bit < 0) {
+               netif_warn(adapter, drv, adapter->netdev,
+                          "Failed to reserve gpio %d for PEROUT\n",
+                          perout_bit);
+               goto failed;
+       }
+
+       start_sec = perout->start.sec;
+       start_sec += perout->start.nsec / 1000000000;
+       start_nsec = perout->start.nsec % 1000000000;
+
+       period_sec = perout->period.sec;
+       period_sec += perout->period.nsec / 1000000000;
+       period_nsec = perout->period.nsec % 1000000000;
+
+       if (period_sec == 0) {
+               if (period_nsec >= 400000000) {
+                       pulse_width = PTP_GENERAL_CONFIG_CLOCK_EVENT_200MS_;
+               } else if (period_nsec >= 20000000) {
+                       pulse_width = PTP_GENERAL_CONFIG_CLOCK_EVENT_10MS_;
+               } else if (period_nsec >= 2000000) {
+                       pulse_width = PTP_GENERAL_CONFIG_CLOCK_EVENT_1MS_;
+               } else if (period_nsec >= 200000) {
+                       pulse_width = PTP_GENERAL_CONFIG_CLOCK_EVENT_100US_;
+               } else if (period_nsec >= 20000) {
+                       pulse_width = PTP_GENERAL_CONFIG_CLOCK_EVENT_10US_;
+               } else if (period_nsec >= 200) {
+                       pulse_width = PTP_GENERAL_CONFIG_CLOCK_EVENT_100NS_;
+               } else {
+                       netif_warn(adapter, drv, adapter->netdev,
+                                  "perout period too small, minimum is 200nS\n");
+                       goto failed;
+               }
+       } else {
+               pulse_width = PTP_GENERAL_CONFIG_CLOCK_EVENT_200MS_;
+       }
+
+       /* turn off by setting target far in future */
+       lan743x_csr_write(adapter,
+                         PTP_CLOCK_TARGET_SEC_X(ptp->perout_event_ch),
+                         0xFFFF0000);
+       lan743x_csr_write(adapter,
+                         PTP_CLOCK_TARGET_NS_X(ptp->perout_event_ch), 0);
+
+       /* Configure to pulse every period */
+       general_config = lan743x_csr_read(adapter, PTP_GENERAL_CONFIG);
+       general_config &= ~(PTP_GENERAL_CONFIG_CLOCK_EVENT_X_MASK_
+                         (ptp->perout_event_ch));
+       general_config |= PTP_GENERAL_CONFIG_CLOCK_EVENT_X_SET_
+                         (ptp->perout_event_ch, pulse_width);
+       general_config &= ~PTP_GENERAL_CONFIG_RELOAD_ADD_X_
+                         (ptp->perout_event_ch);
+       lan743x_csr_write(adapter, PTP_GENERAL_CONFIG, general_config);
+
+       /* set the reload to one toggle cycle */
+       lan743x_csr_write(adapter,
+                         PTP_CLOCK_TARGET_RELOAD_SEC_X(ptp->perout_event_ch),
+                         period_sec);
+       lan743x_csr_write(adapter,
+                         PTP_CLOCK_TARGET_RELOAD_NS_X(ptp->perout_event_ch),
+                         period_nsec);
+
+       /* set the start time */
+       lan743x_csr_write(adapter,
+                         PTP_CLOCK_TARGET_SEC_X(ptp->perout_event_ch),
+                         start_sec);
+       lan743x_csr_write(adapter,
+                         PTP_CLOCK_TARGET_NS_X(ptp->perout_event_ch),
+                         start_nsec);
+
+       return 0;
+
+failed:
+       lan743x_ptp_perout_off(adapter);
+       return -ENODEV;
+}
+
+static int lan743x_ptpci_enable(struct ptp_clock_info *ptpci,
+                               struct ptp_clock_request *request, int on)
+{
+       struct lan743x_ptp *ptp =
+               container_of(ptpci, struct lan743x_ptp, ptp_clock_info);
+       struct lan743x_adapter *adapter =
+               container_of(ptp, struct lan743x_adapter, ptp);
+
+       if (request) {
+               switch (request->type) {
+               case PTP_CLK_REQ_EXTTS:
+                       return -EINVAL;
+               case PTP_CLK_REQ_PEROUT:
+                       if (request->perout.index == 0)
+                               return lan743x_ptp_perout(adapter, on,
+                                                         &request->perout);
+                       return -EINVAL;
+               case PTP_CLK_REQ_PPS:
+                       return -EINVAL;
+               default:
+                       netif_err(adapter, drv, adapter->netdev,
+                                 "request->type == %d, Unknown\n",
+                                 request->type);
+                       break;
+               }
+       } else {
+               netif_err(adapter, drv, adapter->netdev, "request == NULL\n");
+       }
+       return 0;
+}
+
+static long lan743x_ptpci_do_aux_work(struct ptp_clock_info *ptpci)
+{
+       struct lan743x_ptp *ptp =
+               container_of(ptpci, struct lan743x_ptp, ptp_clock_info);
+       struct lan743x_adapter *adapter =
+               container_of(ptp, struct lan743x_adapter, ptp);
+       u32 cap_info, cause, header, nsec, seconds;
+       bool new_timestamp_available = false;
+       int count = 0;
+
+       while ((count < 100) &&
+              (lan743x_csr_read(adapter, PTP_INT_STS) & PTP_INT_BIT_TX_TS_)) {
+               count++;
+               cap_info = lan743x_csr_read(adapter, PTP_CAP_INFO);
+
+               if (PTP_CAP_INFO_TX_TS_CNT_GET_(cap_info) > 0) {
+                       seconds = lan743x_csr_read(adapter,
+                                                  PTP_TX_EGRESS_SEC);
+                       nsec = lan743x_csr_read(adapter, PTP_TX_EGRESS_NS);
+                       cause = (nsec &
+                                PTP_TX_EGRESS_NS_CAPTURE_CAUSE_MASK_);
+                       header = lan743x_csr_read(adapter,
+                                                 PTP_TX_MSG_HEADER);
+
+                       if (cause == PTP_TX_EGRESS_NS_CAPTURE_CAUSE_SW_) {
+                               nsec &= PTP_TX_EGRESS_NS_TS_NS_MASK_;
+                               lan743x_ptp_tx_ts_enqueue_ts(adapter,
+                                                            seconds, nsec,
+                                                            header);
+                               new_timestamp_available = true;
+                       } else if (cause ==
+                               PTP_TX_EGRESS_NS_CAPTURE_CAUSE_AUTO_) {
+                               netif_err(adapter, drv, adapter->netdev,
+                                         "Auto capture cause not supported\n");
+                       } else {
+                               netif_warn(adapter, drv, adapter->netdev,
+                                          "unknown tx timestamp capture cause\n");
+                       }
+               } else {
+                       netif_warn(adapter, drv, adapter->netdev,
+                                  "TX TS INT but no TX TS CNT\n");
+               }
+               lan743x_csr_write(adapter, PTP_INT_STS, PTP_INT_BIT_TX_TS_);
+       }
+
+       if (new_timestamp_available)
+               lan743x_ptp_tx_ts_complete(adapter);
+
+       lan743x_csr_write(adapter, INT_EN_SET, INT_BIT_1588_);
+
+       return -1;
+}
+
+static void lan743x_ptp_clock_get(struct lan743x_adapter *adapter,
+                                 u32 *seconds, u32 *nano_seconds,
+                                 u32 *sub_nano_seconds)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+
+       mutex_lock(&ptp->command_lock);
+
+       lan743x_csr_write(adapter, PTP_CMD_CTL, PTP_CMD_CTL_PTP_CLOCK_READ_);
+       lan743x_ptp_wait_till_cmd_done(adapter, PTP_CMD_CTL_PTP_CLOCK_READ_);
+
+       if (seconds)
+               (*seconds) = lan743x_csr_read(adapter, PTP_CLOCK_SEC);
+
+       if (nano_seconds)
+               (*nano_seconds) = lan743x_csr_read(adapter, PTP_CLOCK_NS);
+
+       if (sub_nano_seconds)
+               (*sub_nano_seconds) =
+               lan743x_csr_read(adapter, PTP_CLOCK_SUBNS);
+
+       mutex_unlock(&ptp->command_lock);
+}
+
+static void lan743x_ptp_clock_step(struct lan743x_adapter *adapter,
+                                  s64 time_step_ns)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+       u32 nano_seconds_step = 0;
+       u64 abs_time_step_ns = 0;
+       u32 unsigned_seconds = 0;
+       u32 nano_seconds = 0;
+       u32 remainder = 0;
+       s32 seconds = 0;
+
+       if (time_step_ns >  15000000000LL) {
+               /* convert to clock set */
+               lan743x_ptp_clock_get(adapter, &unsigned_seconds,
+                                     &nano_seconds, NULL);
+               unsigned_seconds += div_u64_rem(time_step_ns, 1000000000LL,
+                                               &remainder);
+               nano_seconds += remainder;
+               if (nano_seconds >= 1000000000) {
+                       unsigned_seconds++;
+                       nano_seconds -= 1000000000;
+               }
+               lan743x_ptp_clock_set(adapter, unsigned_seconds,
+                                     nano_seconds, 0);
+               return;
+       } else if (time_step_ns < -15000000000LL) {
+               /* convert to clock set */
+               time_step_ns = -time_step_ns;
+
+               lan743x_ptp_clock_get(adapter, &unsigned_seconds,
+                                     &nano_seconds, NULL);
+               unsigned_seconds -= div_u64_rem(time_step_ns, 1000000000LL,
+                                               &remainder);
+               nano_seconds_step = remainder;
+               if (nano_seconds < nano_seconds_step) {
+                       unsigned_seconds--;
+                       nano_seconds += 1000000000;
+               }
+               nano_seconds -= nano_seconds_step;
+               lan743x_ptp_clock_set(adapter, unsigned_seconds,
+                                     nano_seconds, 0);
+               return;
+       }
+
+       /* do clock step */
+       if (time_step_ns >= 0) {
+               abs_time_step_ns = (u64)(time_step_ns);
+               seconds = (s32)div_u64_rem(abs_time_step_ns, 1000000000,
+                                          &remainder);
+               nano_seconds = (u32)remainder;
+       } else {
+               abs_time_step_ns = (u64)(-time_step_ns);
+               seconds = -((s32)div_u64_rem(abs_time_step_ns, 1000000000,
+                                            &remainder));
+               nano_seconds = (u32)remainder;
+               if (nano_seconds > 0) {
+                       /* subtracting nanoseconds is not allowed;
+                        * convert to subtracting from seconds
+                        * and adding to nanoseconds
+                        */
+                       seconds--;
+                       nano_seconds = (1000000000 - nano_seconds);
+               }
+       }
+
+       if (nano_seconds > 0) {
+               /* add 8 ns to cover the likely normal increment */
+               nano_seconds += 8;
+       }
+
+       if (nano_seconds >= 1000000000) {
+               /* carry into seconds */
+               seconds++;
+               nano_seconds -= 1000000000;
+       }
+
+       while (seconds) {
+               mutex_lock(&ptp->command_lock);
+               if (seconds > 0) {
+                       u32 adjustment_value = (u32)seconds;
+
+                       if (adjustment_value > 0xF)
+                               adjustment_value = 0xF;
+                       lan743x_csr_write(adapter, PTP_CLOCK_STEP_ADJ,
+                                         PTP_CLOCK_STEP_ADJ_DIR_ |
+                                         adjustment_value);
+                       seconds -= ((s32)adjustment_value);
+               } else {
+                       u32 adjustment_value = (u32)(-seconds);
+
+                       if (adjustment_value > 0xF)
+                               adjustment_value = 0xF;
+                       lan743x_csr_write(adapter, PTP_CLOCK_STEP_ADJ,
+                                         adjustment_value);
+                       seconds += ((s32)adjustment_value);
+               }
+               lan743x_csr_write(adapter, PTP_CMD_CTL,
+                                 PTP_CMD_CTL_PTP_CLOCK_STEP_SEC_);
+               lan743x_ptp_wait_till_cmd_done(adapter,
+                                              PTP_CMD_CTL_PTP_CLOCK_STEP_SEC_);
+               mutex_unlock(&ptp->command_lock);
+       }
+       if (nano_seconds) {
+               mutex_lock(&ptp->command_lock);
+               lan743x_csr_write(adapter, PTP_CLOCK_STEP_ADJ,
+                                 PTP_CLOCK_STEP_ADJ_DIR_ |
+                                 (nano_seconds &
+                                 PTP_CLOCK_STEP_ADJ_VALUE_MASK_));
+               lan743x_csr_write(adapter, PTP_CMD_CTL,
+                                 PTP_CMD_CTL_PTP_CLK_STP_NSEC_);
+               lan743x_ptp_wait_till_cmd_done(adapter,
+                                              PTP_CMD_CTL_PTP_CLK_STP_NSEC_);
+               mutex_unlock(&ptp->command_lock);
+       }
+}
+#endif /* CONFIG_PTP_1588_CLOCK */
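
Note on the step logic above: the negative-step branch never programs a negative nanosecond count into PTP_CLOCK_STEP_ADJ; it borrows one second and adds the complement in nanoseconds, and seconds are then stepped in chunks of at most 0xF. A minimal user-space sketch of that borrow arithmetic follows (illustrative only, not part of the patch; split_step() is a made-up name, and steps larger than 15 s are handled by a full clock set in the driver, not here):

    /* Illustrative sketch: mirrors the borrow logic of
     * lan743x_ptp_clock_step() for small signed steps.
     */
    #include <stdio.h>
    #include <stdint.h>

    static void split_step(int64_t step_ns, int32_t *sec, uint32_t *nsec)
    {
            if (step_ns >= 0) {
                    *sec = (int32_t)(step_ns / 1000000000LL);
                    *nsec = (uint32_t)(step_ns % 1000000000LL);
            } else {
                    uint64_t abs_ns = (uint64_t)(-step_ns);

                    *sec = -(int32_t)(abs_ns / 1000000000ULL);
                    *nsec = (uint32_t)(abs_ns % 1000000000ULL);
                    if (*nsec) {
                            /* borrow one second so nsec stays non-negative */
                            (*sec)--;
                            *nsec = 1000000000U - *nsec;
                    }
            }
    }

    int main(void)
    {
            int32_t sec;
            uint32_t nsec;

            split_step(-1500000000LL, &sec, &nsec);
            /* prints sec=-2 nsec=500000000, i.e. -2 s + 0.5 s = -1.5 s */
            printf("step -1.5s -> sec=%d nsec=%u\n", (int)sec, (unsigned)nsec);
            return 0;
    }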
+
+void lan743x_ptp_isr(void *context)
+{
+       struct lan743x_adapter *adapter = (struct lan743x_adapter *)context;
+       struct lan743x_ptp *ptp = NULL;
+       int enable_flag = 1;
+       u32 ptp_int_sts = 0;
+
+       ptp = &adapter->ptp;
+
+       lan743x_csr_write(adapter, INT_EN_CLR, INT_BIT_1588_);
+
+       ptp_int_sts = lan743x_csr_read(adapter, PTP_INT_STS);
+       ptp_int_sts &= lan743x_csr_read(adapter, PTP_INT_EN_SET);
+
+       if (ptp_int_sts & PTP_INT_BIT_TX_TS_) {
+               ptp_schedule_worker(ptp->ptp_clock, 0);
+               enable_flag = 0;/* tasklet will re-enable later */
+       }
+       if (ptp_int_sts & PTP_INT_BIT_TX_SWTS_ERR_) {
+               netif_err(adapter, drv, adapter->netdev,
+                         "PTP TX Software Timestamp Error\n");
+               /* clear int status bit */
+               lan743x_csr_write(adapter, PTP_INT_STS,
+                                 PTP_INT_BIT_TX_SWTS_ERR_);
+       }
+       if (ptp_int_sts & PTP_INT_BIT_TIMER_B_) {
+               /* clear int status bit */
+               lan743x_csr_write(adapter, PTP_INT_STS,
+                                 PTP_INT_BIT_TIMER_B_);
+       }
+       if (ptp_int_sts & PTP_INT_BIT_TIMER_A_) {
+               /* clear int status bit */
+               lan743x_csr_write(adapter, PTP_INT_STS,
+                                 PTP_INT_BIT_TIMER_A_);
+       }
+
+       if (enable_flag) {
+               /* re-enable isr */
+               lan743x_csr_write(adapter, INT_EN_SET, INT_BIT_1588_);
+       }
+}
+
+static void lan743x_ptp_tx_ts_enqueue_skb(struct lan743x_adapter *adapter,
+                                         struct sk_buff *skb, bool ignore_sync)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+
+       spin_lock_bh(&ptp->tx_ts_lock);
+       if (ptp->tx_ts_skb_queue_size < LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS) {
+               ptp->tx_ts_skb_queue[ptp->tx_ts_skb_queue_size] = skb;
+               if (ignore_sync)
+                       ptp->tx_ts_ignore_sync_queue |=
+                               BIT(ptp->tx_ts_skb_queue_size);
+               ptp->tx_ts_skb_queue_size++;
+       } else {
+               /* this should never happen, so long as the tx channel
+                * calls and honors the result from
+                * lan743x_ptp_request_tx_timestamp
+                */
+               netif_err(adapter, drv, adapter->netdev,
+                         "tx ts skb queue overflow\n");
+               dev_kfree_skb(skb);
+       }
+       spin_unlock_bh(&ptp->tx_ts_lock);
+}
+
+static void lan743x_ptp_sync_to_system_clock(struct lan743x_adapter *adapter)
+{
+       struct timespec64 ts;
+
+       memset(&ts, 0, sizeof(ts));
+       timekeeping_clocktai64(&ts);
+
+       lan743x_ptp_clock_set(adapter, ts.tv_sec, ts.tv_nsec, 0);
+}
+
+void lan743x_ptp_update_latency(struct lan743x_adapter *adapter,
+                               u32 link_speed)
+{
+       switch (link_speed) {
+       case 10:
+               lan743x_csr_write(adapter, PTP_LATENCY,
+                                 PTP_LATENCY_TX_SET_(0) |
+                                 PTP_LATENCY_RX_SET_(0));
+               break;
+       case 100:
+               lan743x_csr_write(adapter, PTP_LATENCY,
+                                 PTP_LATENCY_TX_SET_(181) |
+                                 PTP_LATENCY_RX_SET_(594));
+               break;
+       case 1000:
+               lan743x_csr_write(adapter, PTP_LATENCY,
+                                 PTP_LATENCY_TX_SET_(30) |
+                                 PTP_LATENCY_RX_SET_(525));
+               break;
+       }
+}
+
+int lan743x_ptp_init(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+
+       mutex_init(&ptp->command_lock);
+       spin_lock_init(&ptp->tx_ts_lock);
+       ptp->used_event_ch = 0;
+       ptp->perout_event_ch = -1;
+       ptp->perout_gpio_bit = -1;
+       return 0;
+}
+
+int lan743x_ptp_open(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+       int ret = -ENODEV;
+       u32 temp;
+
+       lan743x_ptp_reset(adapter);
+       lan743x_ptp_sync_to_system_clock(adapter);
+       temp = lan743x_csr_read(adapter, PTP_TX_MOD2);
+       temp |= PTP_TX_MOD2_TX_PTP_CLR_UDPV4_CHKSUM_;
+       lan743x_csr_write(adapter, PTP_TX_MOD2, temp);
+       lan743x_ptp_enable(adapter);
+       lan743x_csr_write(adapter, INT_EN_SET, INT_BIT_1588_);
+       lan743x_csr_write(adapter, PTP_INT_EN_SET,
+                         PTP_INT_BIT_TX_SWTS_ERR_ | PTP_INT_BIT_TX_TS_);
+       ptp->flags |= PTP_FLAG_ISR_ENABLED;
+
+#ifdef CONFIG_PTP_1588_CLOCK
+       snprintf(ptp->pin_config[0].name, 32, "lan743x_ptp_pin_0");
+       ptp->pin_config[0].index = 0;
+       ptp->pin_config[0].func = PTP_PF_PEROUT;
+       ptp->pin_config[0].chan = 0;
+
+       ptp->ptp_clock_info.owner = THIS_MODULE;
+       snprintf(ptp->ptp_clock_info.name, 16, "%pm",
+                adapter->netdev->dev_addr);
+       ptp->ptp_clock_info.max_adj = LAN743X_PTP_MAX_FREQ_ADJ_IN_PPB;
+       ptp->ptp_clock_info.n_alarm = 0;
+       ptp->ptp_clock_info.n_ext_ts = 0;
+       ptp->ptp_clock_info.n_per_out = 1;
+       ptp->ptp_clock_info.n_pins = 0;
+       ptp->ptp_clock_info.pps = 0;
+       ptp->ptp_clock_info.pin_config = NULL;
+       ptp->ptp_clock_info.adjfine = lan743x_ptpci_adjfine;
+       ptp->ptp_clock_info.adjfreq = lan743x_ptpci_adjfreq;
+       ptp->ptp_clock_info.adjtime = lan743x_ptpci_adjtime;
+       ptp->ptp_clock_info.gettime64 = lan743x_ptpci_gettime64;
+       ptp->ptp_clock_info.getcrosststamp = NULL;
+       ptp->ptp_clock_info.settime64 = lan743x_ptpci_settime64;
+       ptp->ptp_clock_info.enable = lan743x_ptpci_enable;
+       ptp->ptp_clock_info.do_aux_work = lan743x_ptpci_do_aux_work;
+       ptp->ptp_clock_info.verify = NULL;
+
+       ptp->ptp_clock = ptp_clock_register(&ptp->ptp_clock_info,
+                                           &adapter->pdev->dev);
+
+       if (IS_ERR(ptp->ptp_clock)) {
+               netif_err(adapter, ifup, adapter->netdev,
+                         "ptp_clock_register failed\n");
+               goto done;
+       }
+       ptp->flags |= PTP_FLAG_PTP_CLOCK_REGISTERED;
+       netif_info(adapter, ifup, adapter->netdev,
+                  "successfully registered ptp clock\n");
+
+       return 0;
+done:
+       lan743x_ptp_close(adapter);
+       return ret;
+#else
+       return 0;
+#endif
+}
+
+void lan743x_ptp_close(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+       int index;
+
+#ifdef CONFIG_PTP_1588_CLOCK
+       if (ptp->flags & PTP_FLAG_PTP_CLOCK_REGISTERED) {
+               ptp_clock_unregister(ptp->ptp_clock);
+               ptp->ptp_clock = NULL;
+               ptp->flags &= ~PTP_FLAG_PTP_CLOCK_REGISTERED;
+               netif_info(adapter, drv, adapter->netdev,
+                          "ptp clock unregister\n");
+       }
+#endif
+
+       if (ptp->flags & PTP_FLAG_ISR_ENABLED) {
+               lan743x_csr_write(adapter, PTP_INT_EN_CLR,
+                                 PTP_INT_BIT_TX_SWTS_ERR_ |
+                                 PTP_INT_BIT_TX_TS_);
+               lan743x_csr_write(adapter, INT_EN_CLR, INT_BIT_1588_);
+               ptp->flags &= ~PTP_FLAG_ISR_ENABLED;
+       }
+
+       /* clean up pending timestamp requests */
+       lan743x_ptp_tx_ts_complete(adapter);
+       spin_lock_bh(&ptp->tx_ts_lock);
+       for (index = 0;
+               index < LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS;
+               index++) {
+               struct sk_buff *skb = ptp->tx_ts_skb_queue[index];
+
+               if (skb)
+                       dev_kfree_skb(skb);
+               ptp->tx_ts_skb_queue[index] = NULL;
+               ptp->tx_ts_seconds_queue[index] = 0;
+               ptp->tx_ts_nseconds_queue[index] = 0;
+       }
+       ptp->tx_ts_skb_queue_size = 0;
+       ptp->tx_ts_queue_size = 0;
+       ptp->pending_tx_timestamps = 0;
+       spin_unlock_bh(&ptp->tx_ts_lock);
+
+       lan743x_ptp_disable(adapter);
+}
+
+void lan743x_ptp_set_sync_ts_insert(struct lan743x_adapter *adapter,
+                                   bool ts_insert_enable)
+{
+       u32 ptp_tx_mod = lan743x_csr_read(adapter, PTP_TX_MOD);
+
+       if (ts_insert_enable)
+               ptp_tx_mod |= PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_;
+       else
+               ptp_tx_mod &= ~PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_;
+
+       lan743x_csr_write(adapter, PTP_TX_MOD, ptp_tx_mod);
+}
+
+static bool lan743x_ptp_is_enabled(struct lan743x_adapter *adapter)
+{
+       if (lan743x_csr_read(adapter, PTP_CMD_CTL) & PTP_CMD_CTL_PTP_ENABLE_)
+               return true;
+       return false;
+}
+
+static void lan743x_ptp_enable(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+
+       mutex_lock(&ptp->command_lock);
+
+       if (lan743x_ptp_is_enabled(adapter)) {
+               netif_warn(adapter, drv, adapter->netdev,
+                          "PTP already enabled\n");
+               goto done;
+       }
+       lan743x_csr_write(adapter, PTP_CMD_CTL, PTP_CMD_CTL_PTP_ENABLE_);
+done:
+       mutex_unlock(&ptp->command_lock);
+}
+
+static void lan743x_ptp_disable(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+
+       mutex_lock(&ptp->command_lock);
+       if (!lan743x_ptp_is_enabled(adapter)) {
+               netif_warn(adapter, drv, adapter->netdev,
+                          "PTP already disabled\n");
+               goto done;
+       }
+       lan743x_csr_write(adapter, PTP_CMD_CTL, PTP_CMD_CTL_PTP_DISABLE_);
+       lan743x_ptp_wait_till_cmd_done(adapter, PTP_CMD_CTL_PTP_ENABLE_);
+done:
+       mutex_unlock(&ptp->command_lock);
+}
+
+static void lan743x_ptp_reset(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+
+       mutex_lock(&ptp->command_lock);
+
+       if (lan743x_ptp_is_enabled(adapter)) {
+               netif_err(adapter, drv, adapter->netdev,
+                         "Attempting reset while enabled\n");
+               goto done;
+       }
+
+       lan743x_csr_write(adapter, PTP_CMD_CTL, PTP_CMD_CTL_PTP_RESET_);
+       lan743x_ptp_wait_till_cmd_done(adapter, PTP_CMD_CTL_PTP_RESET_);
+done:
+       mutex_unlock(&ptp->command_lock);
+}
+
+static void lan743x_ptp_clock_set(struct lan743x_adapter *adapter,
+                                 u32 seconds, u32 nano_seconds,
+                                 u32 sub_nano_seconds)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+
+       mutex_lock(&ptp->command_lock);
+
+       lan743x_csr_write(adapter, PTP_CLOCK_SEC, seconds);
+       lan743x_csr_write(adapter, PTP_CLOCK_NS, nano_seconds);
+       lan743x_csr_write(adapter, PTP_CLOCK_SUBNS, sub_nano_seconds);
+
+       lan743x_csr_write(adapter, PTP_CMD_CTL, PTP_CMD_CTL_PTP_CLOCK_LOAD_);
+       lan743x_ptp_wait_till_cmd_done(adapter, PTP_CMD_CTL_PTP_CLOCK_LOAD_);
+       mutex_unlock(&ptp->command_lock);
+}
+
+bool lan743x_ptp_request_tx_timestamp(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+       bool result = false;
+
+       spin_lock_bh(&ptp->tx_ts_lock);
+       if (ptp->pending_tx_timestamps < LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS) {
+               /* request granted */
+               ptp->pending_tx_timestamps++;
+               result = true;
+       }
+       spin_unlock_bh(&ptp->tx_ts_lock);
+       return result;
+}
+
+void lan743x_ptp_unrequest_tx_timestamp(struct lan743x_adapter *adapter)
+{
+       struct lan743x_ptp *ptp = &adapter->ptp;
+
+       spin_lock_bh(&ptp->tx_ts_lock);
+       if (ptp->pending_tx_timestamps > 0)
+               ptp->pending_tx_timestamps--;
+       else
+               netif_err(adapter, drv, adapter->netdev,
+                         "unrequest failed, pending_tx_timestamps==0\n");
+       spin_unlock_bh(&ptp->tx_ts_lock);
+}
+
+void lan743x_ptp_tx_timestamp_skb(struct lan743x_adapter *adapter,
+                                 struct sk_buff *skb, bool ignore_sync)
+{
+       lan743x_ptp_tx_ts_enqueue_skb(adapter, skb, ignore_sync);
+
+       lan743x_ptp_tx_ts_complete(adapter);
+}
+
+int lan743x_ptp_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+{
+       struct lan743x_adapter *adapter = netdev_priv(netdev);
+       struct hwtstamp_config config;
+       int ret = 0;
+       int index;
+
+       if (!ifr) {
+               netif_err(adapter, drv, adapter->netdev,
+                         "SIOCSHWTSTAMP, ifr == NULL\n");
+               return -EINVAL;
+       }
+
+       if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+               return -EFAULT;
+
+       if (config.flags) {
+               netif_warn(adapter, drv, adapter->netdev,
+                          "ignoring hwtstamp_config.flags == 0x%08X, expected 0\n",
+                          config.flags);
+       }
+
+       switch (config.tx_type) {
+       case HWTSTAMP_TX_OFF:
+               for (index = 0; index < LAN743X_MAX_TX_CHANNELS;
+                       index++)
+                       lan743x_tx_set_timestamping_mode(&adapter->tx[index],
+                                                        false, false);
+               lan743x_ptp_set_sync_ts_insert(adapter, false);
+               break;
+       case HWTSTAMP_TX_ON:
+               for (index = 0; index < LAN743X_MAX_TX_CHANNELS;
+                       index++)
+                       lan743x_tx_set_timestamping_mode(&adapter->tx[index],
+                                                        true, false);
+               lan743x_ptp_set_sync_ts_insert(adapter, false);
+               break;
+       case HWTSTAMP_TX_ONESTEP_SYNC:
+               for (index = 0; index < LAN743X_MAX_TX_CHANNELS;
+                       index++)
+                       lan743x_tx_set_timestamping_mode(&adapter->tx[index],
+                                                        true, true);
+
+               lan743x_ptp_set_sync_ts_insert(adapter, true);
+               break;
+       default:
+               netif_warn(adapter, drv, adapter->netdev,
+                          "  tx_type = %d, UNKNOWN\n", config.tx_type);
+               ret = -EINVAL;
+               break;
+       }
+
+       if (!ret)
+               return copy_to_user(ifr->ifr_data, &config,
+                       sizeof(config)) ? -EFAULT : 0;
+       return ret;
+}
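
The SIOCSHWTSTAMP handler above accepts HWTSTAMP_TX_OFF, HWTSTAMP_TX_ON and HWTSTAMP_TX_ONESTEP_SYNC and echoes the accepted config back to user space. A hedged user-space sketch of how an application would exercise this path (not part of the patch; the interface name "eth0" is an assumed example):

    /* Illustrative sketch: enable hardware TX timestamping through the
     * standard SIOCSHWTSTAMP ioctl serviced by lan743x_ptp_ioctl().
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/net_tstamp.h>
    #include <linux/sockios.h>

    int main(void)
    {
            struct hwtstamp_config cfg = {
                    .tx_type = HWTSTAMP_TX_ON,
                    .rx_filter = HWTSTAMP_FILTER_NONE,
            };
            struct ifreq ifr;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* assumed ifname */
            ifr.ifr_data = (char *)&cfg;

            if (ioctl(fd, SIOCSHWTSTAMP, &ifr) < 0)
                    perror("SIOCSHWTSTAMP");
            else
                    printf("tx_type now %d\n", cfg.tx_type);

            close(fd);
            return 0;
    }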
diff --git a/drivers/net/ethernet/microchip/lan743x_ptp.h b/drivers/net/ethernet/microchip/lan743x_ptp.h
new file mode 100644 (file)
index 0000000..20f2223
--- /dev/null
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/* Copyright (C) 2018 Microchip Technology Inc. */
+
+#ifndef _LAN743X_PTP_H
+#define _LAN743X_PTP_H
+
+#include "linux/ptp_clock_kernel.h"
+#include "linux/netdevice.h"
+
+struct lan743x_adapter;
+
+/* GPIO */
+struct lan743x_gpio {
+       /* gpio_lock: used to prevent concurrent access to gpio settings */
+       spinlock_t gpio_lock;
+
+       int used_bits;
+       int output_bits;
+       int ptp_bits;
+       u32 gpio_cfg0;
+       u32 gpio_cfg1;
+       u32 gpio_cfg2;
+       u32 gpio_cfg3;
+};
+
+int lan743x_gpio_init(struct lan743x_adapter *adapter);
+
+void lan743x_ptp_isr(void *context);
+bool lan743x_ptp_request_tx_timestamp(struct lan743x_adapter *adapter);
+void lan743x_ptp_unrequest_tx_timestamp(struct lan743x_adapter *adapter);
+void lan743x_ptp_tx_timestamp_skb(struct lan743x_adapter *adapter,
+                                 struct sk_buff *skb, bool ignore_sync);
+int lan743x_ptp_init(struct lan743x_adapter *adapter);
+int lan743x_ptp_open(struct lan743x_adapter *adapter);
+void lan743x_ptp_close(struct lan743x_adapter *adapter);
+void lan743x_ptp_update_latency(struct lan743x_adapter *adapter,
+                               u32 link_speed);
+
+int lan743x_ptp_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd);
+
+#define LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS (4)
+
+#define PTP_FLAG_PTP_CLOCK_REGISTERED  BIT(1)
+#define PTP_FLAG_ISR_ENABLED                   BIT(2)
+
+struct lan743x_ptp {
+       int flags;
+
+       /* command_lock: used to prevent concurrent ptp commands */
+       struct mutex    command_lock;
+
+#ifdef CONFIG_PTP_1588_CLOCK
+       struct ptp_clock *ptp_clock;
+       struct ptp_clock_info ptp_clock_info;
+       struct ptp_pin_desc pin_config[1];
+#endif /* CONFIG_PTP_1588_CLOCK */
+
+#define LAN743X_PTP_NUMBER_OF_EVENT_CHANNELS (2)
+       unsigned long used_event_ch;
+
+       int perout_event_ch;
+       int perout_gpio_bit;
+
+       /* tx_ts_lock: used to prevent concurrent access to timestamp arrays */
+       spinlock_t      tx_ts_lock;
+       int pending_tx_timestamps;
+       struct sk_buff *tx_ts_skb_queue[LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS];
+       unsigned int    tx_ts_ignore_sync_queue;
+       int tx_ts_skb_queue_size;
+       u32 tx_ts_seconds_queue[LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS];
+       u32 tx_ts_nseconds_queue[LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS];
+       u32 tx_ts_header_queue[LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS];
+       int tx_ts_queue_size;
+};
+
+#endif /* _LAN743X_PTP_H */
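
The header exports a reserve/release protocol for TX timestamps: the transmit path must first claim a slot with lan743x_ptp_request_tx_timestamp() and honor its result, then either hand the skb over with lan743x_ptp_tx_timestamp_skb() or give the slot back with lan743x_ptp_unrequest_tx_timestamp(). A hedged sketch of that discipline (the transmit helper below is hypothetical; the real hand-off lives in lan743x_main.c, which is not shown in this hunk, and the includes are assumed to provide the adapter type):

    /* Illustrative sketch only: expected usage of the TX-timestamp hooks
     * declared above.
     */
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include "lan743x_main.h"       /* assumed to provide struct lan743x_adapter */
    #include "lan743x_ptp.h"

    static bool lan743x_example_try_hw_tstamp(struct lan743x_adapter *adapter,
                                              struct sk_buff *skb)
    {
            if (!(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
                    return false;

            /* Reserve one of the LAN743X_PTP_NUMBER_OF_TX_TIMESTAMPS slots. */
            if (!lan743x_ptp_request_tx_timestamp(adapter))
                    return false;   /* all slots busy: send without a timestamp */

            skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
            return true;
    }

    /* On a descriptor-queue failure the slot must be returned with
     * lan743x_ptp_unrequest_tx_timestamp(); on success the skb is handed to
     * lan743x_ptp_tx_timestamp_skb() so the aux worker can attach the
     * captured egress time.
     */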
index ae8149831e474c7a5c1b1d6e6dd108cec2ad8962..398011c87643c7f15a1da8fc2c72175213c006b0 100644 (file)
@@ -1094,12 +1094,9 @@ static void __vxge_hw_blockpool_destroy(struct __vxge_hw_blockpool *blockpool)
 {
        struct __vxge_hw_device *hldev;
        struct list_head *p, *n;
-       u16 ret;
 
-       if (blockpool == NULL) {
-               ret = 1;
-               goto exit;
-       }
+       if (!blockpool)
+               return;
 
        hldev = blockpool->hldev;
 
@@ -1122,8 +1119,7 @@ static void __vxge_hw_blockpool_destroy(struct __vxge_hw_blockpool *blockpool)
                list_del(&((struct __vxge_hw_blockpool_entry *)p)->item);
                kfree((void *)p);
        }
-       ret = 0;
-exit:
+
        return;
 }
 
@@ -2259,14 +2255,11 @@ static void vxge_hw_blockpool_block_add(struct __vxge_hw_device *devh,
        struct __vxge_hw_blockpool *blockpool;
        struct __vxge_hw_blockpool_entry *entry = NULL;
        dma_addr_t dma_addr;
-       enum vxge_hw_status status = VXGE_HW_OK;
-       u32 req_out;
 
        blockpool = &devh->block_pool;
 
        if (block_addr == NULL) {
                blockpool->req_out--;
-               status = VXGE_HW_FAIL;
                goto exit;
        }
 
@@ -2276,7 +2269,6 @@ static void vxge_hw_blockpool_block_add(struct __vxge_hw_device *devh,
        if (unlikely(pci_dma_mapping_error(devh->pdev, dma_addr))) {
                vxge_os_dma_free(devh->pdev, block_addr, &acc_handle);
                blockpool->req_out--;
-               status = VXGE_HW_FAIL;
                goto exit;
        }
 
@@ -2291,7 +2283,7 @@ static void vxge_hw_blockpool_block_add(struct __vxge_hw_device *devh,
        else
                list_del(&entry->item);
 
-       if (entry != NULL) {
+       if (entry) {
                entry->length = length;
                entry->memblock = block_addr;
                entry->dma_addr = dma_addr;
@@ -2299,13 +2291,10 @@ static void vxge_hw_blockpool_block_add(struct __vxge_hw_device *devh,
                entry->dma_handle = dma_h;
                list_add(&entry->item, &blockpool->free_block_list);
                blockpool->pool_size++;
-               status = VXGE_HW_OK;
-       } else
-               status = VXGE_HW_ERR_OUT_OF_MEMORY;
+       }
 
        blockpool->req_out--;
 
-       req_out = blockpool->req_out;
 exit:
        return;
 }
@@ -2357,7 +2346,6 @@ static void *__vxge_hw_blockpool_malloc(struct __vxge_hw_device *devh, u32 size,
        struct __vxge_hw_blockpool_entry *entry = NULL;
        struct __vxge_hw_blockpool  *blockpool;
        void *memblock = NULL;
-       enum vxge_hw_status status = VXGE_HW_OK;
 
        blockpool = &devh->block_pool;
 
@@ -2367,10 +2355,8 @@ static void *__vxge_hw_blockpool_malloc(struct __vxge_hw_device *devh, u32 size,
                                                &dma_object->handle,
                                                &dma_object->acc_handle);
 
-               if (memblock == NULL) {
-                       status = VXGE_HW_ERR_OUT_OF_MEMORY;
+               if (!memblock)
                        goto exit;
-               }
 
                dma_object->addr = pci_map_single(devh->pdev, memblock, size,
                                        PCI_DMA_BIDIRECTIONAL);
@@ -2379,7 +2365,6 @@ static void *__vxge_hw_blockpool_malloc(struct __vxge_hw_device *devh, u32 size,
                                dma_object->addr))) {
                        vxge_os_dma_free(devh->pdev, memblock,
                                &dma_object->acc_handle);
-                       status = VXGE_HW_ERR_OUT_OF_MEMORY;
                        goto exit;
                }
 
index 9ae4fcef725ef0237594a999152a89083ec73e96..76efed058f334b75f356641f77bcb55bb107e422 100644 (file)
@@ -503,7 +503,6 @@ static int nixge_start_xmit(struct sk_buff *skb, struct net_device *ndev)
        tx_skb->skb = skb;
 
        cur_p->cntrl |= XAXIDMA_BD_CTRL_TXEOF_MASK;
-       cur_p->app4 = (unsigned long)skb;
 
        tail_p = priv->tx_bd_p + sizeof(*priv->tx_bd_v) * priv->tx_bd_tail;
        /* Start the transfer */
@@ -739,22 +738,12 @@ static void nixge_dma_err_handler(unsigned long data)
                cur_p->phys = 0;
                cur_p->cntrl = 0;
                cur_p->status = 0;
-               cur_p->app0 = 0;
-               cur_p->app1 = 0;
-               cur_p->app2 = 0;
-               cur_p->app3 = 0;
-               cur_p->app4 = 0;
                cur_p->sw_id_offset = 0;
        }
 
        for (i = 0; i < RX_BD_NUM; i++) {
                cur_p = &lp->rx_bd_v[i];
                cur_p->status = 0;
-               cur_p->app0 = 0;
-               cur_p->app1 = 0;
-               cur_p->app2 = 0;
-               cur_p->app3 = 0;
-               cur_p->app4 = 0;
        }
 
        lp->tx_bd_ci = 0;
index 5ede6408649d66c25c85a25f0c9337feeb428670..82a1bd1f8a8ce3fd66acc6b0cc0c9e7bf6a57305 100644 (file)
@@ -2188,16 +2188,17 @@ int qed_get_queue_coalesce(struct qed_hwfn *p_hwfn, u16 *p_coal, void *handle)
 static int qed_fill_eth_dev_info(struct qed_dev *cdev,
                                 struct qed_dev_eth_info *info)
 {
+       struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
        int i;
 
        memset(info, 0, sizeof(*info));
 
-       info->num_tc = 1;
-
        if (IS_PF(cdev)) {
                int max_vf_vlan_filters = 0;
                int max_vf_mac_filters = 0;
 
+               info->num_tc = p_hwfn->hw_info.num_hw_tc;
+
                if (cdev->int_params.out.int_mode == QED_INT_MODE_MSIX) {
                        u16 num_queues = 0;
 
@@ -2248,6 +2249,8 @@ static int qed_fill_eth_dev_info(struct qed_dev *cdev,
        } else {
                u16 total_cids = 0;
 
+               info->num_tc = 1;
+
                /* Determine queues &  XDP support */
                for_each_hwfn(cdev, i) {
                        struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
@@ -2554,7 +2557,7 @@ static int qed_start_txq(struct qed_dev *cdev,
 
        rc = qed_eth_tx_queue_start(p_hwfn,
                                    p_hwfn->hw_info.opaque_fid,
-                                   p_params, 0,
+                                   p_params, p_params->tc,
                                    pbl_addr, pbl_size, ret_params);
 
        if (rc) {
index dbe81310c0b6282599f2605a4c37838d38ac9ee5..2094d86a7a087dac2eed0fe77bd71f66e7d975c7 100644 (file)
@@ -948,13 +948,14 @@ static void qed_update_pf_params(struct qed_dev *cdev,
                params->eth_pf_params.num_arfs_filters = 0;
 
        /* In case we might support RDMA, don't allow qede to be greedy
-        * with the L2 contexts. Allow for 64 queues [rx, tx, xdp] per hwfn.
+        * with the L2 contexts. Allow for 64 queues [rx, tx cos, xdp]
+        * per hwfn.
         */
        if (QED_IS_RDMA_PERSONALITY(QED_LEADING_HWFN(cdev))) {
                u16 *num_cons;
 
                num_cons = &params->eth_pf_params.num_cons;
-               *num_cons = min_t(u16, *num_cons, 192);
+               *num_cons = min_t(u16, *num_cons, QED_MAX_L2_CONS);
        }
 
        for (i = 0; i < cdev->num_hwfns; i++) {
index d7ed0d3dbf71706b88a195256648a06856db47a1..6a4d266fb8e2126f724cd4dbcfed3e02433c5d65 100644 (file)
@@ -52,6 +52,9 @@
 #include <linux/qed/qed_chain.h>
 #include <linux/qed/qed_eth_if.h>
 
+#include <net/pkt_cls.h>
+#include <net/tc_act/tc_gact.h>
+
 #define QEDE_MAJOR_VERSION             8
 #define QEDE_MINOR_VERSION             33
 #define QEDE_REVISION_VERSION          0
@@ -386,6 +389,15 @@ struct qede_tx_queue {
 #define QEDE_TXQ_XDP_TO_IDX(edev, txq) ((txq)->index - \
                                         QEDE_MAX_TSS_CNT(edev))
 #define QEDE_TXQ_IDX_TO_XDP(edev, idx) ((idx) + QEDE_MAX_TSS_CNT(edev))
+#define QEDE_NDEV_TXQ_ID_TO_FP_ID(edev, idx)   ((edev)->fp_num_rx + \
+                                                ((idx) % QEDE_TSS_COUNT(edev)))
+#define QEDE_NDEV_TXQ_ID_TO_TXQ_COS(edev, idx) ((idx) / QEDE_TSS_COUNT(edev))
+#define QEDE_TXQ_TO_NDEV_TXQ_ID(edev, txq)     ((QEDE_TSS_COUNT(edev) * \
+                                                (txq)->cos) + (txq)->index)
+#define QEDE_NDEV_TXQ_ID_TO_TXQ(edev, idx)     \
+       (&((edev)->fp_array[QEDE_NDEV_TXQ_ID_TO_FP_ID(edev, idx)].txq \
+       [QEDE_NDEV_TXQ_ID_TO_TXQ_COS(edev, idx)]))
+#define QEDE_FP_TC0_TXQ(fp)    (&((fp)->txq[0]))
 
        /* Regular Tx requires skb + metadata for release purpose,
         * while XDP requires the pages and the mapped address.
@@ -399,6 +411,8 @@ struct qede_tx_queue {
 
        /* Slowpath; Should be kept in end [unless missing padding] */
        void *handle;
+       u16 cos;
+       u16 ndev_txq_id;
 };
 
 #define BD_UNMAP_ADDR(bd)              HILO_U64(le32_to_cpu((bd)->addr.hi), \
@@ -458,7 +472,7 @@ void qede_arfs_filter_op(void *dev, void *filter, u8 fw_rc);
 void qede_free_arfs(struct qede_dev *edev);
 int qede_alloc_arfs(struct qede_dev *edev);
 int qede_add_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info);
-int qede_del_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info);
+int qede_delete_flow_filter(struct qede_dev *edev, u64 cookie);
 int qede_get_cls_rule_entry(struct qede_dev *edev, struct ethtool_rxnfc *cmd);
 int qede_get_cls_rule_all(struct qede_dev *edev, struct ethtool_rxnfc *info,
                          u32 *rule_locs);
@@ -524,6 +538,8 @@ bool qede_has_rx_work(struct qede_rx_queue *rxq);
 int qede_txq_has_work(struct qede_tx_queue *txq);
 void qede_recycle_rx_bd_ring(struct qede_rx_queue *rxq, u8 count);
 void qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq);
+int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
+                           struct tc_cls_flower_offload *f);
 
 #define RX_RING_SIZE_POW       13
 #define RX_RING_SIZE           ((u16)BIT(RX_RING_SIZE_POW))
@@ -541,5 +557,7 @@ void qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq);
 #define QEDE_RX_HDR_SIZE               256
 #define QEDE_MAX_JUMBO_PACKET_SIZE     9600
 #define        for_each_queue(i) for (i = 0; i < edev->num_queues; i++)
+#define for_each_cos_in_txq(edev, var) \
+       for ((var) = 0; (var) < (edev)->dev_info.num_tc; (var)++)
 
 #endif /* _QEDE_H_ */
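
The new qede.h macros above map a per-TC TX queue onto the flat netdev queue id and back: ndev_txq_id = cos * QEDE_TSS_COUNT + index, with the reverse mapping taking the quotient (cos) and remainder (index) by QEDE_TSS_COUNT. A small standalone check of that round trip (illustrative only; tss_count = 4 and num_tc = 2 are assumed example values):

    /* Illustrative sketch of the round trip implied by
     * QEDE_TXQ_TO_NDEV_TXQ_ID() and QEDE_NDEV_TXQ_ID_TO_TXQ_COS()/..._FP_ID().
     */
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
            const int tss_count = 4, num_tc = 2;
            int cos, index;

            for (cos = 0; cos < num_tc; cos++) {
                    for (index = 0; index < tss_count; index++) {
                            int ndev_txq_id = cos * tss_count + index;

                            assert(ndev_txq_id / tss_count == cos);
                            assert(ndev_txq_id % tss_count == index);
                            printf("txq %d / tc %d <-> ndev txq id %d\n",
                                   index, cos, ndev_txq_id);
                    }
            }
            return 0;
    }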
index b37857f3f950895ff2060317f3920b158ba0119c..19652cd27ca78afd8de31214b59f5f7f310a8f1c 100644 (file)
@@ -222,7 +222,7 @@ static void qede_get_strings_stats_txq(struct qede_dev *edev,
                                QEDE_TXQ_XDP_TO_IDX(edev, txq),
                                qede_tqstats_arr[i].string);
                else
-                       sprintf(*buf, "%d: %s", txq->index,
+                       sprintf(*buf, "%d_%d: %s", txq->index, txq->cos,
                                qede_tqstats_arr[i].string);
                *buf += ETH_GSTRING_LEN;
        }
@@ -262,8 +262,13 @@ static void qede_get_strings_stats(struct qede_dev *edev, u8 *buf)
                if (fp->type & QEDE_FASTPATH_XDP)
                        qede_get_strings_stats_txq(edev, fp->xdp_tx, &buf);
 
-               if (fp->type & QEDE_FASTPATH_TX)
-                       qede_get_strings_stats_txq(edev, fp->txq, &buf);
+               if (fp->type & QEDE_FASTPATH_TX) {
+                       int cos;
+
+                       for_each_cos_in_txq(edev, cos)
+                               qede_get_strings_stats_txq(edev,
+                                                          &fp->txq[cos], &buf);
+               }
        }
 
        /* Account for non-queue statistics */
@@ -338,8 +343,12 @@ static void qede_get_ethtool_stats(struct net_device *dev,
                if (fp->type & QEDE_FASTPATH_XDP)
                        qede_get_ethtool_stats_txq(fp->xdp_tx, &buf);
 
-               if (fp->type & QEDE_FASTPATH_TX)
-                       qede_get_ethtool_stats_txq(fp->txq, &buf);
+               if (fp->type & QEDE_FASTPATH_TX) {
+                       int cos;
+
+                       for_each_cos_in_txq(edev, cos)
+                               qede_get_ethtool_stats_txq(&fp->txq[cos], &buf);
+               }
        }
 
        for (i = 0; i < QEDE_NUM_STATS; i++) {
@@ -366,7 +375,8 @@ static int qede_get_sset_count(struct net_device *dev, int stringset)
                                num_stats--;
 
                /* Account for the Regular Tx statistics */
-               num_stats += QEDE_TSS_COUNT(edev) * QEDE_NUM_TQSTATS;
+               num_stats += QEDE_TSS_COUNT(edev) * QEDE_NUM_TQSTATS *
+                               edev->dev_info.num_tc;
 
                /* Account for the Regular Rx statistics */
                num_stats += QEDE_RSS_COUNT(edev) * QEDE_NUM_RQSTATS;
@@ -741,9 +751,17 @@ static int qede_get_coalesce(struct net_device *dev,
                }
 
                for_each_queue(i) {
+                       struct qede_tx_queue *txq;
+
                        fp = &edev->fp_array[i];
+
+                       /* All TX queues of a given fastpath use the same
+                        * coalescing value, so there is no need to iterate
+                        * over all TCs; the TC0 txq suffices.
+                        */
                        if (fp->type & QEDE_FASTPATH_TX) {
-                               tx_handle = fp->txq->handle;
+                               txq = QEDE_FP_TC0_TXQ(fp);
+                               tx_handle = txq->handle;
                                break;
                        }
                }
@@ -801,9 +819,17 @@ static int qede_set_coalesce(struct net_device *dev,
                }
 
                if (edev->fp_array[i].type & QEDE_FASTPATH_TX) {
+                       struct qede_tx_queue *txq;
+
+                       /* All TX queues of a given fastpath use the same
+                        * coalescing value, so there is no need to iterate
+                        * over all TCs; the TC0 txq suffices.
+                        */
+                       txq = QEDE_FP_TC0_TXQ(fp);
+
                        rc = edev->ops->common->set_coalesce(edev->cdev,
                                                             0, txc,
-                                                            fp->txq->handle);
+                                                            txq->handle);
                        if (rc) {
                                DP_INFO(edev,
                                        "Set TX coalesce error, rc = %d\n", rc);
@@ -1259,7 +1285,7 @@ static int qede_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info)
                rc = qede_add_cls_rule(edev, info);
                break;
        case ETHTOOL_SRXCLSRLDEL:
-               rc = qede_del_cls_rule(edev, info);
+               rc = qede_delete_flow_filter(edev, info->fs.location);
                break;
        default:
                DP_INFO(edev, "Command parameters not supported\n");
@@ -1385,8 +1411,10 @@ static int qede_selftest_transmit_traffic(struct qede_dev *edev,
        u16 val;
 
        for_each_queue(i) {
-               if (edev->fp_array[i].type & QEDE_FASTPATH_TX) {
-                       txq = edev->fp_array[i].txq;
+               struct qede_fastpath *fp = &edev->fp_array[i];
+
+               if (fp->type & QEDE_FASTPATH_TX) {
+                       txq = QEDE_FP_TC0_TXQ(fp);
                        break;
                }
        }
index f9a327c821eb608eae5743250bb0dbc55bb1fc2f..9673d19308e65c37a7ad42e5752ac9f8238b7d4f 100644 (file)
@@ -83,7 +83,7 @@ struct qede_arfs_fltr_node {
        struct qede_arfs_tuple tuple;
 
        u32 flow_id;
-       u16 sw_id;
+       u64 sw_id;
        u16 rxq_id;
        u16 next_rxq_id;
        u8 vfid;
@@ -138,7 +138,7 @@ static void qede_configure_arfs_fltr(struct qede_dev *edev,
 
                n->tuple.stringify(&n->tuple, tuple_buffer);
                DP_VERBOSE(edev, NETIF_MSG_RX_STATUS,
-                          "%s sw_id[0x%x]: %s [vf %u queue %d]\n",
+                          "%s sw_id[0x%llx]: %s [vf %u queue %d]\n",
                           add_fltr ? "Adding" : "Deleting",
                           n->sw_id, tuple_buffer, n->vfid, rxq_id);
        }
@@ -152,7 +152,10 @@ static void
 qede_free_arfs_filter(struct qede_dev *edev,  struct qede_arfs_fltr_node *fltr)
 {
        kfree(fltr->data);
-       clear_bit(fltr->sw_id, edev->arfs->arfs_fltr_bmap);
+
+       if (fltr->sw_id < QEDE_RFS_MAX_FLTR)
+               clear_bit(fltr->sw_id, edev->arfs->arfs_fltr_bmap);
+
        kfree(fltr);
 }
 
@@ -214,7 +217,7 @@ void qede_arfs_filter_op(void *dev, void *filter, u8 fw_rc)
 
        if (fw_rc) {
                DP_NOTICE(edev,
-                         "Failed arfs filter configuration fw_rc=%d, flow_id=%d, sw_id=%d, src_port=%d, dst_port=%d, rxq=%d\n",
+                         "Failed arfs filter configuration fw_rc=%d, flow_id=%d, sw_id=0x%llx, src_port=%d, dst_port=%d, rxq=%d\n",
                          fw_rc, fltr->flow_id, fltr->sw_id,
                          ntohs(fltr->tuple.src_port),
                          ntohs(fltr->tuple.dst_port), fltr->rxq_id);
@@ -1348,7 +1351,7 @@ void qede_config_rx_mode(struct net_device *ndev)
 }
 
 static struct qede_arfs_fltr_node *
-qede_get_arfs_fltr_by_loc(struct hlist_head *head, u32 location)
+qede_get_arfs_fltr_by_loc(struct hlist_head *head, u64 location)
 {
        struct qede_arfs_fltr_node *fltr;
 
@@ -1599,6 +1602,69 @@ static int qede_flow_spec_validate_unused(struct qede_dev *edev,
        return 0;
 }
 
+static int qede_set_v4_tuple_to_profile(struct qede_dev *edev,
+                                       struct qede_arfs_tuple *t)
+{
+       /* We must have only a full 4-tuple, only an l4 dst port,
+        * only a src ip, or only a dst ip as input.
+        */
+       if (t->src_port && t->dst_port && t->src_ipv4 && t->dst_ipv4) {
+               t->mode = QED_FILTER_CONFIG_MODE_5_TUPLE;
+       } else if (!t->src_port && t->dst_port &&
+                  !t->src_ipv4 && !t->dst_ipv4) {
+               t->mode = QED_FILTER_CONFIG_MODE_L4_PORT;
+       } else if (!t->src_port && !t->dst_port &&
+                  !t->dst_ipv4 && t->src_ipv4) {
+               t->mode = QED_FILTER_CONFIG_MODE_IP_SRC;
+       } else if (!t->src_port && !t->dst_port &&
+                  t->dst_ipv4 && !t->src_ipv4) {
+               t->mode = QED_FILTER_CONFIG_MODE_IP_DEST;
+       } else {
+               DP_INFO(edev, "Invalid N-tuple\n");
+               return -EOPNOTSUPP;
+       }
+
+       t->ip_comp = qede_flow_spec_ipv4_cmp;
+       t->build_hdr = qede_flow_build_ipv4_hdr;
+       t->stringify = qede_flow_stringify_ipv4_hdr;
+
+       return 0;
+}
+
+static int qede_set_v6_tuple_to_profile(struct qede_dev *edev,
+                                       struct qede_arfs_tuple *t,
+                                       struct in6_addr *zaddr)
+{
+       /* We must have only a full 4-tuple, only an l4 dst port,
+        * only a src ip, or only a dst ip as input.
+        */
+       if (t->src_port && t->dst_port &&
+           memcmp(&t->src_ipv6, zaddr, sizeof(struct in6_addr)) &&
+           memcmp(&t->dst_ipv6, zaddr, sizeof(struct in6_addr))) {
+               t->mode = QED_FILTER_CONFIG_MODE_5_TUPLE;
+       } else if (!t->src_port && t->dst_port &&
+                  !memcmp(&t->src_ipv6, zaddr, sizeof(struct in6_addr)) &&
+                  !memcmp(&t->dst_ipv6, zaddr, sizeof(struct in6_addr))) {
+               t->mode = QED_FILTER_CONFIG_MODE_L4_PORT;
+       } else if (!t->src_port && !t->dst_port &&
+                  !memcmp(&t->dst_ipv6, zaddr, sizeof(struct in6_addr)) &&
+                  memcmp(&t->src_ipv6, zaddr, sizeof(struct in6_addr))) {
+               t->mode = QED_FILTER_CONFIG_MODE_IP_SRC;
+       } else if (!t->src_port && !t->dst_port &&
+                  memcmp(&t->dst_ipv6, zaddr, sizeof(struct in6_addr)) &&
+                  !memcmp(&t->src_ipv6, zaddr, sizeof(struct in6_addr))) {
+               t->mode = QED_FILTER_CONFIG_MODE_IP_DEST;
+       } else {
+               DP_INFO(edev, "Invalid N-tuple\n");
+               return -EOPNOTSUPP;
+       }
+
+       t->ip_comp = qede_flow_spec_ipv6_cmp;
+       t->build_hdr = qede_flow_build_ipv6_hdr;
+
+       return 0;
+}
+
 static int qede_flow_spec_to_tuple_ipv4_common(struct qede_dev *edev,
                                               struct qede_arfs_tuple *t,
                                               struct ethtool_rx_flow_spec *fs)
@@ -1638,27 +1704,7 @@ static int qede_flow_spec_to_tuple_ipv4_common(struct qede_dev *edev,
        t->src_port = fs->h_u.tcp_ip4_spec.psrc;
        t->dst_port = fs->h_u.tcp_ip4_spec.pdst;
 
-       /* We must either have a valid 4-tuple or only dst port
-        * or only src ip as an input
-        */
-       if (t->src_port && t->dst_port && t->src_ipv4 && t->dst_ipv4) {
-               t->mode = QED_FILTER_CONFIG_MODE_5_TUPLE;
-       } else if (!t->src_port && t->dst_port &&
-                  !t->src_ipv4 && !t->dst_ipv4) {
-               t->mode = QED_FILTER_CONFIG_MODE_L4_PORT;
-       }  else if (!t->src_port && !t->dst_port &&
-                   !t->dst_ipv4 && t->src_ipv4) {
-               t->mode = QED_FILTER_CONFIG_MODE_IP_SRC;
-       } else {
-               DP_INFO(edev, "Invalid N-tuple\n");
-               return -EOPNOTSUPP;
-       }
-
-       t->ip_comp = qede_flow_spec_ipv4_cmp;
-       t->build_hdr = qede_flow_build_ipv4_hdr;
-       t->stringify = qede_flow_stringify_ipv4_hdr;
-
-       return 0;
+       return qede_set_v4_tuple_to_profile(edev, t);
 }
 
 static int qede_flow_spec_to_tuple_tcpv4(struct qede_dev *edev,
@@ -1690,10 +1736,8 @@ static int qede_flow_spec_to_tuple_ipv6_common(struct qede_dev *edev,
                                               struct ethtool_rx_flow_spec *fs)
 {
        struct in6_addr zero_addr;
-       void *p;
 
-       p = &zero_addr;
-       memset(p, 0, sizeof(zero_addr));
+       memset(&zero_addr, 0, sizeof(zero_addr));
 
        if ((fs->h_u.tcp_ip6_spec.psrc &
             fs->m_u.tcp_ip6_spec.psrc) != fs->h_u.tcp_ip6_spec.psrc) {
@@ -1720,30 +1764,7 @@ static int qede_flow_spec_to_tuple_ipv6_common(struct qede_dev *edev,
        t->src_port = fs->h_u.tcp_ip6_spec.psrc;
        t->dst_port = fs->h_u.tcp_ip6_spec.pdst;
 
-       /* We must make sure we have a valid 4-tuple or only dest port
-        * or only src ip as an input
-        */
-       if (t->src_port && t->dst_port &&
-           memcmp(&t->src_ipv6, p, sizeof(struct in6_addr)) &&
-           memcmp(&t->dst_ipv6, p, sizeof(struct in6_addr))) {
-               t->mode = QED_FILTER_CONFIG_MODE_5_TUPLE;
-       } else if (!t->src_port && t->dst_port &&
-                  !memcmp(&t->src_ipv6, p, sizeof(struct in6_addr)) &&
-                  !memcmp(&t->dst_ipv6, p, sizeof(struct in6_addr))) {
-               t->mode = QED_FILTER_CONFIG_MODE_L4_PORT;
-       } else if (!t->src_port && !t->dst_port &&
-                  !memcmp(&t->dst_ipv6, p, sizeof(struct in6_addr)) &&
-                  memcmp(&t->src_ipv6, p, sizeof(struct in6_addr))) {
-               t->mode = QED_FILTER_CONFIG_MODE_IP_SRC;
-       } else {
-               DP_INFO(edev, "Invalid N-tuple\n");
-               return -EOPNOTSUPP;
-       }
-
-       t->ip_comp = qede_flow_spec_ipv6_cmp;
-       t->build_hdr = qede_flow_build_ipv6_hdr;
-
-       return 0;
+       return qede_set_v6_tuple_to_profile(edev, t, &zero_addr);
 }
 
 static int qede_flow_spec_to_tuple_tcpv6(struct qede_dev *edev,
@@ -1941,9 +1962,8 @@ int qede_add_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info)
        return rc;
 }
 
-int qede_del_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info)
+int qede_delete_flow_filter(struct qede_dev *edev, u64 cookie)
 {
-       struct ethtool_rx_flow_spec *fsp = &info->fs;
        struct qede_arfs_fltr_node *fltr = NULL;
        int rc = -EPERM;
 
@@ -1952,7 +1972,7 @@ int qede_del_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info)
                goto unlock;
 
        fltr = qede_get_arfs_fltr_by_loc(QEDE_ARFS_BUCKET_HEAD(edev, 0),
-                                        fsp->location);
+                                        cookie);
        if (!fltr)
                goto unlock;
 
@@ -1982,3 +2002,293 @@ int qede_get_arfs_filter_count(struct qede_dev *edev)
        __qede_unlock(edev);
        return count;
 }
+
+static int qede_parse_actions(struct qede_dev *edev,
+                             struct tcf_exts *exts)
+{
+       int rc = -EINVAL, num_act = 0;
+       const struct tc_action *a;
+       bool is_drop = false;
+       LIST_HEAD(actions);
+
+       if (!tcf_exts_has_actions(exts)) {
+               DP_NOTICE(edev, "No tc actions received\n");
+               return rc;
+       }
+
+       tcf_exts_to_list(exts, &actions);
+       list_for_each_entry(a, &actions, list) {
+               num_act++;
+
+               if (is_tcf_gact_shot(a))
+                       is_drop = true;
+       }
+
+       if (num_act == 1 && is_drop)
+               return 0;
+
+       return rc;
+}
+
+static int
+qede_tc_parse_ports(struct qede_dev *edev,
+                   struct tc_cls_flower_offload *f,
+                   struct qede_arfs_tuple *t)
+{
+       if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
+               struct flow_dissector_key_ports *key, *mask;
+
+               key = skb_flow_dissector_target(f->dissector,
+                                               FLOW_DISSECTOR_KEY_PORTS,
+                                               f->key);
+               mask = skb_flow_dissector_target(f->dissector,
+                                                FLOW_DISSECTOR_KEY_PORTS,
+                                                f->mask);
+
+               if ((key->src && mask->src != U16_MAX) ||
+                   (key->dst && mask->dst != U16_MAX)) {
+                       DP_NOTICE(edev, "Do not support ports masks\n");
+                       return -EINVAL;
+               }
+
+               t->src_port = key->src;
+               t->dst_port = key->dst;
+       }
+
+       return 0;
+}
+
+static int
+qede_tc_parse_v6_common(struct qede_dev *edev,
+                       struct tc_cls_flower_offload *f,
+                       struct qede_arfs_tuple *t)
+{
+       struct in6_addr zero_addr, addr;
+
+       memset(&zero_addr, 0, sizeof(addr));
+       memset(&addr, 0xff, sizeof(addr));
+
+       if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_IPV6_ADDRS)) {
+               struct flow_dissector_key_ipv6_addrs *key, *mask;
+
+               key = skb_flow_dissector_target(f->dissector,
+                                               FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+                                               f->key);
+               mask = skb_flow_dissector_target(f->dissector,
+                                                FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+                                                f->mask);
+
+               if ((memcmp(&key->src, &zero_addr, sizeof(addr)) &&
+                    memcmp(&mask->src, &addr, sizeof(addr))) ||
+                   (memcmp(&key->dst, &zero_addr, sizeof(addr)) &&
+                    memcmp(&mask->dst, &addr, sizeof(addr)))) {
+                       DP_NOTICE(edev,
+                                 "Do not support IPv6 address prefix/mask\n");
+                       return -EINVAL;
+               }
+
+               memcpy(&t->src_ipv6, &key->src, sizeof(addr));
+               memcpy(&t->dst_ipv6, &key->dst, sizeof(addr));
+       }
+
+       if (qede_tc_parse_ports(edev, f, t))
+               return -EINVAL;
+
+       return qede_set_v6_tuple_to_profile(edev, t, &zero_addr);
+}
+
+static int
+qede_tc_parse_v4_common(struct qede_dev *edev,
+                       struct tc_cls_flower_offload *f,
+                       struct qede_arfs_tuple *t)
+{
+       if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_IPV4_ADDRS)) {
+               struct flow_dissector_key_ipv4_addrs *key, *mask;
+
+               key = skb_flow_dissector_target(f->dissector,
+                                               FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+                                               f->key);
+               mask = skb_flow_dissector_target(f->dissector,
+                                                FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+                                                f->mask);
+
+               if ((key->src && mask->src != U32_MAX) ||
+                   (key->dst && mask->dst != U32_MAX)) {
+                       DP_NOTICE(edev, "Do not support ipv4 prefix/masks\n");
+                       return -EINVAL;
+               }
+
+               t->src_ipv4 = key->src;
+               t->dst_ipv4 = key->dst;
+       }
+
+       if (qede_tc_parse_ports(edev, f, t))
+               return -EINVAL;
+
+       return qede_set_v4_tuple_to_profile(edev, t);
+}
+
+static int
+qede_tc_parse_tcp_v6(struct qede_dev *edev,
+                    struct tc_cls_flower_offload *f,
+                    struct qede_arfs_tuple *tuple)
+{
+       tuple->ip_proto = IPPROTO_TCP;
+       tuple->eth_proto = htons(ETH_P_IPV6);
+
+       return qede_tc_parse_v6_common(edev, f, tuple);
+}
+
+static int
+qede_tc_parse_tcp_v4(struct qede_dev *edev,
+                    struct tc_cls_flower_offload *f,
+                    struct qede_arfs_tuple *tuple)
+{
+       tuple->ip_proto = IPPROTO_TCP;
+       tuple->eth_proto = htons(ETH_P_IP);
+
+       return qede_tc_parse_v4_common(edev, f, tuple);
+}
+
+static int
+qede_tc_parse_udp_v6(struct qede_dev *edev,
+                    struct tc_cls_flower_offload *f,
+                    struct qede_arfs_tuple *tuple)
+{
+       tuple->ip_proto = IPPROTO_UDP;
+       tuple->eth_proto = htons(ETH_P_IPV6);
+
+       return qede_tc_parse_v6_common(edev, f, tuple);
+}
+
+static int
+qede_tc_parse_udp_v4(struct qede_dev *edev,
+                    struct tc_cls_flower_offload *f,
+                    struct qede_arfs_tuple *tuple)
+{
+       tuple->ip_proto = IPPROTO_UDP;
+       tuple->eth_proto = htons(ETH_P_IP);
+
+       return qede_tc_parse_v4_common(edev, f, tuple);
+}
+
+static int
+qede_parse_flower_attr(struct qede_dev *edev, __be16 proto,
+                      struct tc_cls_flower_offload *f,
+                      struct qede_arfs_tuple *tuple)
+{
+       int rc = -EINVAL;
+       u8 ip_proto = 0;
+
+       memset(tuple, 0, sizeof(*tuple));
+
+       if (f->dissector->used_keys &
+           ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
+             BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
+             BIT(FLOW_DISSECTOR_KEY_BASIC) |
+             BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
+             BIT(FLOW_DISSECTOR_KEY_PORTS))) {
+               DP_NOTICE(edev, "Unsupported key set:0x%x\n",
+                         f->dissector->used_keys);
+               return -EOPNOTSUPP;
+       }
+
+       if (proto != htons(ETH_P_IP) &&
+           proto != htons(ETH_P_IPV6)) {
+               DP_NOTICE(edev, "Unsupported proto=0x%x\n", proto);
+               return -EPROTONOSUPPORT;
+       }
+
+       if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
+               struct flow_dissector_key_basic *key;
+
+               key = skb_flow_dissector_target(f->dissector,
+                                               FLOW_DISSECTOR_KEY_BASIC,
+                                               f->key);
+               ip_proto = key->ip_proto;
+       }
+
+       if (ip_proto == IPPROTO_TCP && proto == htons(ETH_P_IP))
+               rc = qede_tc_parse_tcp_v4(edev, f, tuple);
+       else if (ip_proto == IPPROTO_TCP && proto == htons(ETH_P_IPV6))
+               rc = qede_tc_parse_tcp_v6(edev, f, tuple);
+       else if (ip_proto == IPPROTO_UDP && proto == htons(ETH_P_IP))
+               rc = qede_tc_parse_udp_v4(edev, f, tuple);
+       else if (ip_proto == IPPROTO_UDP && proto == htons(ETH_P_IPV6))
+               rc = qede_tc_parse_udp_v6(edev, f, tuple);
+       else
+               DP_NOTICE(edev, "Invalid tc protocol request\n");
+
+       return rc;
+}
+
+int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
+                           struct tc_cls_flower_offload *f)
+{
+       struct qede_arfs_fltr_node *n;
+       int min_hlen, rc = -EINVAL;
+       struct qede_arfs_tuple t;
+
+       __qede_lock(edev);
+
+       if (!edev->arfs) {
+               rc = -EPERM;
+               goto unlock;
+       }
+
+       /* parse flower attribute and prepare filter */
+       if (qede_parse_flower_attr(edev, proto, f, &t))
+               goto unlock;
+
+       /* Validate profile mode and number of filters */
+       if ((edev->arfs->filter_count && edev->arfs->mode != t.mode) ||
+           edev->arfs->filter_count == QEDE_RFS_MAX_FLTR) {
+               DP_NOTICE(edev,
+                         "Filter configuration invalidated, filter mode=0x%x, configured mode=0x%x, filter count=0x%x\n",
+                         t.mode, edev->arfs->mode, edev->arfs->filter_count);
+               goto unlock;
+       }
+
+       /* parse tc actions and get the vf_id */
+       if (qede_parse_actions(edev, f->exts))
+               goto unlock;
+
+       if (qede_flow_find_fltr(edev, &t)) {
+               rc = -EEXIST;
+               goto unlock;
+       }
+
+       n = kzalloc(sizeof(*n), GFP_KERNEL);
+       if (!n) {
+               rc = -ENOMEM;
+               goto unlock;
+       }
+
+       min_hlen = qede_flow_get_min_header_size(&t);
+
+       n->data = kzalloc(min_hlen, GFP_KERNEL);
+       if (!n->data) {
+               kfree(n);
+               rc = -ENOMEM;
+               goto unlock;
+       }
+
+       memcpy(&n->tuple, &t, sizeof(n->tuple));
+
+       n->buf_len = min_hlen;
+       n->b_is_drop = true;
+       n->sw_id = f->cookie;
+
+       n->tuple.build_hdr(&n->tuple, n->data);
+
+       rc = qede_enqueue_fltr_and_config_searcher(edev, n, 0);
+       if (rc)
+               goto unlock;
+
+       qede_configure_arfs_fltr(edev, n, n->rxq_id, true);
+       rc = qede_poll_arfs_filter_config(edev, n);
+
+unlock:
+       __qede_unlock(edev);
+       return rc;
+}
index 8c9e95ba99170b1dc0c397b03ca41251242ab149..1a78027de071f1bc2286f0a6d31288feb8b84584 100644 (file)
@@ -408,12 +408,12 @@ static void qede_xdp_tx_int(struct qede_dev *edev, struct qede_tx_queue *txq)
 
 static int qede_tx_int(struct qede_dev *edev, struct qede_tx_queue *txq)
 {
+       unsigned int pkts_compl = 0, bytes_compl = 0;
        struct netdev_queue *netdev_txq;
        u16 hw_bd_cons;
-       unsigned int pkts_compl = 0, bytes_compl = 0;
        int rc;
 
-       netdev_txq = netdev_get_tx_queue(edev->ndev, txq->index);
+       netdev_txq = netdev_get_tx_queue(edev->ndev, txq->ndev_txq_id);
 
        hw_bd_cons = le16_to_cpu(*txq->hw_cons_ptr);
        barrier();
@@ -1365,9 +1365,14 @@ static bool qede_poll_is_more_work(struct qede_fastpath *fp)
                if (qede_txq_has_work(fp->xdp_tx))
                        return true;
 
-       if (likely(fp->type & QEDE_FASTPATH_TX))
-               if (qede_txq_has_work(fp->txq))
-                       return true;
+       if (likely(fp->type & QEDE_FASTPATH_TX)) {
+               int cos;
+
+               for_each_cos_in_txq(fp->edev, cos) {
+                       if (qede_txq_has_work(&fp->txq[cos]))
+                               return true;
+               }
+       }
 
        return false;
 }
@@ -1382,8 +1387,14 @@ int qede_poll(struct napi_struct *napi, int budget)
        struct qede_dev *edev = fp->edev;
        int rx_work_done = 0;
 
-       if (likely(fp->type & QEDE_FASTPATH_TX) && qede_txq_has_work(fp->txq))
-               qede_tx_int(edev, fp->txq);
+       if (likely(fp->type & QEDE_FASTPATH_TX)) {
+               int cos;
+
+               for_each_cos_in_txq(fp->edev, cos) {
+                       if (qede_txq_has_work(&fp->txq[cos]))
+                               qede_tx_int(edev, &fp->txq[cos]);
+               }
+       }
 
        if ((fp->type & QEDE_FASTPATH_XDP) && qede_txq_has_work(fp->xdp_tx))
                qede_xdp_tx_int(edev, fp->xdp_tx);
@@ -1444,8 +1455,8 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 
        /* Get tx-queue context and netdev index */
        txq_index = skb_get_queue_mapping(skb);
-       WARN_ON(txq_index >= QEDE_TSS_COUNT(edev));
-       txq = edev->fp_array[edev->fp_num_rx + txq_index].txq;
+       WARN_ON(txq_index >= QEDE_TSS_COUNT(edev) * edev->dev_info.num_tc);
+       txq = QEDE_NDEV_TXQ_ID_TO_TXQ(edev, txq_index);
        netdev_txq = netdev_get_tx_queue(ndev, txq_index);
 
        WARN_ON(qed_chain_get_elem_left(&txq->tx_pbl) < (MAX_SKB_FRAGS + 1));
index 6a796040a32c47071e7bb677f24436ab930e7ed9..46d0f2eaa0c094c19ae47161f84320caeba2c567 100644 (file)
@@ -536,6 +536,97 @@ static int qede_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
        return 0;
 }
 
+static int qede_setup_tc(struct net_device *ndev, u8 num_tc)
+{
+       struct qede_dev *edev = netdev_priv(ndev);
+       int cos, count, offset;
+
+       if (num_tc > edev->dev_info.num_tc)
+               return -EINVAL;
+
+       netdev_reset_tc(ndev);
+       netdev_set_num_tc(ndev, num_tc);
+
+       for_each_cos_in_txq(edev, cos) {
+               count = QEDE_TSS_COUNT(edev);
+               offset = cos * QEDE_TSS_COUNT(edev);
+               netdev_set_tc_queue(ndev, cos, count, offset);
+       }
+
+       return 0;
+}
+
+static int
+qede_set_flower(struct qede_dev *edev, struct tc_cls_flower_offload *f,
+               __be16 proto)
+{
+       switch (f->command) {
+       case TC_CLSFLOWER_REPLACE:
+               return qede_add_tc_flower_fltr(edev, proto, f);
+       case TC_CLSFLOWER_DESTROY:
+               return qede_delete_flow_filter(edev, f->cookie);
+       default:
+               return -EOPNOTSUPP;
+       }
+}
+
+static int qede_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+                                 void *cb_priv)
+{
+       struct tc_cls_flower_offload *f;
+       struct qede_dev *edev = cb_priv;
+
+       if (!tc_cls_can_offload_and_chain0(edev->ndev, type_data))
+               return -EOPNOTSUPP;
+
+       switch (type) {
+       case TC_SETUP_CLSFLOWER:
+               f = type_data;
+               return qede_set_flower(edev, f, f->common.protocol);
+       default:
+               return -EOPNOTSUPP;
+       }
+}
+
+static int qede_setup_tc_block(struct qede_dev *edev,
+                              struct tc_block_offload *f)
+{
+       if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+               return -EOPNOTSUPP;
+
+       switch (f->command) {
+       case TC_BLOCK_BIND:
+               return tcf_block_cb_register(f->block,
+                                            qede_setup_tc_block_cb,
+                                            edev, edev, f->extack);
+       case TC_BLOCK_UNBIND:
+               tcf_block_cb_unregister(f->block, qede_setup_tc_block_cb, edev);
+               return 0;
+       default:
+               return -EOPNOTSUPP;
+       }
+}
+
+static int
+qede_setup_tc_offload(struct net_device *dev, enum tc_setup_type type,
+                     void *type_data)
+{
+       struct qede_dev *edev = netdev_priv(dev);
+       struct tc_mqprio_qopt *mqprio;
+
+       switch (type) {
+       case TC_SETUP_BLOCK:
+               return qede_setup_tc_block(edev, type_data);
+       case TC_SETUP_QDISC_MQPRIO:
+               mqprio = type_data;
+
+               mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;
+               return qede_setup_tc(dev, mqprio->num_tc);
+       default:
+               return -EOPNOTSUPP;
+       }
+}
+
 static const struct net_device_ops qede_netdev_ops = {
        .ndo_open = qede_open,
        .ndo_stop = qede_close,
@@ -568,6 +659,7 @@ static const struct net_device_ops qede_netdev_ops = {
 #ifdef CONFIG_RFS_ACCEL
        .ndo_rx_flow_steer = qede_rx_flow_steer,
 #endif
+       .ndo_setup_tc = qede_setup_tc_offload,
 };
 
 static const struct net_device_ops qede_netdev_vf_ops = {
@@ -621,7 +713,8 @@ static struct qede_dev *qede_alloc_etherdev(struct qed_dev *cdev,
        struct qede_dev *edev;
 
        ndev = alloc_etherdev_mqs(sizeof(*edev),
-                                 info->num_queues, info->num_queues);
+                                 info->num_queues * info->num_tc,
+                                 info->num_queues);
        if (!ndev) {
                pr_err("etherdev allocation failed\n");
                return NULL;
@@ -688,7 +781,7 @@ static void qede_init_ndev(struct qede_dev *edev)
        /* user-changeable features */
        hw_features = NETIF_F_GRO | NETIF_F_GRO_HW | NETIF_F_SG |
                      NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
-                     NETIF_F_TSO | NETIF_F_TSO6;
+                     NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_HW_TC;
 
        if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1)
                hw_features |= NETIF_F_NTUPLE;
@@ -830,7 +923,8 @@ static int qede_alloc_fp_array(struct qede_dev *edev)
                }
 
                if (fp->type & QEDE_FASTPATH_TX) {
-                       fp->txq = kzalloc(sizeof(*fp->txq), GFP_KERNEL);
+                       fp->txq = kcalloc(edev->dev_info.num_tc,
+                                         sizeof(*fp->txq), GFP_KERNEL);
                        if (!fp->txq)
                                goto err;
                }
@@ -879,10 +973,15 @@ static void qede_sp_task(struct work_struct *work)
 static void qede_update_pf_params(struct qed_dev *cdev)
 {
        struct qed_pf_params pf_params;
+       u16 num_cons;
 
        /* 64 rx + 64 tx + 64 XDP */
        memset(&pf_params, 0, sizeof(struct qed_pf_params));
-       pf_params.eth_pf_params.num_cons = (MAX_SB_PER_PF_MIMD - 1) * 3;
+
+       /* 1 rx + 1 xdp + max tx cos */
+       num_cons = QED_MIN_L2_CONS;
+
+       pf_params.eth_pf_params.num_cons = (MAX_SB_PER_PF_MIMD - 1) * num_cons;
 
        /* Same for VFs - make sure they'll have sufficient connections
         * to support XDP Tx queues.
@@ -1363,8 +1462,12 @@ static void qede_free_mem_fp(struct qede_dev *edev, struct qede_fastpath *fp)
        if (fp->type & QEDE_FASTPATH_XDP)
                qede_free_mem_txq(edev, fp->xdp_tx);
 
-       if (fp->type & QEDE_FASTPATH_TX)
-               qede_free_mem_txq(edev, fp->txq);
+       if (fp->type & QEDE_FASTPATH_TX) {
+               int cos;
+
+               for_each_cos_in_txq(edev, cos)
+                       qede_free_mem_txq(edev, &fp->txq[cos]);
+       }
 }
 
 /* This function allocates all memory needed for a single fp (i.e. an entity
@@ -1391,9 +1494,13 @@ static int qede_alloc_mem_fp(struct qede_dev *edev, struct qede_fastpath *fp)
        }
 
        if (fp->type & QEDE_FASTPATH_TX) {
-               rc = qede_alloc_mem_txq(edev, fp->txq);
-               if (rc)
-                       goto out;
+               int cos;
+
+               for_each_cos_in_txq(edev, cos) {
+                       rc = qede_alloc_mem_txq(edev, &fp->txq[cos]);
+                       if (rc)
+                               goto out;
+               }
        }
 
 out:
@@ -1466,10 +1573,23 @@ static void qede_init_fp(struct qede_dev *edev)
                }
 
                if (fp->type & QEDE_FASTPATH_TX) {
-                       fp->txq->index = txq_index++;
-                       if (edev->dev_info.is_legacy)
-                               fp->txq->is_legacy = 1;
-                       fp->txq->dev = &edev->pdev->dev;
+                       int cos;
+
+                       for_each_cos_in_txq(edev, cos) {
+                               struct qede_tx_queue *txq = &fp->txq[cos];
+                               u16 ndev_tx_id;
+
+                               txq->cos = cos;
+                               txq->index = txq_index;
+                               ndev_tx_id = QEDE_TXQ_TO_NDEV_TXQ_ID(edev, txq);
+                               txq->ndev_txq_id = ndev_tx_id;
+
+                               if (edev->dev_info.is_legacy)
+                                       txq->is_legacy = 1;
+                               txq->dev = &edev->pdev->dev;
+                       }
+
+                       txq_index++;
                }
 
                snprintf(fp->name, sizeof(fp->name), "%s-fp-%d",
@@ -1483,7 +1603,9 @@ static int qede_set_real_num_queues(struct qede_dev *edev)
 {
        int rc = 0;
 
-       rc = netif_set_real_num_tx_queues(edev->ndev, QEDE_TSS_COUNT(edev));
+       rc = netif_set_real_num_tx_queues(edev->ndev,
+                                         QEDE_TSS_COUNT(edev) *
+                                         edev->dev_info.num_tc);
        if (rc) {
                DP_NOTICE(edev, "Failed to set real number of Tx queues\n");
                return rc;
@@ -1685,9 +1807,13 @@ static int qede_stop_queues(struct qede_dev *edev)
                fp = &edev->fp_array[i];
 
                if (fp->type & QEDE_FASTPATH_TX) {
-                       rc = qede_drain_txq(edev, fp->txq, true);
-                       if (rc)
-                               return rc;
+                       int cos;
+
+                       for_each_cos_in_txq(edev, cos) {
+                               rc = qede_drain_txq(edev, &fp->txq[cos], true);
+                               if (rc)
+                                       return rc;
+                       }
                }
 
                if (fp->type & QEDE_FASTPATH_XDP) {
@@ -1703,9 +1829,13 @@ static int qede_stop_queues(struct qede_dev *edev)
 
                /* Stop the Tx Queue(s) */
                if (fp->type & QEDE_FASTPATH_TX) {
-                       rc = qede_stop_txq(edev, fp->txq, i);
-                       if (rc)
-                               return rc;
+                       int cos;
+
+                       for_each_cos_in_txq(edev, cos) {
+                               rc = qede_stop_txq(edev, &fp->txq[cos], i);
+                               if (rc)
+                                       return rc;
+                       }
                }
 
                /* Stop the Rx Queue */
@@ -1758,6 +1888,7 @@ static int qede_start_txq(struct qede_dev *edev,
 
        params.p_sb = fp->sb_info;
        params.sb_idx = sb_idx;
+       params.tc = txq->cos;
 
        rc = edev->ops->q_tx_start(edev->cdev, rss_id, &params, phys_table,
                                   page_cnt, &ret_params);
@@ -1877,9 +2008,14 @@ static int qede_start_queues(struct qede_dev *edev, bool clear_stats)
                }
 
                if (fp->type & QEDE_FASTPATH_TX) {
-                       rc = qede_start_txq(edev, fp, fp->txq, i, TX_PI(0));
-                       if (rc)
-                               goto out;
+                       int cos;
+
+                       for_each_cos_in_txq(edev, cos) {
+                               rc = qede_start_txq(edev, fp, &fp->txq[cos], i,
+                                                   TX_PI(cos));
+                               if (rc)
+                                       goto out;
+                       }
                }
        }
 
@@ -1973,6 +2109,7 @@ static int qede_load(struct qede_dev *edev, enum qede_load_mode mode,
                     bool is_locked)
 {
        struct qed_link_params link_params;
+       u8 num_tc;
        int rc;
 
        DP_INFO(edev, "Starting qede load\n");
@@ -2019,6 +2156,10 @@ static int qede_load(struct qede_dev *edev, enum qede_load_mode mode,
                goto err4;
        DP_INFO(edev, "Start VPORT, RXQ and TXQ succeeded\n");
 
+       num_tc = netdev_get_num_tc(edev->ndev);
+       num_tc = num_tc ? num_tc : edev->dev_info.num_tc;
+       qede_setup_tc(edev->ndev, num_tc);
+
        /* Program un-configured VLANs */
        qede_configure_vlan_filters(edev);
 
@@ -2143,7 +2284,7 @@ static bool qede_is_txq_full(struct qede_dev *edev, struct qede_tx_queue *txq)
 {
        struct netdev_queue *netdev_txq;
 
-       netdev_txq = netdev_get_tx_queue(edev->ndev, txq->index);
+       netdev_txq = netdev_get_tx_queue(edev->ndev, txq->ndev_txq_id);
        if (netif_xmit_stopped(netdev_txq))
                return true;
 
@@ -2208,9 +2349,11 @@ static void qede_get_eth_tlv_data(void *dev, void *data)
        for_each_queue(i) {
                fp = &edev->fp_array[i];
                if (fp->type & QEDE_FASTPATH_TX) {
-                       if (fp->txq->sw_tx_cons != fp->txq->sw_tx_prod)
+                       struct qede_tx_queue *txq = QEDE_FP_TC0_TXQ(fp);
+
+                       if (txq->sw_tx_cons != txq->sw_tx_prod)
                                etlv->txqs_empty = false;
-                       if (qede_is_txq_full(edev, fp->txq))
+                       if (qede_is_txq_full(edev, txq))
                                etlv->num_txqs_full++;
                }
                if (fp->type & QEDE_FASTPATH_RX) {
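
A note on the multi-CoS Tx indexing above: qede_setup_tc() places traffic class 'cos' at netdev txq offset cos * QEDE_TSS_COUNT(edev), and qede_init_fp()/qede_start_xmit() convert between (queue, cos) pairs and flat ids via QEDE_TXQ_TO_NDEV_TXQ_ID()/QEDE_NDEV_TXQ_ID_TO_TXQ(). A minimal sketch of that cos-major layout, assuming those macros follow the offsets passed to netdev_set_tc_queue(); the helper names below are illustrative, not the driver's:

#include <linux/types.h>

/* Traffic class 'cos' occupies netdev txq ids
 * [cos * num_txq, (cos + 1) * num_txq), where num_txq is the number of
 * hardware Tx queues per class (QEDE_TSS_COUNT() in the driver).
 */
static inline u16 example_ndev_txq_id(u16 num_txq, u16 queue, u8 cos)
{
	return cos * num_txq + queue;
}

/* Reverse mapping, as qede_start_xmit() needs after
 * skb_get_queue_mapping() returns a flat id.
 */
static inline void example_txq_and_cos(u16 num_txq, u16 ndev_txq_id,
				       u16 *queue, u8 *cos)
{
	*cos = ndev_txq_id / num_txq;
	*queue = ndev_txq_id % num_txq;
}
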
index 7848cf04b29a83f0a356b3eb5360d6a6e65871f0..822aa393c370e2c653c4c466af18bd27358e9189 100644 (file)
@@ -276,13 +276,6 @@ static const unsigned crb_hub_agt[64] = {
        0,
 };
 
-static const u32 msi_tgt_status[8] = {
-       ISR_INT_TARGET_STATUS, ISR_INT_TARGET_STATUS_F1,
-       ISR_INT_TARGET_STATUS_F2, ISR_INT_TARGET_STATUS_F3,
-       ISR_INT_TARGET_STATUS_F4, ISR_INT_TARGET_STATUS_F5,
-       ISR_INT_TARGET_STATUS_F6, ISR_INT_TARGET_STATUS_F7
-};
-
 /*  PCI Windowing for DDR regions.  */
 
 #define QLCNIC_PCIE_SEM_TIMEOUT        10000
index 8ea1fa36ca438e516a79bc20d066ad3e947a3ecd..0d9c3831838fe0dc366bfa9ab2105e83b493a1bd 100644 (file)
@@ -19,6 +19,7 @@
 #include <linux/if_vlan.h>
 #include <linux/crc32.h>
 #include <linux/in.h>
+#include <linux/io.h>
 #include <linux/ip.h>
 #include <linux/tcp.h>
 #include <linux/interrupt.h>
 #include <linux/ipv6.h>
 #include <net/ip6_checksum.h>
 
-#include <asm/io.h>
-#include <asm/irq.h>
-
-#define RTL8169_VERSION "2.3LK-NAPI"
 #define MODULENAME "r8169"
 
 #define FIRMWARE_8168D_1       "rtl_nic/rtl8168d-1.fw"
@@ -144,136 +141,70 @@ enum mac_version {
        RTL_GIGA_MAC_NONE   = 0xff,
 };
 
-enum rtl_tx_desc_version {
-       RTL_TD_0        = 0,
-       RTL_TD_1        = 1,
-};
-
 #define JUMBO_1K       ETH_DATA_LEN
 #define JUMBO_4K       (4*1024 - ETH_HLEN - 2)
 #define JUMBO_6K       (6*1024 - ETH_HLEN - 2)
 #define JUMBO_7K       (7*1024 - ETH_HLEN - 2)
 #define JUMBO_9K       (9*1024 - ETH_HLEN - 2)
 
-#define _R(NAME,TD,FW,SZ) {    \
-       .name = NAME,           \
-       .txd_version = TD,      \
-       .fw_name = FW,          \
-       .jumbo_max = SZ,        \
-}
-
 static const struct {
        const char *name;
-       enum rtl_tx_desc_version txd_version;
        const char *fw_name;
-       u16 jumbo_max;
 } rtl_chip_infos[] = {
        /* PCI devices. */
-       [RTL_GIGA_MAC_VER_01] =
-               _R("RTL8169",           RTL_TD_0, NULL, JUMBO_7K),
-       [RTL_GIGA_MAC_VER_02] =
-               _R("RTL8169s",          RTL_TD_0, NULL, JUMBO_7K),
-       [RTL_GIGA_MAC_VER_03] =
-               _R("RTL8110s",          RTL_TD_0, NULL, JUMBO_7K),
-       [RTL_GIGA_MAC_VER_04] =
-               _R("RTL8169sb/8110sb",  RTL_TD_0, NULL, JUMBO_7K),
-       [RTL_GIGA_MAC_VER_05] =
-               _R("RTL8169sc/8110sc",  RTL_TD_0, NULL, JUMBO_7K),
-       [RTL_GIGA_MAC_VER_06] =
-               _R("RTL8169sc/8110sc",  RTL_TD_0, NULL, JUMBO_7K),
+       [RTL_GIGA_MAC_VER_01] = {"RTL8169"                              },
+       [RTL_GIGA_MAC_VER_02] = {"RTL8169s"                             },
+       [RTL_GIGA_MAC_VER_03] = {"RTL8110s"                             },
+       [RTL_GIGA_MAC_VER_04] = {"RTL8169sb/8110sb"                     },
+       [RTL_GIGA_MAC_VER_05] = {"RTL8169sc/8110sc"                     },
+       [RTL_GIGA_MAC_VER_06] = {"RTL8169sc/8110sc"                     },
        /* PCI-E devices. */
-       [RTL_GIGA_MAC_VER_07] =
-               _R("RTL8102e",          RTL_TD_1, NULL, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_08] =
-               _R("RTL8102e",          RTL_TD_1, NULL, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_09] =
-               _R("RTL8102e",          RTL_TD_1, NULL, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_10] =
-               _R("RTL8101e",          RTL_TD_0, NULL, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_11] =
-               _R("RTL8168b/8111b",    RTL_TD_0, NULL, JUMBO_4K),
-       [RTL_GIGA_MAC_VER_12] =
-               _R("RTL8168b/8111b",    RTL_TD_0, NULL, JUMBO_4K),
-       [RTL_GIGA_MAC_VER_13] =
-               _R("RTL8101e",          RTL_TD_0, NULL, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_14] =
-               _R("RTL8100e",          RTL_TD_0, NULL, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_15] =
-               _R("RTL8100e",          RTL_TD_0, NULL, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_16] =
-               _R("RTL8101e",          RTL_TD_0, NULL, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_17] =
-               _R("RTL8168b/8111b",    RTL_TD_0, NULL, JUMBO_4K),
-       [RTL_GIGA_MAC_VER_18] =
-               _R("RTL8168cp/8111cp",  RTL_TD_1, NULL, JUMBO_6K),
-       [RTL_GIGA_MAC_VER_19] =
-               _R("RTL8168c/8111c",    RTL_TD_1, NULL, JUMBO_6K),
-       [RTL_GIGA_MAC_VER_20] =
-               _R("RTL8168c/8111c",    RTL_TD_1, NULL, JUMBO_6K),
-       [RTL_GIGA_MAC_VER_21] =
-               _R("RTL8168c/8111c",    RTL_TD_1, NULL, JUMBO_6K),
-       [RTL_GIGA_MAC_VER_22] =
-               _R("RTL8168c/8111c",    RTL_TD_1, NULL, JUMBO_6K),
-       [RTL_GIGA_MAC_VER_23] =
-               _R("RTL8168cp/8111cp",  RTL_TD_1, NULL, JUMBO_6K),
-       [RTL_GIGA_MAC_VER_24] =
-               _R("RTL8168cp/8111cp",  RTL_TD_1, NULL, JUMBO_6K),
-       [RTL_GIGA_MAC_VER_25] =
-               _R("RTL8168d/8111d",    RTL_TD_1, FIRMWARE_8168D_1, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_26] =
-               _R("RTL8168d/8111d",    RTL_TD_1, FIRMWARE_8168D_2, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_27] =
-               _R("RTL8168dp/8111dp",  RTL_TD_1, NULL, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_28] =
-               _R("RTL8168dp/8111dp",  RTL_TD_1, NULL, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_29] =
-               _R("RTL8105e",          RTL_TD_1, FIRMWARE_8105E_1, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_30] =
-               _R("RTL8105e",          RTL_TD_1, FIRMWARE_8105E_1, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_31] =
-               _R("RTL8168dp/8111dp",  RTL_TD_1, NULL, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_32] =
-               _R("RTL8168e/8111e",    RTL_TD_1, FIRMWARE_8168E_1, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_33] =
-               _R("RTL8168e/8111e",    RTL_TD_1, FIRMWARE_8168E_2, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_34] =
-               _R("RTL8168evl/8111evl",RTL_TD_1, FIRMWARE_8168E_3, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_35] =
-               _R("RTL8168f/8111f",    RTL_TD_1, FIRMWARE_8168F_1, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_36] =
-               _R("RTL8168f/8111f",    RTL_TD_1, FIRMWARE_8168F_2, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_37] =
-               _R("RTL8402",           RTL_TD_1, FIRMWARE_8402_1,  JUMBO_1K),
-       [RTL_GIGA_MAC_VER_38] =
-               _R("RTL8411",           RTL_TD_1, FIRMWARE_8411_1,  JUMBO_9K),
-       [RTL_GIGA_MAC_VER_39] =
-               _R("RTL8106e",          RTL_TD_1, FIRMWARE_8106E_1, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_40] =
-               _R("RTL8168g/8111g",    RTL_TD_1, FIRMWARE_8168G_2, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_41] =
-               _R("RTL8168g/8111g",    RTL_TD_1, NULL, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_42] =
-               _R("RTL8168g/8111g",    RTL_TD_1, FIRMWARE_8168G_3, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_43] =
-               _R("RTL8106e",          RTL_TD_1, FIRMWARE_8106E_2, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_44] =
-               _R("RTL8411",           RTL_TD_1, FIRMWARE_8411_2,  JUMBO_9K),
-       [RTL_GIGA_MAC_VER_45] =
-               _R("RTL8168h/8111h",    RTL_TD_1, FIRMWARE_8168H_1, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_46] =
-               _R("RTL8168h/8111h",    RTL_TD_1, FIRMWARE_8168H_2, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_47] =
-               _R("RTL8107e",          RTL_TD_1, FIRMWARE_8107E_1, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_48] =
-               _R("RTL8107e",          RTL_TD_1, FIRMWARE_8107E_2, JUMBO_1K),
-       [RTL_GIGA_MAC_VER_49] =
-               _R("RTL8168ep/8111ep",  RTL_TD_1, NULL, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_50] =
-               _R("RTL8168ep/8111ep",  RTL_TD_1, NULL, JUMBO_9K),
-       [RTL_GIGA_MAC_VER_51] =
-               _R("RTL8168ep/8111ep",  RTL_TD_1, NULL, JUMBO_9K),
+       [RTL_GIGA_MAC_VER_07] = {"RTL8102e"                             },
+       [RTL_GIGA_MAC_VER_08] = {"RTL8102e"                             },
+       [RTL_GIGA_MAC_VER_09] = {"RTL8102e"                             },
+       [RTL_GIGA_MAC_VER_10] = {"RTL8101e"                             },
+       [RTL_GIGA_MAC_VER_11] = {"RTL8168b/8111b"                       },
+       [RTL_GIGA_MAC_VER_12] = {"RTL8168b/8111b"                       },
+       [RTL_GIGA_MAC_VER_13] = {"RTL8101e"                             },
+       [RTL_GIGA_MAC_VER_14] = {"RTL8100e"                             },
+       [RTL_GIGA_MAC_VER_15] = {"RTL8100e"                             },
+       [RTL_GIGA_MAC_VER_16] = {"RTL8101e"                             },
+       [RTL_GIGA_MAC_VER_17] = {"RTL8168b/8111b"                       },
+       [RTL_GIGA_MAC_VER_18] = {"RTL8168cp/8111cp"                     },
+       [RTL_GIGA_MAC_VER_19] = {"RTL8168c/8111c"                       },
+       [RTL_GIGA_MAC_VER_20] = {"RTL8168c/8111c"                       },
+       [RTL_GIGA_MAC_VER_21] = {"RTL8168c/8111c"                       },
+       [RTL_GIGA_MAC_VER_22] = {"RTL8168c/8111c"                       },
+       [RTL_GIGA_MAC_VER_23] = {"RTL8168cp/8111cp"                     },
+       [RTL_GIGA_MAC_VER_24] = {"RTL8168cp/8111cp"                     },
+       [RTL_GIGA_MAC_VER_25] = {"RTL8168d/8111d",      FIRMWARE_8168D_1},
+       [RTL_GIGA_MAC_VER_26] = {"RTL8168d/8111d",      FIRMWARE_8168D_2},
+       [RTL_GIGA_MAC_VER_27] = {"RTL8168dp/8111dp"                     },
+       [RTL_GIGA_MAC_VER_28] = {"RTL8168dp/8111dp"                     },
+       [RTL_GIGA_MAC_VER_29] = {"RTL8105e",            FIRMWARE_8105E_1},
+       [RTL_GIGA_MAC_VER_30] = {"RTL8105e",            FIRMWARE_8105E_1},
+       [RTL_GIGA_MAC_VER_31] = {"RTL8168dp/8111dp"                     },
+       [RTL_GIGA_MAC_VER_32] = {"RTL8168e/8111e",      FIRMWARE_8168E_1},
+       [RTL_GIGA_MAC_VER_33] = {"RTL8168e/8111e",      FIRMWARE_8168E_2},
+       [RTL_GIGA_MAC_VER_34] = {"RTL8168evl/8111evl",  FIRMWARE_8168E_3},
+       [RTL_GIGA_MAC_VER_35] = {"RTL8168f/8111f",      FIRMWARE_8168F_1},
+       [RTL_GIGA_MAC_VER_36] = {"RTL8168f/8111f",      FIRMWARE_8168F_2},
+       [RTL_GIGA_MAC_VER_37] = {"RTL8402",             FIRMWARE_8402_1 },
+       [RTL_GIGA_MAC_VER_38] = {"RTL8411",             FIRMWARE_8411_1 },
+       [RTL_GIGA_MAC_VER_39] = {"RTL8106e",            FIRMWARE_8106E_1},
+       [RTL_GIGA_MAC_VER_40] = {"RTL8168g/8111g",      FIRMWARE_8168G_2},
+       [RTL_GIGA_MAC_VER_41] = {"RTL8168g/8111g"                       },
+       [RTL_GIGA_MAC_VER_42] = {"RTL8168g/8111g",      FIRMWARE_8168G_3},
+       [RTL_GIGA_MAC_VER_43] = {"RTL8106e",            FIRMWARE_8106E_2},
+       [RTL_GIGA_MAC_VER_44] = {"RTL8411",             FIRMWARE_8411_2 },
+       [RTL_GIGA_MAC_VER_45] = {"RTL8168h/8111h",      FIRMWARE_8168H_1},
+       [RTL_GIGA_MAC_VER_46] = {"RTL8168h/8111h",      FIRMWARE_8168H_2},
+       [RTL_GIGA_MAC_VER_47] = {"RTL8107e",            FIRMWARE_8107E_1},
+       [RTL_GIGA_MAC_VER_48] = {"RTL8107e",            FIRMWARE_8107E_2},
+       [RTL_GIGA_MAC_VER_49] = {"RTL8168ep/8111ep"                     },
+       [RTL_GIGA_MAC_VER_50] = {"RTL8168ep/8111ep"                     },
+       [RTL_GIGA_MAC_VER_51] = {"RTL8168ep/8111ep"                     },
 };
-#undef _R
 
 enum cfg_version {
        RTL_CFG_0 = 0x00,
@@ -784,7 +715,6 @@ MODULE_PARM_DESC(use_dac, "Enable PCI DAC. Unsafe on 32 bit PCI slot.");
 module_param_named(debug, debug.msg_enable, int, 0);
 MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., 16=all)");
 MODULE_LICENSE("GPL");
-MODULE_VERSION(RTL8169_VERSION);
 MODULE_FIRMWARE(FIRMWARE_8168D_1);
 MODULE_FIRMWARE(FIRMWARE_8168D_2);
 MODULE_FIRMWARE(FIRMWARE_8168E_1);
@@ -1635,7 +1565,6 @@ static void rtl8169_get_drvinfo(struct net_device *dev,
        struct rtl_fw *rtl_fw = tp->rtl_fw;
 
        strlcpy(info->driver, MODULENAME, sizeof(info->driver));
-       strlcpy(info->version, RTL8169_VERSION, sizeof(info->version));
        strlcpy(info->bus_info, pci_name(tp->pci_dev), sizeof(info->bus_info));
        BUILD_BUG_ON(sizeof(info->fw_version) < sizeof(rtl_fw->version));
        if (!IS_ERR_OR_NULL(rtl_fw))
@@ -6525,20 +6454,15 @@ static int rtl_rx(struct net_device *dev, struct rtl8169_private *tp, u32 budget
 static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
 {
        struct rtl8169_private *tp = dev_instance;
-       int handled = 0;
-       u16 status;
+       u16 status = rtl_get_events(tp);
 
-       status = rtl_get_events(tp);
-       if (status && status != 0xffff) {
-               status &= RTL_EVENT_NAPI | tp->event_slow;
-               if (status) {
-                       handled = 1;
+       if (status == 0xffff || !(status & (RTL_EVENT_NAPI | tp->event_slow)))
+               return IRQ_NONE;
 
-                       rtl_irq_disable(tp);
-                       napi_schedule_irqoff(&tp->napi);
-               }
-       }
-       return IRQ_RETVAL(handled);
+       rtl_irq_disable(tp);
+       napi_schedule_irqoff(&tp->napi);
+
+       return IRQ_HANDLED;
 }
 
 /*
@@ -7152,6 +7076,11 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
                RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
                RTL_W8(tp, Cfg9346, Cfg9346_Lock);
                flags = PCI_IRQ_LEGACY;
+       } else if (tp->mac_version == RTL_GIGA_MAC_VER_40) {
+               /* This version was reported to have issues with resume
+                * from suspend when using MSI-X
+                */
+               flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
        } else {
                flags = PCI_IRQ_ALL_TYPES;
        }
@@ -7284,18 +7213,48 @@ static void rtl_hw_initialize(struct rtl8169_private *tp)
        }
 }
 
+/* Versions RTL8102e and from RTL8168c onwards support csum_v2 */
+static bool rtl_chip_supports_csum_v2(struct rtl8169_private *tp)
+{
+       switch (tp->mac_version) {
+       case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
+       case RTL_GIGA_MAC_VER_10 ... RTL_GIGA_MAC_VER_17:
+               return false;
+       default:
+               return true;
+       }
+}
+
+static int rtl_jumbo_max(struct rtl8169_private *tp)
+{
+       /* Non-GBit versions don't support jumbo frames */
+       if (!tp->supports_gmii)
+               return JUMBO_1K;
+
+       switch (tp->mac_version) {
+       /* RTL8169 */
+       case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
+               return JUMBO_7K;
+       /* RTL8168b */
+       case RTL_GIGA_MAC_VER_11:
+       case RTL_GIGA_MAC_VER_12:
+       case RTL_GIGA_MAC_VER_17:
+               return JUMBO_4K;
+       /* RTL8168c */
+       case RTL_GIGA_MAC_VER_18 ... RTL_GIGA_MAC_VER_24:
+               return JUMBO_6K;
+       default:
+               return JUMBO_9K;
+       }
+}
+
 static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
        const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data;
        struct rtl8169_private *tp;
        struct net_device *dev;
        int chipset, region, i;
-       int rc;
-
-       if (netif_msg_drv(&debug)) {
-               printk(KERN_INFO "%s Gigabit Ethernet driver %s loaded\n",
-                      MODULENAME, RTL8169_VERSION);
-       }
+       int jumbo_max, rc;
 
        dev = devm_alloc_etherdev(&pdev->dev, sizeof (*tp));
        if (!dev)
@@ -7446,16 +7405,11 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
                /* Disallow toggling */
                dev->hw_features &= ~NETIF_F_HW_VLAN_CTAG_RX;
 
-       switch (rtl_chip_infos[chipset].txd_version) {
-       case RTL_TD_0:
-               tp->tso_csum = rtl8169_tso_csum_v1;
-               break;
-       case RTL_TD_1:
+       if (rtl_chip_supports_csum_v2(tp)) {
                tp->tso_csum = rtl8169_tso_csum_v2;
                dev->hw_features |= NETIF_F_IPV6_CSUM | NETIF_F_TSO6;
-               break;
-       default:
-               WARN_ON_ONCE(1);
+       } else {
+               tp->tso_csum = rtl8169_tso_csum_v1;
        }
 
        dev->hw_features |= NETIF_F_RXALL;
@@ -7463,7 +7417,8 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 
        /* MTU range: 60 - hw-specific max */
        dev->min_mtu = ETH_ZLEN;
-       dev->max_mtu = rtl_chip_infos[chipset].jumbo_max;
+       jumbo_max = rtl_jumbo_max(tp);
+       dev->max_mtu = jumbo_max;
 
        tp->hw_start = cfg->hw_start;
        tp->event_slow = cfg->event_slow;
@@ -7494,12 +7449,12 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
                   rtl_chip_infos[chipset].name, dev->dev_addr,
                   (u32)(RTL_R32(tp, TxConfig) & 0xfcf0f8ff),
                   pci_irq_vector(pdev, 0));
-       if (rtl_chip_infos[chipset].jumbo_max != JUMBO_1K) {
-               netif_info(tp, probe, dev, "jumbo features [frames: %d bytes, "
-                          "tx checksumming: %s]\n",
-                          rtl_chip_infos[chipset].jumbo_max,
-                         tp->mac_version <= RTL_GIGA_MAC_VER_06 ? "ok" : "ko");
-       }
+
+       if (jumbo_max > JUMBO_1K)
+               netif_info(tp, probe, dev,
+                          "jumbo features [frames: %d bytes, tx checksumming: %s]\n",
+                          jumbo_max, tp->mac_version <= RTL_GIGA_MAC_VER_06 ?
+                          "ok" : "ko");
 
        if (r8168_check_dash(tp))
                rtl8168_driver_start(tp);
index 01589b6982e4d433d4936a4bb6fa2c2510d278c0..7aa5ebb6766cb23eb2553b0af3d699f8d48c69d3 100644 (file)
 #define NETSEC_EEPROM_PKT_ME_ADDRESS           0x20
 #define NETSEC_EEPROM_PKT_ME_SIZE              0x24
 
-#define DESC_NUM       128
-#define NAPI_BUDGET    (DESC_NUM / 2)
+#define DESC_NUM       256
 
 #define DESC_SZ        sizeof(struct netsec_de)
 
@@ -642,8 +641,6 @@ static struct sk_buff *netsec_get_rx_pkt_data(struct netsec_priv *priv,
 
        tmp_skb = netsec_alloc_skb(priv, &td);
 
-       dma_rmb();
-
        tail = dring->tail;
 
        if (!tmp_skb) {
@@ -657,8 +654,6 @@ static struct sk_buff *netsec_get_rx_pkt_data(struct netsec_priv *priv,
        /* move tail ahead */
        dring->tail = (dring->tail + 1) % DESC_NUM;
 
-       dring->pkt_cnt--;
-
        return skb;
 }
 
@@ -731,25 +726,24 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
        struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
        struct net_device *ndev = priv->ndev;
        struct netsec_rx_pkt_info rx_info;
-       int done = 0, rx_num = 0;
+       int done = 0;
        struct netsec_desc desc;
        struct sk_buff *skb;
        u16 len;
 
        while (done < budget) {
-               if (!rx_num) {
-                       rx_num = netsec_read(priv, NETSEC_REG_NRM_RX_PKTCNT);
-                       dring->pkt_cnt += rx_num;
+               u16 idx = dring->tail;
+               struct netsec_de *de = dring->vaddr + (DESC_SZ * idx);
 
-                       /* move head 'rx_num' */
-                       dring->head = (dring->head + rx_num) % DESC_NUM;
+               if (de->attr & (1U << NETSEC_RX_PKT_OWN_FIELD))
+                       break;
 
-                       rx_num = dring->pkt_cnt;
-                       if (!rx_num)
-                               break;
-               }
+               /* This barrier is needed to keep us from reading
+                * any other fields out of the netsec_de until we have
+                * verified the descriptor has been written back
+                */
+               dma_rmb();
                done++;
-               rx_num--;
                skb = netsec_get_rx_pkt_data(priv, &rx_info, &desc, &len);
                if (unlikely(!skb) || rx_info.err_flag) {
                        netif_err(priv, drv, priv->ndev,
@@ -1664,7 +1658,7 @@ static int netsec_probe(struct platform_device *pdev)
        dev_info(&pdev->dev, "hardware revision %d.%d\n",
                 hw_ver >> 16, hw_ver & 0xffff);
 
-       netif_napi_add(ndev, &priv->napi, netsec_napi_poll, NAPI_BUDGET);
+       netif_napi_add(ndev, &priv->napi, netsec_napi_poll, NAPI_POLL_WEIGHT);
 
        ndev->netdev_ops = &netsec_netdev_ops;
        ndev->ethtool_ops = &netsec_ethtool_ops;
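
The netsec_process_rx() rework above stops trusting NETSEC_REG_NRM_RX_PKTCNT and instead polls the descriptor's ownership bit, with dma_rmb() ordering that check before any other descriptor reads. A short standalone sketch of the same pattern, under assumed names (the descriptor layout and OWN bit position here are illustrative, not netsec's):

#include <linux/bits.h>
#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/barrier.h>

struct example_rx_desc {
	u32 attr;		/* OWN bit set while hardware owns the slot */
	u32 len;
	u64 buf_addr;
};

#define EXAMPLE_DESC_OWN	BIT(31)		/* illustrative bit position */

static bool example_rx_desc_done(const struct example_rx_desc *de)
{
	if (READ_ONCE(de->attr) & EXAMPLE_DESC_OWN)
		return false;	/* not written back yet, read nothing else */

	/* Order the ownership check before reads of len/buf_addr, the
	 * same role dma_rmb() plays in netsec_process_rx() above.
	 */
	dma_rmb();
	return true;
}
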
index f051ce35a440aa38441094de7b14b5da2ab065d7..832bce07c38519fff846f650dc32a0ea2056b735 100644 (file)
@@ -2358,14 +2358,16 @@ static int cpsw_ndo_vlan_rx_add_vid(struct net_device *ndev,
                int i;
 
                for (i = 0; i < cpsw->data.slaves; i++) {
-                       if (vid == cpsw->slaves[i].port_vlan)
-                               return -EINVAL;
+                       if (vid == cpsw->slaves[i].port_vlan) {
+                               ret = -EINVAL;
+                               goto err;
+                       }
                }
        }
 
        dev_info(priv->dev, "Adding vlanid %d to vlan filter\n", vid);
        ret = cpsw_add_vlan_ale_entry(priv, vid);
-
+err:
        pm_runtime_put(cpsw->dev);
        return ret;
 }
@@ -2391,22 +2393,17 @@ static int cpsw_ndo_vlan_rx_kill_vid(struct net_device *ndev,
 
                for (i = 0; i < cpsw->data.slaves; i++) {
                        if (vid == cpsw->slaves[i].port_vlan)
-                               return -EINVAL;
+                               goto err;
                }
        }
 
        dev_info(priv->dev, "removing vlanid %d from vlan filter\n", vid);
        ret = cpsw_ale_del_vlan(cpsw->ale, vid, 0);
-       if (ret != 0)
-               return ret;
-
-       ret = cpsw_ale_del_ucast(cpsw->ale, priv->mac_addr,
-                                HOST_PORT_NUM, ALE_VLAN, vid);
-       if (ret != 0)
-               return ret;
-
-       ret = cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
-                                0, ALE_VLAN, vid);
+       ret |= cpsw_ale_del_ucast(cpsw->ale, priv->mac_addr,
+                                 HOST_PORT_NUM, ALE_VLAN, vid);
+       ret |= cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
+                                 0, ALE_VLAN, vid);
+err:
        pm_runtime_put(cpsw->dev);
        return ret;
 }
index 93dc05c194d381b51336e95d552180a6997a5084..5766225a4ce117957cdd77fe2a9d4ad4abc8f150 100644 (file)
@@ -394,7 +394,7 @@ int cpsw_ale_del_mcast(struct cpsw_ale *ale, u8 *addr, int port_mask,
 
        idx = cpsw_ale_match_addr(ale, addr, (flags & ALE_VLAN) ? vid : 0);
        if (idx < 0)
-               return -EINVAL;
+               return -ENOENT;
 
        cpsw_ale_read(ale, idx, ale_entry);
 
index 79e9b103188b4d7dc0f2a256bdf8ba885035cca9..29aa8d772b0c1050b145a8d5fddcfa132bda9f97 100644 (file)
@@ -757,13 +757,16 @@ static int decode_evnt(struct dp83640_private *dp83640,
 
        phy_txts = data;
 
-       switch (words) { /* fall through in every case */
+       switch (words) {
        case 3:
                dp83640->edata.sec_hi = phy_txts->sec_hi;
+               /* fall through */
        case 2:
                dp83640->edata.sec_lo = phy_txts->sec_lo;
+               /* fall through */
        case 1:
                dp83640->edata.ns_hi = phy_txts->ns_hi;
+               /* fall through */
        case 0:
                dp83640->edata.ns_lo = phy_txts->ns_lo;
        }
index 5661226cf75b129b15877747ced78be732240779..4637d980310e1503fc314004d18906ee2c1ac50f 100644 (file)
@@ -60,6 +60,69 @@ enum {
        SFP_S_TX_DISABLE,
 };
 
+static const char * const mod_state_strings[] = {
+       [SFP_MOD_EMPTY] = "empty",
+       [SFP_MOD_PROBE] = "probe",
+       [SFP_MOD_HPOWER] = "hpower",
+       [SFP_MOD_PRESENT] = "present",
+       [SFP_MOD_ERROR] = "error",
+};
+
+static const char *mod_state_to_str(unsigned short mod_state)
+{
+       if (mod_state >= ARRAY_SIZE(mod_state_strings))
+               return "Unknown module state";
+       return mod_state_strings[mod_state];
+}
+
+static const char * const dev_state_strings[] = {
+       [SFP_DEV_DOWN] = "down",
+       [SFP_DEV_UP] = "up",
+};
+
+static const char *dev_state_to_str(unsigned short dev_state)
+{
+       if (dev_state >= ARRAY_SIZE(dev_state_strings))
+               return "Unknown device state";
+       return dev_state_strings[dev_state];
+}
+
+static const char * const event_strings[] = {
+       [SFP_E_INSERT] = "insert",
+       [SFP_E_REMOVE] = "remove",
+       [SFP_E_DEV_DOWN] = "dev_down",
+       [SFP_E_DEV_UP] = "dev_up",
+       [SFP_E_TX_FAULT] = "tx_fault",
+       [SFP_E_TX_CLEAR] = "tx_clear",
+       [SFP_E_LOS_HIGH] = "los_high",
+       [SFP_E_LOS_LOW] = "los_low",
+       [SFP_E_TIMEOUT] = "timeout",
+};
+
+static const char *event_to_str(unsigned short event)
+{
+       if (event >= ARRAY_SIZE(event_strings))
+               return "Unknown event";
+       return event_strings[event];
+}
+
+static const char * const sm_state_strings[] = {
+       [SFP_S_DOWN] = "down",
+       [SFP_S_INIT] = "init",
+       [SFP_S_WAIT_LOS] = "wait_los",
+       [SFP_S_LINK_UP] = "link_up",
+       [SFP_S_TX_FAULT] = "tx_fault",
+       [SFP_S_REINIT] = "reinit",
+       [SFP_S_TX_DISABLE] = "tx_disable",
+};
+
+static const char *sm_state_to_str(unsigned short sm_state)
+{
+       if (sm_state >= ARRAY_SIZE(sm_state_strings))
+               return "Unknown state";
+       return sm_state_strings[sm_state];
+}
+
 static const char *gpio_of_names[] = {
        "mod-def0",
        "los",
@@ -1388,8 +1451,11 @@ static void sfp_sm_event(struct sfp *sfp, unsigned int event)
 {
        mutex_lock(&sfp->sm_mutex);
 
-       dev_dbg(sfp->dev, "SM: enter %u:%u:%u event %u\n",
-               sfp->sm_mod_state, sfp->sm_dev_state, sfp->sm_state, event);
+       dev_dbg(sfp->dev, "SM: enter %s:%s:%s event %s\n",
+               mod_state_to_str(sfp->sm_mod_state),
+               dev_state_to_str(sfp->sm_dev_state),
+               sm_state_to_str(sfp->sm_state),
+               event_to_str(event));
 
        /* This state machine tracks the insert/remove state of
         * the module, and handles probing the on-board EEPROM.
@@ -1520,8 +1586,10 @@ static void sfp_sm_event(struct sfp *sfp, unsigned int event)
                break;
        }
 
-       dev_dbg(sfp->dev, "SM: exit %u:%u:%u\n",
-               sfp->sm_mod_state, sfp->sm_dev_state, sfp->sm_state);
+       dev_dbg(sfp->dev, "SM: exit %s:%s:%s\n",
+               mod_state_to_str(sfp->sm_mod_state),
+               dev_state_to_str(sfp->sm_dev_state),
+               sm_state_to_str(sfp->sm_state));
 
        mutex_unlock(&sfp->sm_mutex);
 }
index 124211afb023fc8d729a8208aa43099b2b171690..97742708460bc8a415713537ff90060c07b07d27 100644 (file)
@@ -4415,7 +4415,6 @@ static int rtl8152_runtime_suspend(struct r8152 *tp)
 static int rtl8152_system_suspend(struct r8152 *tp)
 {
        struct net_device *netdev = tp->netdev;
-       int ret = 0;
 
        netif_device_detach(netdev);
 
@@ -4430,7 +4429,7 @@ static int rtl8152_system_suspend(struct r8152 *tp)
                napi_enable(napi);
        }
 
-       return ret;
+       return 0;
 }
 
 static int rtl8152_suspend(struct usb_interface *intf, pm_message_t message)
index 62311dde6e71d48cd929961c14a926dab5592cd1..76592090522607a4bdf5422b1d49ec99c6fd68ac 100644 (file)
@@ -30,7 +30,7 @@
 #include <linux/cpu.h>
 #include <linux/average.h>
 #include <linux/filter.h>
-#include <linux/netdevice.h>
+#include <linux/kernel.h>
 #include <linux/pci.h>
 #include <net/route.h>
 #include <net/xdp.h>
@@ -1878,8 +1878,8 @@ static void virtnet_clean_affinity(struct virtnet_info *vi, long hcpu)
 
        if (vi->affinity_hint_set) {
                for (i = 0; i < vi->max_queue_pairs; i++) {
-                       virtqueue_set_affinity(vi->rq[i].vq, -1);
-                       virtqueue_set_affinity(vi->sq[i].vq, -1);
+                       virtqueue_set_affinity(vi->rq[i].vq, NULL);
+                       virtqueue_set_affinity(vi->sq[i].vq, NULL);
                }
 
                vi->affinity_hint_set = false;
@@ -1888,28 +1888,41 @@ static void virtnet_clean_affinity(struct virtnet_info *vi, long hcpu)
 
 static void virtnet_set_affinity(struct virtnet_info *vi)
 {
-       int i;
-       int cpu;
+       cpumask_var_t mask;
+       int stragglers;
+       int group_size;
+       int i, j, cpu;
+       int num_cpu;
+       int stride;
 
-       /* In multiqueue mode, when the number of cpu is equal to the number of
-        * queue pairs, we let the queue pairs to be private to one cpu by
-        * setting the affinity hint to eliminate the contention.
-        */
-       if (vi->curr_queue_pairs == 1 ||
-           vi->max_queue_pairs != num_online_cpus()) {
+       if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) {
                virtnet_clean_affinity(vi, -1);
                return;
        }
 
-       i = 0;
-       for_each_online_cpu(cpu) {
-               virtqueue_set_affinity(vi->rq[i].vq, cpu);
-               virtqueue_set_affinity(vi->sq[i].vq, cpu);
-               netif_set_xps_queue(vi->dev, cpumask_of(cpu), i);
-               i++;
+       num_cpu = num_online_cpus();
+       stride = max_t(int, num_cpu / vi->curr_queue_pairs, 1);
+       stragglers = num_cpu >= vi->curr_queue_pairs ?
+                       num_cpu % vi->curr_queue_pairs :
+                       0;
+       cpu = cpumask_next(-1, cpu_online_mask);
+
+       for (i = 0; i < vi->curr_queue_pairs; i++) {
+               group_size = stride + (i < stragglers ? 1 : 0);
+
+               for (j = 0; j < group_size; j++) {
+                       cpumask_set_cpu(cpu, mask);
+                       cpu = cpumask_next_wrap(cpu, cpu_online_mask,
+                                               nr_cpu_ids, false);
+               }
+               virtqueue_set_affinity(vi->rq[i].vq, mask);
+               virtqueue_set_affinity(vi->sq[i].vq, mask);
+               __netif_set_xps_queue(vi->dev, cpumask_bits(mask), i, false);
+               cpumask_clear(mask);
        }
 
        vi->affinity_hint_set = true;
+       free_cpumask_var(mask);
 }
 
 static int virtnet_cpu_online(unsigned int cpu, struct hlist_node *node)
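
The new virtnet_set_affinity() spreads all online CPUs across the current queue pairs instead of bailing out unless the counts match exactly: each queue gets 'stride' CPUs, the first 'stragglers' queues get one extra, and cpumask_next_wrap() walks the online mask. The arithmetic is easier to see outside the diff; a small sketch with illustrative names, not the driver's:

#include <linux/kernel.h>	/* max_t() */

/* Group size for queue 'i' when spreading num_cpu CPUs over num_queues
 * queue pairs. Example: 8 CPUs over 3 queue pairs gives stride = 2,
 * stragglers = 2, so groups of 3, 3 and 2 CPUs.
 */
static int example_cpu_group_size(int num_cpu, int num_queues, int i)
{
	int stride = max_t(int, num_cpu / num_queues, 1);
	int stragglers = num_cpu >= num_queues ? num_cpu % num_queues : 0;

	return stride + (i < stragglers ? 1 : 0);
}
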
index 502c346aa790bf4f79218d816420f9997b69df3e..529ebca1e9e13eb94d4574a29172ec311e614e58 100644 (file)
@@ -130,12 +130,12 @@ ssize_t i2400mu_tx_bulk_out(struct i2400mu *i2400mu, void *buf, size_t buf_size)
                        dev_err(dev, "BM-CMD: too many stalls in "
                                "URB; resetting device\n");
                        usb_queue_reset_device(i2400mu->usb_iface);
-                       /* fallthrough */
                } else {
                        usb_clear_halt(i2400mu->usb_dev, pipe);
                        msleep(10);     /* give the device some time */
                        goto retry;
                }
+               /* fall through */
        case -EINVAL:                   /* while removing driver */
        case -ENODEV:                   /* dev disconnect ... */
        case -ENOENT:                   /* just ignore it */
index 99ef81b3d5a515d6cc4ae6f176514ceca3e2942d..3a0e7226768ade6ee33081ee04d6f90de734c2a2 100644 (file)
@@ -131,12 +131,12 @@ int i2400mu_tx(struct i2400mu *i2400mu, struct i2400m_msg_hdr *tx_msg,
                        dev_err(dev, "BM-CMD: too many stalls in "
                                "URB; resetting device\n");
                        usb_queue_reset_device(i2400mu->usb_iface);
-                       /* fallthrough */
                } else {
                        usb_clear_halt(i2400mu->usb_dev, usb_pipe);
                        msleep(10);     /* give the device some time */
                        goto retry;
                }
+               /* fall through */
        case -EINVAL:                   /* while removing driver */
        case -ENODEV:                   /* dev disconnect ... */
        case -ENOENT:                   /* just ignore it */
index fa39ffffd34dfd474167fadebcd5a7c5b42d4ca7..c9bd0e2b5db7ed5a062fbb0a5d1abbdffc4e50ec 100644 (file)
@@ -133,11 +133,8 @@ static void ath10k_ahb_clock_deinit(struct ath10k *ar)
 static int ath10k_ahb_clock_enable(struct ath10k *ar)
 {
        struct ath10k_ahb *ar_ahb = ath10k_ahb_priv(ar);
-       struct device *dev;
        int ret;
 
-       dev = &ar_ahb->pdev->dev;
-
        if (IS_ERR_OR_NULL(ar_ahb->cmd_clk) ||
            IS_ERR_OR_NULL(ar_ahb->ref_clk) ||
            IS_ERR_OR_NULL(ar_ahb->rtc_clk)) {
@@ -451,12 +448,10 @@ static int ath10k_ahb_resource_init(struct ath10k *ar)
 {
        struct ath10k_ahb *ar_ahb = ath10k_ahb_priv(ar);
        struct platform_device *pdev;
-       struct device *dev;
        struct resource *res;
        int ret;
 
        pdev = ar_ahb->pdev;
-       dev = &pdev->dev;
 
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        if (!res) {
index 85c58ebbfb2618f30d16c8ee017b581cb043c06e..c40cd129afe7b2c006fff3d8b270add0a3843d32 100644 (file)
@@ -2095,6 +2095,7 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
                ar->fw_stats_req_mask = WMI_STAT_PDEV | WMI_STAT_VDEV |
                        WMI_STAT_PEER;
                ar->max_spatial_stream = WMI_MAX_SPATIAL_STREAM;
+               ar->wmi.mgmt_max_num_pending_tx = TARGET_TLV_MGMT_NUM_MSDU_DESC;
                break;
        case ATH10K_FW_WMI_OP_VERSION_10_4:
                ar->max_num_peers = TARGET_10_4_NUM_PEERS;
index 427ee5752bb0368bf599186bfddee1d91af00048..9feea02e7d3730c9350c6c6cfda722acb2512c19 100644 (file)
@@ -186,6 +186,11 @@ struct ath10k_wmi {
        const struct wmi_ops *ops;
        const struct wmi_peer_flags_map *peer_flags;
 
+       u32 mgmt_max_num_pending_tx;
+
+       /* Protected by data_lock */
+       struct idr mgmt_pending_tx;
+
        u32 num_mem_chunks;
        u32 rx_decap_mode;
        struct ath10k_mem_chunk mem_chunks[WMI_MAX_MEM_REQS];
index 4926722e0c0d3333cf5a93bdbce747c6cbbef6cf..0baaad90b8d18708ae2058076e04c1f5627128e2 100644 (file)
@@ -2297,6 +2297,52 @@ static const struct file_operations fops_tpc_stats_final = {
        .llseek = default_llseek,
 };
 
+static ssize_t ath10k_write_warm_hw_reset(struct file *file,
+                                         const char __user *user_buf,
+                                         size_t count, loff_t *ppos)
+{
+       struct ath10k *ar = file->private_data;
+       int ret;
+       bool val;
+
+       if (kstrtobool_from_user(user_buf, count, &val))
+               return -EFAULT;
+
+       if (!val)
+               return -EINVAL;
+
+       mutex_lock(&ar->conf_mutex);
+
+       if (ar->state != ATH10K_STATE_ON) {
+               ret = -ENETDOWN;
+               goto exit;
+       }
+
+       if (!(test_bit(WMI_SERVICE_RESET_CHIP, ar->wmi.svc_map)))
+               ath10k_warn(ar, "wmi service for reset chip is not available\n");
+
+       ret = ath10k_wmi_pdev_set_param(ar, ar->wmi.pdev_param->pdev_reset,
+                                       WMI_RST_MODE_WARM_RESET);
+
+       if (ret) {
+               ath10k_warn(ar, "failed to enable warm hw reset: %d\n", ret);
+               goto exit;
+       }
+
+       ret = count;
+
+exit:
+       mutex_unlock(&ar->conf_mutex);
+       return ret;
+}
+
+static const struct file_operations fops_warm_hw_reset = {
+       .write = ath10k_write_warm_hw_reset,
+       .open = simple_open,
+       .owner = THIS_MODULE,
+       .llseek = default_llseek,
+};
+
 int ath10k_debug_create(struct ath10k *ar)
 {
        ar->debug.cal_data = vzalloc(ATH10K_DEBUG_CAL_DATA_LEN);
@@ -2425,6 +2471,9 @@ int ath10k_debug_register(struct ath10k *ar)
                                    ar->debug.debugfs_phy, ar,
                                    &fops_tpc_stats_final);
 
+       debugfs_create_file("warm_hw_reset", 0600, ar->debug.debugfs_phy, ar,
+                           &fops_warm_hw_reset);
+
        return 0;
 }
 
index be5b52aaffa61c2aa1f2ebaf0ec74042cc5d81c9..7cff0d52338fe1de23da635d23c0e469593f0c2a 100644 (file)
@@ -208,10 +208,10 @@ int ath10k_htt_tx_alloc_msdu_id(struct ath10k_htt *htt, struct sk_buff *skb)
        struct ath10k *ar = htt->ar;
        int ret;
 
-       lockdep_assert_held(&htt->tx_lock);
-
+       spin_lock_bh(&htt->tx_lock);
        ret = idr_alloc(&htt->pending_tx, skb, 0,
                        htt->max_num_pending_tx, GFP_ATOMIC);
+       spin_unlock_bh(&htt->tx_lock);
 
        ath10k_dbg(ar, ATH10K_DBG_HTT, "htt tx alloc msdu_id %d\n", ret);
 
@@ -1077,9 +1077,7 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
        len += sizeof(cmd->hdr);
        len += sizeof(cmd->mgmt_tx);
 
-       spin_lock_bh(&htt->tx_lock);
        res = ath10k_htt_tx_alloc_msdu_id(htt, msdu);
-       spin_unlock_bh(&htt->tx_lock);
        if (res < 0)
                goto err;
 
@@ -1161,9 +1159,7 @@ static int ath10k_htt_tx_32(struct ath10k_htt *htt,
        struct htt_msdu_ext_desc *ext_desc = NULL;
        struct htt_msdu_ext_desc *ext_desc_t = NULL;
 
-       spin_lock_bh(&htt->tx_lock);
        res = ath10k_htt_tx_alloc_msdu_id(htt, msdu);
-       spin_unlock_bh(&htt->tx_lock);
        if (res < 0)
                goto err;
 
@@ -1363,9 +1359,7 @@ static int ath10k_htt_tx_64(struct ath10k_htt *htt,
        struct htt_msdu_ext_desc_64 *ext_desc = NULL;
        struct htt_msdu_ext_desc_64 *ext_desc_t = NULL;
 
-       spin_lock_bh(&htt->tx_lock);
        res = ath10k_htt_tx_alloc_msdu_id(htt, msdu);
-       spin_unlock_bh(&htt->tx_lock);
        if (res < 0)
                goto err;
 
index a274bd809a08d55e829e98f30063a23e7a046122..977f79ebb4fd5911bd8910d415e1ba6d492359ff 100644 (file)
@@ -699,6 +699,7 @@ ath10k_rx_desc_get_l3_pad_bytes(struct ath10k_hw_params *hw,
 #define TARGET_TLV_NUM_TIDS                    ((TARGET_TLV_NUM_PEERS) * 2)
 #define TARGET_TLV_NUM_MSDU_DESC               (1024 + 32)
 #define TARGET_TLV_NUM_WOW_PATTERNS            22
+#define TARGET_TLV_MGMT_NUM_MSDU_DESC          (50)
 
 /* Target specific defines for WMI-HL-1.0 firmware */
 #define TARGET_HL_10_TLV_NUM_PEERS             14
index 95243b48a179ecd043c8b28be2ba31305414289e..90f9372dec2548f4eb3fc671fd63d8e12dc4c238 100644 (file)
@@ -101,6 +101,8 @@ static struct ieee80211_rate ath10k_rates_rev2[] = {
 #define ath10k_g_rates_rev2 (ath10k_rates_rev2 + 0)
 #define ath10k_g_rates_rev2_size (ARRAY_SIZE(ath10k_rates_rev2))
 
+#define ath10k_wmi_legacy_rates ath10k_rates
+
 static bool ath10k_mac_bitrate_is_cck(int bitrate)
 {
        switch (bitrate) {
@@ -3085,6 +3087,13 @@ static int ath10k_update_channel_list(struct ath10k *ar)
                        passive = channel->flags & IEEE80211_CHAN_NO_IR;
                        ch->passive = passive;
 
+                       /* the firmware is ignoring the "radar" flag of the
+                        * channel and is scanning actively using Probe Requests
+                        * on "Radar detection"/DFS channels which are not
+                        * marked as "available"
+                        */
+                       ch->passive |= ch->chan_radar;
+
                        ch->freq = channel->center_freq;
                        ch->band_center_freq1 = channel->center_freq;
                        ch->min_power = 0;
@@ -5439,8 +5448,12 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
 {
        struct ath10k *ar = hw->priv;
        struct ath10k_vif *arvif = (void *)vif->drv_priv;
-       int ret = 0;
+       struct cfg80211_chan_def def;
        u32 vdev_param, pdev_param, slottime, preamble;
+       u16 bitrate, hw_value;
+       u8 rate;
+       int rateidx, ret = 0;
+       enum nl80211_band band;
 
        mutex_lock(&ar->conf_mutex);
 
@@ -5608,6 +5621,44 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
                                    arvif->vdev_id, ret);
        }
 
+       if (changed & BSS_CHANGED_MCAST_RATE &&
+           !WARN_ON(ath10k_mac_vif_chan(arvif->vif, &def))) {
+               band = def.chan->band;
+               rateidx = vif->bss_conf.mcast_rate[band] - 1;
+
+               if (ar->phy_capability & WHAL_WLAN_11A_CAPABILITY)
+                       rateidx += ATH10K_MAC_FIRST_OFDM_RATE_IDX;
+
+               bitrate = ath10k_wmi_legacy_rates[rateidx].bitrate;
+               hw_value = ath10k_wmi_legacy_rates[rateidx].hw_value;
+               if (ath10k_mac_bitrate_is_cck(bitrate))
+                       preamble = WMI_RATE_PREAMBLE_CCK;
+               else
+                       preamble = WMI_RATE_PREAMBLE_OFDM;
+
+               rate = ATH10K_HW_RATECODE(hw_value, 0, preamble);
+
+               ath10k_dbg(ar, ATH10K_DBG_MAC,
+                          "mac vdev %d mcast_rate %x\n",
+                          arvif->vdev_id, rate);
+
+               vdev_param = ar->wmi.vdev_param->mcast_data_rate;
+               ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id,
+                                               vdev_param, rate);
+               if (ret)
+                       ath10k_warn(ar,
+                                   "failed to set mcast rate on vdev %i: %d\n",
+                                   arvif->vdev_id,  ret);
+
+               vdev_param = ar->wmi.vdev_param->bcast_data_rate;
+               ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id,
+                                               vdev_param, rate);
+               if (ret)
+                       ath10k_warn(ar,
+                                   "failed to set bcast rate on vdev %i: %d\n",
+                                   arvif->vdev_id,  ret);
+       }
+
        mutex_unlock(&ar->conf_mutex);
 }
 
@@ -6063,13 +6114,13 @@ static void ath10k_sta_rc_update_wk(struct work_struct *wk)
 
                mode = chan_to_phymode(&def);
                ath10k_dbg(ar, ATH10K_DBG_MAC, "mac update sta %pM peer bw %d phymode %d\n",
-                               sta->addr, bw, mode);
+                          sta->addr, bw, mode);
 
                err = ath10k_wmi_peer_set_param(ar, arvif->vdev_id, sta->addr,
-                               WMI_PEER_PHYMODE, mode);
+                                               WMI_PEER_PHYMODE, mode);
                if (err) {
                        ath10k_warn(ar, "failed to update STA %pM peer phymode %d: %d\n",
-                                       sta->addr, mode, err);
+                                   sta->addr, mode, err);
                        goto exit;
                }
 
@@ -6935,7 +6986,6 @@ ath10k_mac_bitrate_mask_get_single_rate(struct ath10k *ar,
                                        const struct cfg80211_bitrate_mask *mask,
                                        u8 *rate, u8 *nss)
 {
-       struct ieee80211_supported_band *sband = &ar->mac.sbands[band];
        int rate_idx;
        int i;
        u16 bitrate;
@@ -6945,8 +6995,11 @@ ath10k_mac_bitrate_mask_get_single_rate(struct ath10k *ar,
        if (hweight32(mask->control[band].legacy) == 1) {
                rate_idx = ffs(mask->control[band].legacy) - 1;
 
-               hw_rate = sband->bitrates[rate_idx].hw_value;
-               bitrate = sband->bitrates[rate_idx].bitrate;
+               if (ar->phy_capability & WHAL_WLAN_11A_CAPABILITY)
+                       rate_idx += ATH10K_MAC_FIRST_OFDM_RATE_IDX;
+
+               hw_rate = ath10k_wmi_legacy_rates[rate_idx].hw_value;
+               bitrate = ath10k_wmi_legacy_rates[rate_idx].bitrate;
 
                if (ath10k_mac_bitrate_is_cck(bitrate))
                        preamble = WMI_RATE_PREAMBLE_CCK;
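
The two ath10k hunks above (the new BSS_CHANGED_MCAST_RATE handling and the fixed-rate mask path) both resolve a per-band rate index through a shared legacy-rate table, skipping the CCK entries where the band starts at OFDM, and then derive the CCK or OFDM preamble from the resulting bitrate. The code below is only a compact standalone sketch of that mapping, with a hypothetical table layout (four CCK entries followed by OFDM) rather than the driver's real ath10k_wmi_legacy_rates contents:

#include <stdbool.h>
#include <stdint.h>

struct legacy_rate {
	uint16_t bitrate;	/* units of 100 kbit/s */
	uint8_t  hw_value;
};

/* hypothetical layout: CCK entries first, then OFDM */
static const struct legacy_rate legacy_rates[] = {
	{ 10, 0 }, { 20, 1 }, { 55, 2 }, { 110, 3 },		/* CCK  */
	{ 60, 4 }, { 90, 5 }, { 120, 6 }, { 180, 7 },		/* OFDM */
	{ 240, 8 }, { 360, 9 }, { 480, 10 }, { 540, 11 },
};

#define FIRST_OFDM_RATE_IDX 4	/* assumption: four CCK entries precede OFDM */

static bool bitrate_is_cck(uint16_t bitrate)
{
	return bitrate == 10 || bitrate == 20 ||
	       bitrate == 55 || bitrate == 110;
}

/* Map a per-band rate index into the shared table; bands whose rate list
 * starts at OFDM skip the CCK entries. */
static uint8_t pick_hw_rate(int band_rate_idx, bool starts_at_ofdm,
			    bool *use_cck_preamble)
{
	int idx = band_rate_idx + (starts_at_ofdm ? FIRST_OFDM_RATE_IDX : 0);

	*use_cck_preamble = bitrate_is_cck(legacy_rates[idx].bitrate);
	return legacy_rates[idx].hw_value;
}

With this layout, index 0 in an OFDM-only band resolves to the 6 Mbit/s entry, while the same index in a band that includes CCK resolves to 1 Mbit/s.
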
index 5ecce04005d284e0fe939a21536050cc4f30481a..7fd63bbf8e2408372526924b66d9f825ea899e33 100644 (file)
@@ -31,6 +31,8 @@ struct wmi_ops {
                         struct wmi_scan_ev_arg *arg);
        int (*pull_mgmt_rx)(struct ath10k *ar, struct sk_buff *skb,
                            struct wmi_mgmt_rx_ev_arg *arg);
+       int (*pull_mgmt_tx_compl)(struct ath10k *ar, struct sk_buff *skb,
+                                 struct wmi_tlv_mgmt_tx_compl_ev_arg *arg);
        int (*pull_ch_info)(struct ath10k *ar, struct sk_buff *skb,
                            struct wmi_ch_info_ev_arg *arg);
        int (*pull_vdev_start)(struct ath10k *ar, struct sk_buff *skb,
@@ -261,6 +263,16 @@ ath10k_wmi_pull_scan(struct ath10k *ar, struct sk_buff *skb,
        return ar->wmi.ops->pull_scan(ar, skb, arg);
 }
 
+static inline int
+ath10k_wmi_pull_mgmt_tx_compl(struct ath10k *ar, struct sk_buff *skb,
+                             struct wmi_tlv_mgmt_tx_compl_ev_arg *arg)
+{
+       if (!ar->wmi.ops->pull_mgmt_tx_compl)
+               return -EOPNOTSUPP;
+
+       return ar->wmi.ops->pull_mgmt_tx_compl(ar, skb, arg);
+}
+
 static inline int
 ath10k_wmi_pull_mgmt_rx(struct ath10k *ar, struct sk_buff *skb,
                        struct wmi_mgmt_rx_ev_arg *arg)
index b04f86f8038af793068cad3a881897cb5b7c772e..cdc1e64d52ad50e30dd5fe59a0e300635eda3f9d 100644 (file)
@@ -618,6 +618,9 @@ static void ath10k_wmi_tlv_op_rx(struct ath10k *ar, struct sk_buff *skb)
        case WMI_TLV_TDLS_PEER_EVENTID:
                ath10k_wmi_event_tdls_peer(ar, skb);
                break;
+       case WMI_TLV_MGMT_TX_COMPLETION_EVENTID:
+               ath10k_wmi_event_mgmt_tx_compl(ar, skb);
+               break;
        default:
                ath10k_warn(ar, "Unknown eventid: %d\n", id);
                break;
@@ -659,6 +662,31 @@ static int ath10k_wmi_tlv_op_pull_scan_ev(struct ath10k *ar,
        return 0;
 }
 
+static int
+ath10k_wmi_tlv_op_pull_mgmt_tx_compl_ev(struct ath10k *ar, struct sk_buff *skb,
+                                       struct wmi_tlv_mgmt_tx_compl_ev_arg *arg)
+{
+       const void **tb;
+       const struct wmi_tlv_mgmt_tx_compl_ev *ev;
+       int ret;
+
+       tb = ath10k_wmi_tlv_parse_alloc(ar, skb->data, skb->len, GFP_ATOMIC);
+       if (IS_ERR(tb)) {
+               ret = PTR_ERR(tb);
+               ath10k_warn(ar, "failed to parse tlv: %d\n", ret);
+               return ret;
+       }
+
+       ev = tb[WMI_TLV_TAG_STRUCT_MGMT_TX_COMPL_EVENT];
+
+       arg->desc_id = ev->desc_id;
+       arg->status = ev->status;
+       arg->pdev_id = ev->pdev_id;
+
+       kfree(tb);
+       return 0;
+}
+
 static int ath10k_wmi_tlv_op_pull_mgmt_rx_ev(struct ath10k *ar,
                                             struct sk_buff *skb,
                                             struct wmi_mgmt_rx_ev_arg *arg)
@@ -1586,6 +1614,11 @@ static struct sk_buff *ath10k_wmi_tlv_op_gen_init(struct ath10k *ar)
        cfg->keep_alive_pattern_size = __cpu_to_le32(0);
        cfg->max_tdls_concurrent_sleep_sta = __cpu_to_le32(1);
        cfg->max_tdls_concurrent_buffer_sta = __cpu_to_le32(1);
+       cfg->wmi_send_separate = __cpu_to_le32(0);
+       cfg->num_ocb_vdevs = __cpu_to_le32(0);
+       cfg->num_ocb_channels = __cpu_to_le32(0);
+       cfg->num_ocb_schedules = __cpu_to_le32(0);
+       cfg->host_capab = __cpu_to_le32(0);
 
        ath10k_wmi_put_host_mem_chunks(ar, chunks);
 
@@ -1787,7 +1820,6 @@ ath10k_wmi_tlv_op_gen_vdev_start(struct ath10k *ar,
 {
        struct wmi_tlv_vdev_start_cmd *cmd;
        struct wmi_channel *ch;
-       struct wmi_p2p_noa_descriptor *noa;
        struct wmi_tlv *tlv;
        struct sk_buff *skb;
        size_t len;
@@ -1845,7 +1877,6 @@ ath10k_wmi_tlv_op_gen_vdev_start(struct ath10k *ar,
        tlv = ptr;
        tlv->tag = __cpu_to_le16(WMI_TLV_TAG_ARRAY_STRUCT);
        tlv->len = 0;
-       noa = (void *)tlv->value;
 
        /* Note: This is a nested TLV containing:
         * [wmi_tlv][wmi_p2p_noa_descriptor][wmi_tlv]..
@@ -2607,6 +2638,30 @@ ath10k_wmi_tlv_op_gen_request_stats(struct ath10k *ar, u32 stats_mask)
        return skb;
 }
 
+static int
+ath10k_wmi_mgmt_tx_alloc_msdu_id(struct ath10k *ar, struct sk_buff *skb,
+                                dma_addr_t paddr)
+{
+       struct ath10k_wmi *wmi = &ar->wmi;
+       struct ath10k_mgmt_tx_pkt_addr *pkt_addr;
+       int ret;
+
+       pkt_addr = kmalloc(sizeof(*pkt_addr), GFP_ATOMIC);
+       if (!pkt_addr)
+               return -ENOMEM;
+
+       pkt_addr->vaddr = skb;
+       pkt_addr->paddr = paddr;
+
+       spin_lock_bh(&ar->data_lock);
+       ret = idr_alloc(&wmi->mgmt_pending_tx, pkt_addr, 0,
+                       wmi->mgmt_max_num_pending_tx, GFP_ATOMIC);
+       spin_unlock_bh(&ar->data_lock);
+
+       ath10k_dbg(ar, ATH10K_DBG_WMI, "wmi mgmt tx alloc msdu_id ret %d\n", ret);
+       return ret;
+}
+
 static struct sk_buff *
 ath10k_wmi_tlv_op_gen_mgmt_tx_send(struct ath10k *ar, struct sk_buff *msdu,
                                   dma_addr_t paddr)
@@ -2618,9 +2673,9 @@ ath10k_wmi_tlv_op_gen_mgmt_tx_send(struct ath10k *ar, struct sk_buff *msdu,
        u32 buf_len = msdu->len;
        struct wmi_tlv *tlv;
        struct sk_buff *skb;
+       int len, desc_id;
        u32 vdev_id;
        void *ptr;
-       int len;
 
        if (!cb->vif)
                return ERR_PTR(-EINVAL);
@@ -2651,13 +2706,17 @@ ath10k_wmi_tlv_op_gen_mgmt_tx_send(struct ath10k *ar, struct sk_buff *msdu,
        if (!skb)
                return ERR_PTR(-ENOMEM);
 
+       desc_id = ath10k_wmi_mgmt_tx_alloc_msdu_id(ar, msdu, paddr);
+       if (desc_id < 0)
+               goto err_free_skb;
+
        ptr = (void *)skb->data;
        tlv = ptr;
        tlv->tag = __cpu_to_le16(WMI_TLV_TAG_STRUCT_MGMT_TX_CMD);
        tlv->len = __cpu_to_le16(sizeof(*cmd));
        cmd = (void *)tlv->value;
        cmd->vdev_id = __cpu_to_le32(vdev_id);
-       cmd->desc_id = 0;
+       cmd->desc_id = __cpu_to_le32(desc_id);
        cmd->chanfreq = 0;
        cmd->buf_len = __cpu_to_le32(buf_len);
        cmd->frame_len = __cpu_to_le32(msdu->len);
@@ -2674,6 +2733,10 @@ ath10k_wmi_tlv_op_gen_mgmt_tx_send(struct ath10k *ar, struct sk_buff *msdu,
        memcpy(ptr, msdu->data, buf_len);
 
        return skb;
+
+err_free_skb:
+       dev_kfree_skb(skb);
+       return ERR_PTR(desc_id);
 }
 
 static struct sk_buff *
@@ -2702,7 +2765,8 @@ ath10k_wmi_tlv_op_gen_force_fw_hang(struct ath10k *ar,
 
 static struct sk_buff *
 ath10k_wmi_tlv_op_gen_dbglog_cfg(struct ath10k *ar, u64 module_enable,
-                                u32 log_level) {
+                                u32 log_level)
+{
        struct wmi_tlv_dbglog_cmd *cmd;
        struct wmi_tlv *tlv;
        struct sk_buff *skb;
@@ -3837,6 +3901,7 @@ static const struct wmi_ops wmi_tlv_ops = {
 
        .pull_scan = ath10k_wmi_tlv_op_pull_scan_ev,
        .pull_mgmt_rx = ath10k_wmi_tlv_op_pull_mgmt_rx_ev,
+       .pull_mgmt_tx_compl = ath10k_wmi_tlv_op_pull_mgmt_tx_compl_ev,
        .pull_ch_info = ath10k_wmi_tlv_op_pull_ch_info_ev,
        .pull_vdev_start = ath10k_wmi_tlv_op_pull_vdev_start_ev,
        .pull_peer_kick = ath10k_wmi_tlv_op_pull_peer_kick_ev,
index 3e1e340cd8344732b572d13c7bba50603eb5a87d..4f0c20c906425fbe847ebeda69dc5eb9c66a6921 100644 (file)
@@ -320,6 +320,7 @@ enum wmi_tlv_event_id {
        WMI_TLV_TBTTOFFSET_UPDATE_EVENTID,
        WMI_TLV_OFFLOAD_BCN_TX_STATUS_EVENTID,
        WMI_TLV_OFFLOAD_PROB_RESP_TX_STATUS_EVENTID,
+       WMI_TLV_MGMT_TX_COMPLETION_EVENTID,
        WMI_TLV_TX_DELBA_COMPLETE_EVENTID = WMI_TLV_EV(WMI_TLV_GRP_BA_NEG),
        WMI_TLV_TX_ADDBA_COMPLETE_EVENTID,
        WMI_TLV_BA_RSP_SSN_EVENTID,
@@ -1573,6 +1574,17 @@ struct wmi_tlv {
        u8 value[0];
 } __packed;
 
+struct ath10k_mgmt_tx_pkt_addr {
+       void *vaddr;
+       dma_addr_t paddr;
+};
+
+struct wmi_tlv_mgmt_tx_compl_ev {
+       __le32 desc_id;
+       __le32 status;
+       __le32 pdev_id;
+};
+
 #define WMI_TLV_MGMT_RX_NUM_RSSI 4
 
 struct wmi_tlv_mgmt_rx_ev {
@@ -1670,6 +1682,11 @@ struct wmi_tlv_resource_config {
        __le32 keep_alive_pattern_size;
        __le32 max_tdls_concurrent_sleep_sta;
        __le32 max_tdls_concurrent_buffer_sta;
+       __le32 wmi_send_separate;
+       __le32 num_ocb_vdevs;
+       __le32 num_ocb_channels;
+       __le32 num_ocb_schedules;
+       __le32 host_capab;
 } __packed;
 
 struct wmi_tlv_init_cmd {
index 877249ac6fd446e44dad8455e55e2a8667444021..fd612d2905b055f5654019a5ee6b3d4820ad1086 100644 (file)
@@ -1333,7 +1333,7 @@ static struct wmi_pdev_param_map wmi_10_2_4_pdev_param_map = {
        .enable_per_tid_ampdu = WMI_PDEV_PARAM_UNSUPPORTED,
        .cca_threshold = WMI_PDEV_PARAM_UNSUPPORTED,
        .rts_fixed_rate = WMI_PDEV_PARAM_UNSUPPORTED,
-       .pdev_reset = WMI_PDEV_PARAM_UNSUPPORTED,
+       .pdev_reset = WMI_10X_PDEV_PARAM_PDEV_RESET,
        .wapi_mbssid_offset = WMI_PDEV_PARAM_UNSUPPORTED,
        .arp_srcaddr = WMI_PDEV_PARAM_UNSUPPORTED,
        .arp_dstaddr = WMI_PDEV_PARAM_UNSUPPORTED,
@@ -2313,6 +2313,59 @@ static bool ath10k_wmi_rx_is_decrypted(struct ath10k *ar,
        return true;
 }
 
+static int wmi_process_mgmt_tx_comp(struct ath10k *ar, u32 desc_id,
+                                   u32 status)
+{
+       struct ath10k_mgmt_tx_pkt_addr *pkt_addr;
+       struct ath10k_wmi *wmi = &ar->wmi;
+       struct ieee80211_tx_info *info;
+       struct sk_buff *msdu;
+       int ret;
+
+       spin_lock_bh(&ar->data_lock);
+
+       pkt_addr = idr_find(&wmi->mgmt_pending_tx, desc_id);
+       if (!pkt_addr) {
+               ath10k_warn(ar, "received mgmt tx completion for invalid msdu_id: %d\n",
+                           desc_id);
+               ret = -ENOENT;
+               goto out;
+       }
+
+       msdu = pkt_addr->vaddr;
+       dma_unmap_single(ar->dev, pkt_addr->paddr,
+                        msdu->len, DMA_FROM_DEVICE);
+       info = IEEE80211_SKB_CB(msdu);
+       info->flags |= status;
+       ieee80211_tx_status_irqsafe(ar->hw, msdu);
+
+       ret = 0;
+
+out:
+       idr_remove(&wmi->mgmt_pending_tx, desc_id);
+       spin_unlock_bh(&ar->data_lock);
+       return ret;
+}
+
+int ath10k_wmi_event_mgmt_tx_compl(struct ath10k *ar, struct sk_buff *skb)
+{
+       struct wmi_tlv_mgmt_tx_compl_ev_arg arg;
+       int ret;
+
+       ret = ath10k_wmi_pull_mgmt_tx_compl(ar, skb, &arg);
+       if (ret) {
+               ath10k_warn(ar, "failed to parse mgmt comp event: %d\n", ret);
+               return ret;
+       }
+
+       wmi_process_mgmt_tx_comp(ar, __le32_to_cpu(arg.desc_id),
+                                __le32_to_cpu(arg.status));
+
+       ath10k_dbg(ar, ATH10K_DBG_WMI, "wmi tlv evnt mgmt tx completion\n");
+
+       return 0;
+}
+
 int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
 {
        struct wmi_mgmt_rx_ev_arg arg = {};
@@ -9073,6 +9126,11 @@ int ath10k_wmi_attach(struct ath10k *ar)
        INIT_WORK(&ar->radar_confirmation_work,
                  ath10k_radar_confirmation_work);
 
+       if (test_bit(ATH10K_FW_FEATURE_MGMT_TX_BY_REF,
+                    ar->running_fw->fw_file.fw_features)) {
+               idr_init(&ar->wmi.mgmt_pending_tx);
+       }
+
        return 0;
 }
 
@@ -9091,8 +9149,35 @@ void ath10k_wmi_free_host_mem(struct ath10k *ar)
        ar->wmi.num_mem_chunks = 0;
 }
 
+static int ath10k_wmi_mgmt_tx_clean_up_pending(int msdu_id, void *ptr,
+                                              void *ctx)
+{
+       struct ath10k_mgmt_tx_pkt_addr *pkt_addr = ptr;
+       struct ath10k *ar = ctx;
+       struct sk_buff *msdu;
+
+       ath10k_dbg(ar, ATH10K_DBG_WMI,
+                  "force cleanup mgmt msdu_id %hu\n", msdu_id);
+
+       msdu = pkt_addr->vaddr;
+       dma_unmap_single(ar->dev, pkt_addr->paddr,
+                        msdu->len, DMA_FROM_DEVICE);
+       ieee80211_free_txskb(ar->hw, msdu);
+
+       return 0;
+}
+
 void ath10k_wmi_detach(struct ath10k *ar)
 {
+       if (test_bit(ATH10K_FW_FEATURE_MGMT_TX_BY_REF,
+                    ar->running_fw->fw_file.fw_features)) {
+               spin_lock_bh(&ar->data_lock);
+               idr_for_each(&ar->wmi.mgmt_pending_tx,
+                            ath10k_wmi_mgmt_tx_clean_up_pending, ar);
+               idr_destroy(&ar->wmi.mgmt_pending_tx);
+               spin_unlock_bh(&ar->data_lock);
+       }
+
        cancel_work_sync(&ar->svc_rdy_work);
 
        if (ar->svc_rdy_skb)
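
Taken together, the hunks above give management TX-by-reference a small descriptor lifecycle built on an IDR: an id is allocated when a frame is handed to firmware (and carried in desc_id), looked up and removed when the completion event returns that id, and any leftovers are reclaimed at detach time. Below is a condensed, kernel-style sketch of that lifecycle under a single spinlock; the names and structure are illustrative, not the driver's own:

#include <linux/errno.h>
#include <linux/idr.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct pending_tx {
	void *frame;		/* e.g. the skb handed to firmware */
	dma_addr_t paddr;
};

static DEFINE_SPINLOCK(pending_lock);
static DEFINE_IDR(pending_tx_idr);

/* 1) On transmit: remember the frame under a small integer id that the
 *    firmware echoes back in the completion event. */
static int pending_tx_add(void *frame, dma_addr_t paddr, int max_pending)
{
	struct pending_tx *p = kmalloc(sizeof(*p), GFP_ATOMIC);
	int id;

	if (!p)
		return -ENOMEM;
	p->frame = frame;
	p->paddr = paddr;

	spin_lock_bh(&pending_lock);
	id = idr_alloc(&pending_tx_idr, p, 0, max_pending, GFP_ATOMIC);
	spin_unlock_bh(&pending_lock);
	if (id < 0)
		kfree(p);
	return id;		/* becomes the command's desc_id */
}

/* 2) On completion event: resolve desc_id back to the frame, drop it from
 *    the pending set and release the bookkeeping entry. */
static void *pending_tx_complete(int desc_id, dma_addr_t *paddr)
{
	struct pending_tx *p;
	void *frame = NULL;

	spin_lock_bh(&pending_lock);
	p = idr_find(&pending_tx_idr, desc_id);
	if (p) {
		idr_remove(&pending_tx_idr, desc_id);
		frame = p->frame;
		*paddr = p->paddr;
		kfree(p);
	}
	spin_unlock_bh(&pending_lock);
	return frame;		/* caller unmaps and completes the frame */
}

/* 3) On teardown: reclaim anything the firmware never completed. */
static int pending_tx_purge_one(int id, void *ptr, void *ctx)
{
	kfree(ptr);
	return 0;
}

static void pending_tx_purge(void)
{
	spin_lock_bh(&pending_lock);
	idr_for_each(&pending_tx_idr, pending_tx_purge_one, NULL);
	idr_destroy(&pending_tx_idr);
	spin_unlock_bh(&pending_lock);
}
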
index d68afb65402a069528b0dc3a01eab142c833898c..36220258e3c7e686af0ffb5a2ab82869b8be5b01 100644 (file)
@@ -462,6 +462,7 @@ static inline char *wmi_service_name(int service_id)
        SVCSTR(WMI_SERVICE_HTT_MGMT_TX_COMP_VALID_FLAGS);
        SVCSTR(WMI_SERVICE_HOST_DFS_CHECK_SUPPORT);
        SVCSTR(WMI_SERVICE_TPC_STATS_FINAL);
+       SVCSTR(WMI_SERVICE_RESET_CHIP);
        default:
                return NULL;
        }
@@ -3934,7 +3935,11 @@ enum wmi_10x_pdev_param {
        WMI_10X_PDEV_PARAM_REMOVE_MCAST2UCAST_BUFFER,
        WMI_10X_PDEV_PARAM_PEER_STA_PS_STATECHG_ENABLE,
        WMI_10X_PDEV_PARAM_RTS_FIXED_RATE,
-       WMI_10X_PDEV_PARAM_CAL_PERIOD
+       WMI_10X_PDEV_PARAM_CAL_PERIOD,
+       WMI_10X_PDEV_PARAM_ATF_STRICT_SCH,
+       WMI_10X_PDEV_PARAM_ATF_SCHED_DURATION,
+       WMI_10X_PDEV_PARAM_SET_PROMISC_MODE_CMDID,
+       WMI_10X_PDEV_PARAM_PDEV_RESET
 };
 
 enum wmi_10_4_pdev_param {
@@ -6501,6 +6506,15 @@ struct wmi_force_fw_hang_cmd {
        __le32 delay_ms;
 } __packed;
 
+enum wmi_pdev_reset_mode_type {
+       WMI_RST_MODE_TX_FLUSH = 1,
+       WMI_RST_MODE_WARM_RESET,
+       WMI_RST_MODE_COLD_RESET,
+       WMI_RST_MODE_WARM_RESET_RESTORE_CAL,
+       WMI_RST_MODE_COLD_RESET_RESTORE_CAL,
+       WMI_RST_MODE_MAX,
+};
+
 enum ath10k_dbglog_level {
        ATH10K_DBGLOG_LEVEL_VERBOSE = 0,
        ATH10K_DBGLOG_LEVEL_INFO = 1,
@@ -6600,6 +6614,12 @@ struct wmi_scan_ev_arg {
        __le32 vdev_id;
 };
 
+struct wmi_tlv_mgmt_tx_compl_ev_arg {
+       __le32 desc_id;
+       __le32 status;
+       __le32 pdev_id;
+};
+
 struct wmi_mgmt_rx_ev_arg {
        __le32 channel;
        __le32 snr;
@@ -7071,6 +7091,7 @@ int ath10k_wmi_start_scan_verify(const struct wmi_start_scan_arg *arg);
 
 int ath10k_wmi_event_scan(struct ath10k *ar, struct sk_buff *skb);
 int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb);
+int ath10k_wmi_event_mgmt_tx_compl(struct ath10k *ar, struct sk_buff *skb);
 void ath10k_wmi_event_chan_info(struct ath10k *ar, struct sk_buff *skb);
 void ath10k_wmi_event_echo(struct ath10k *ar, struct sk_buff *skb);
 int ath10k_wmi_event_debug_mesg(struct ath10k *ar, struct sk_buff *skb);
index b1b8bc3268309ecff781d981a7563a5c024aa0ac..ae08572c4b58e1d58f8553b1ec17c5f0716d159c 100644 (file)
@@ -483,7 +483,6 @@ static u32
 ath5k_hw_rf_gainf_corr(struct ath5k_hw *ah)
 {
        u32 mix, step;
-       u32 *rf;
        const struct ath5k_gain_opt *go;
        const struct ath5k_gain_opt_step *g_step;
        const struct ath5k_rf_reg *rf_regs;
@@ -502,7 +501,6 @@ ath5k_hw_rf_gainf_corr(struct ath5k_hw *ah)
        if (ah->ah_rf_banks == NULL)
                return 0;
 
-       rf = ah->ah_rf_banks;
        ah->ah_gain.g_f_corr = 0;
 
        /* No VGA (Variable Gain Amplifier) override, skip */
@@ -549,13 +547,10 @@ ath5k_hw_rf_check_gainf_readback(struct ath5k_hw *ah)
 {
        const struct ath5k_rf_reg *rf_regs;
        u32 step, mix_ovr, level[4];
-       u32 *rf;
 
        if (ah->ah_rf_banks == NULL)
                return false;
 
-       rf = ah->ah_rf_banks;
-
        if (ah->ah_radio == AR5K_RF5111) {
 
                rf_regs = rf_regs_5111;
index 334dbd834b3a6b37447ff324ed43f4a389d603f9..bde5a10d470c8e74e169e3f4479d9f2487542c34 100644 (file)
@@ -534,7 +534,7 @@ int ath6kl_bmi_init(struct ath6kl *ar)
        /* cmd + addr + len + data_size */
        ar->bmi.max_cmd_size = ar->bmi.max_data_size + (sizeof(u32) * 3);
 
-       ar->bmi.cmd_buf = kzalloc(ar->bmi.max_cmd_size, GFP_ATOMIC);
+       ar->bmi.cmd_buf = kzalloc(ar->bmi.max_cmd_size, GFP_KERNEL);
        if (!ar->bmi.cmd_buf)
                return -ENOMEM;
 
index 546243e117379379b5c9b69bdbb7b4a48b70c665..434b6682964662fa3b9e723fcead47745e5dd622 100644 (file)
@@ -746,10 +746,8 @@ static int ath6kl_htc_pipe_tx_complete(struct ath6kl *ar, struct sk_buff *skb)
        struct htc_endpoint *ep;
        struct htc_packet *packet;
        u8 ep_id, *netdata;
-       u32 netlen;
 
        netdata = skb->data;
-       netlen = skb->len;
 
        htc_hdr = (struct htc_frame_hdr *) netdata;
 
@@ -855,12 +853,8 @@ static int htc_process_trailer(struct htc_target *target, u8 *buffer,
 {
        struct htc_credit_report *report;
        struct htc_record_hdr *record;
-       u8 *record_buf, *orig_buf;
-       int orig_len, status;
-
-       orig_buf = buffer;
-       orig_len = len;
-       status = 0;
+       u8 *record_buf;
+       int status = 0;
 
        while (len > 0) {
                if (len < sizeof(struct htc_record_hdr)) {
index 808fb30be9ad1299d0412fc4e424188b6244e267..0c61dbaa62a410529dcba1d49f0ade9c90012622 100644 (file)
@@ -272,7 +272,7 @@ int ath6kl_read_fwlogs(struct ath6kl *ar)
 {
        struct ath6kl_dbglog_hdr debug_hdr;
        struct ath6kl_dbglog_buf debug_buf;
-       u32 address, length, dropped, firstbuf, debug_hdr_addr;
+       u32 address, length, firstbuf, debug_hdr_addr;
        int ret, loop;
        u8 *buf;
 
@@ -303,7 +303,6 @@ int ath6kl_read_fwlogs(struct ath6kl *ar)
        address = TARG_VTOP(ar->target_type,
                            le32_to_cpu(debug_hdr.dbuf_addr));
        firstbuf = address;
-       dropped = le32_to_cpu(debug_hdr.dropped);
        ret = ath6kl_diag_read(ar, address, &debug_buf, sizeof(debug_buf));
        if (ret)
                goto out;
index 618d12ed4b4037937cb93265c66865d7ac10b772..b22ed499f7ba7164dd96ab9bfdbf76979df9d483 100644 (file)
@@ -1701,7 +1701,6 @@ void aggr_recv_addba_req_evt(struct ath6kl_vif *vif, u8 tid_mux, u16 seq_no,
        struct ath6kl_sta *sta;
        struct aggr_info_conn *aggr_conn = NULL;
        struct rxtid *rxtid;
-       struct rxtid_stats *stats;
        u16 hold_q_size;
        u8 tid, aid;
 
@@ -1722,7 +1721,6 @@ void aggr_recv_addba_req_evt(struct ath6kl_vif *vif, u8 tid_mux, u16 seq_no,
                return;
 
        rxtid = &aggr_conn->rx_tid[tid];
-       stats = &aggr_conn->stat[tid];
 
        if (win_sz < AGGR_WIN_SZ_MIN || win_sz > AGGR_WIN_SZ_MAX)
                ath6kl_dbg(ATH6KL_DBG_WLAN_RX, "%s: win_sz %d, tid %d\n",
index 50fcd343c41af12d865d55f1ceb5427797bd20d7..fd9db8ca99d79b1f9ca438035788f5782a0abd95 100644 (file)
@@ -676,10 +676,10 @@ static int ar9002_hw_calibrate(struct ath_hw *ah, struct ath9k_channel *chan,
                        return 0;
 
                ah->cal_list_curr = currCal = currCal->calNext;
-               if (currCal->calState == CAL_WAITING) {
+               if (currCal->calState == CAL_WAITING)
                        ath9k_hw_reset_calibration(ah, currCal);
-                       return 0;
-               }
+
+               return 0;
        }
 
        /* Do NF cal only at longer intervals */
index fe5102ca5010c28dfdc04cf415cd035f7d47b9eb..98c5f524a360348e15e36751a173e620d715fd3f 100644 (file)
@@ -1800,6 +1800,8 @@ static void ar9003_hw_spectral_scan_config(struct ath_hw *ah,
 
 static void ar9003_hw_spectral_scan_trigger(struct ath_hw *ah)
 {
+       REG_SET_BIT(ah, AR_PHY_SPECTRAL_SCAN,
+                   AR_PHY_SPECTRAL_SCAN_ENABLE);
        /* Activate spectral scan */
        REG_SET_BIT(ah, AR_PHY_SPECTRAL_SCAN,
                    AR_PHY_SPECTRAL_SCAN_ACTIVE);
index 32fb85e076d647899a4f884c2ee858a5101df8c1..bb319f22761fbe46379837ed21dee4f343475dc5 100644 (file)
@@ -2942,16 +2942,19 @@ void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan,
        struct ath_regulatory *reg = ath9k_hw_regulatory(ah);
        struct ieee80211_channel *channel;
        int chan_pwr, new_pwr;
+       u16 ctl = NO_CTL;
 
        if (!chan)
                return;
 
+       if (!test)
+               ctl = ath9k_regd_get_ctl(reg, chan);
+
        channel = chan->chan;
        chan_pwr = min_t(int, channel->max_power * 2, MAX_RATE_POWER);
        new_pwr = min_t(int, chan_pwr, reg->power_limit);
 
-       ah->eep_ops->set_txpower(ah, chan,
-                                ath9k_regd_get_ctl(reg, chan),
+       ah->eep_ops->set_txpower(ah, chan, ctl,
                                 get_antenna_gain(ah, chan), new_pwr, test);
 }
 
index 7fdb152be0bb81b33ffae48767d104e69c7daf38..43b6c8508e4938f0dc95d0ac345044824d2fb7b2 100644 (file)
@@ -62,7 +62,7 @@ static void ath_tx_rc_status(struct ath_softc *sc, struct ath_buf *bf,
                             struct ath_tx_status *ts, int nframes, int nbad,
                             int txok);
 static void ath_tx_update_baw(struct ath_softc *sc, struct ath_atx_tid *tid,
-                             int seqno);
+                             struct ath_buf *bf);
 static struct ath_buf *ath_tx_setup_buffer(struct ath_softc *sc,
                                           struct ath_txq *txq,
                                           struct ath_atx_tid *tid,
@@ -86,7 +86,8 @@ static void ath_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb)
        struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
        struct ieee80211_sta *sta = info->status.status_driver_data[0];
 
-       if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
+       if (info->flags & (IEEE80211_TX_CTL_REQ_TX_STATUS |
+                          IEEE80211_TX_STATUS_EOSP)) {
                ieee80211_tx_status(hw, skb);
                return;
        }
@@ -295,7 +296,7 @@ static void ath_tx_flush_tid(struct ath_softc *sc, struct ath_atx_tid *tid)
                }
 
                if (fi->baw_tracked) {
-                       ath_tx_update_baw(sc, tid, bf->bf_state.seqno);
+                       ath_tx_update_baw(sc, tid, bf);
                        sendbar = true;
                }
 
@@ -311,10 +312,15 @@ static void ath_tx_flush_tid(struct ath_softc *sc, struct ath_atx_tid *tid)
 }
 
 static void ath_tx_update_baw(struct ath_softc *sc, struct ath_atx_tid *tid,
-                             int seqno)
+                             struct ath_buf *bf)
 {
+       struct ath_frame_info *fi = get_frame_info(bf->bf_mpdu);
+       u16 seqno = bf->bf_state.seqno;
        int index, cindex;
 
+       if (!fi->baw_tracked)
+               return;
+
        index  = ATH_BA_INDEX(tid->seq_start, seqno);
        cindex = (tid->baw_head + index) & (ATH_TID_MAX_BUFS - 1);
 
@@ -335,6 +341,9 @@ static void ath_tx_addto_baw(struct ath_softc *sc, struct ath_atx_tid *tid,
        u16 seqno = bf->bf_state.seqno;
        int index, cindex;
 
+       if (fi->baw_tracked)
+               return;
+
        index  = ATH_BA_INDEX(tid->seq_start, seqno);
        cindex = (tid->baw_head + index) & (ATH_TID_MAX_BUFS - 1);
        __set_bit(cindex, tid->tx_buf);
@@ -611,7 +620,7 @@ static void ath_tx_complete_aggr(struct ath_softc *sc, struct ath_txq *txq,
                         * complete the acked-ones/xretried ones; update
                         * block-ack window
                         */
-                       ath_tx_update_baw(sc, tid, seqno);
+                       ath_tx_update_baw(sc, tid, bf);
 
                        if (rc_update && (acked_cnt == 1 || txfail_cnt == 1)) {
                                memcpy(tx_info->control.rates, rates, sizeof(rates));
@@ -641,7 +650,7 @@ static void ath_tx_complete_aggr(struct ath_softc *sc, struct ath_txq *txq,
                                 * run out of tx buf.
                                 */
                                if (!tbf) {
-                                       ath_tx_update_baw(sc, tid, seqno);
+                                       ath_tx_update_baw(sc, tid, bf);
 
                                        ath_tx_complete_buf(sc, bf, txq,
                                                            &bf_head, NULL, ts,
@@ -969,7 +978,8 @@ ath_tx_get_tid_subframe(struct ath_softc *sc, struct ath_txq *txq,
                bf->bf_lastbf = bf;
 
                tx_info = IEEE80211_SKB_CB(skb);
-               tx_info->flags &= ~IEEE80211_TX_CTL_CLEAR_PS_FILT;
+               tx_info->flags &= ~(IEEE80211_TX_CTL_CLEAR_PS_FILT |
+                                   IEEE80211_TX_STATUS_EOSP);
 
                /*
                 * No aggregation session is running, but there may be frames
@@ -1009,11 +1019,14 @@ ath_tx_get_tid_subframe(struct ath_softc *sc, struct ath_txq *txq,
 
                        INIT_LIST_HEAD(&bf_head);
                        list_add(&bf->list, &bf_head);
-                       ath_tx_update_baw(sc, tid, seqno);
+                       ath_tx_update_baw(sc, tid, bf);
                        ath_tx_complete_buf(sc, bf, txq, &bf_head, NULL, &ts, 0);
                        continue;
                }
 
+               if (bf_isampdu(bf))
+                       ath_tx_addto_baw(sc, tid, bf);
+
                return bf;
        }
 
@@ -1071,8 +1084,6 @@ ath_tx_form_aggr(struct ath_softc *sc, struct ath_txq *txq,
                bf->bf_next = NULL;
 
                /* link buffers of this frame to the aggregate */
-               if (!fi->baw_tracked)
-                       ath_tx_addto_baw(sc, tid, bf);
                bf->bf_state.ndelim = ndelim;
 
                list_add_tail(&bf->list, bf_q);
@@ -1659,6 +1670,22 @@ void ath_tx_aggr_wakeup(struct ath_softc *sc, struct ath_node *an)
        }
 }
 
+
+static void
+ath9k_set_moredata(struct ath_softc *sc, struct ath_buf *bf, bool val)
+{
+       struct ieee80211_hdr *hdr;
+       u16 mask = cpu_to_le16(IEEE80211_FCTL_MOREDATA);
+       u16 mask_val = mask * val;
+
+       hdr = (struct ieee80211_hdr *) bf->bf_mpdu->data;
+       if ((hdr->frame_control & mask) != mask_val) {
+               hdr->frame_control = (hdr->frame_control & ~mask) | mask_val;
+               dma_sync_single_for_device(sc->dev, bf->bf_buf_addr,
+                       sizeof(*hdr), DMA_TO_DEVICE);
+       }
+}
+
 void ath9k_release_buffered_frames(struct ieee80211_hw *hw,
                                   struct ieee80211_sta *sta,
                                   u16 tids, int nframes,
@@ -1689,12 +1716,11 @@ void ath9k_release_buffered_frames(struct ieee80211_hw *hw,
                        if (!bf)
                                break;
 
+                       ath9k_set_moredata(sc, bf, true);
                        list_add_tail(&bf->list, &bf_q);
                        ath_set_rates(tid->an->vif, tid->an->sta, bf);
-                       if (bf_isampdu(bf)) {
-                               ath_tx_addto_baw(sc, tid, bf);
+                       if (bf_isampdu(bf))
                                bf->bf_state.bf_type &= ~BUF_AGGR;
-                       }
                        if (bf_tail)
                                bf_tail->bf_next = bf;
 
@@ -1712,6 +1738,9 @@ void ath9k_release_buffered_frames(struct ieee80211_hw *hw,
        if (list_empty(&bf_q))
                return;
 
+       if (!more_data)
+               ath9k_set_moredata(sc, bf_tail, false);
+
        info = IEEE80211_SKB_CB(bf_tail->bf_mpdu);
        info->flags |= IEEE80211_TX_STATUS_EOSP;
 
@@ -2407,7 +2436,6 @@ void ath_tx_cabq(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
                .txq = sc->beacon.cabq
        };
        struct ath_tx_info info = {};
-       struct ieee80211_hdr *hdr;
        struct ath_buf *bf_tail = NULL;
        struct ath_buf *bf;
        LIST_HEAD(bf_q);
@@ -2451,15 +2479,10 @@ void ath_tx_cabq(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
        if (list_empty(&bf_q))
                return;
 
-       bf = list_first_entry(&bf_q, struct ath_buf, list);
-       hdr = (struct ieee80211_hdr *) bf->bf_mpdu->data;
-
-       if (hdr->frame_control & cpu_to_le16(IEEE80211_FCTL_MOREDATA)) {
-               hdr->frame_control &= ~cpu_to_le16(IEEE80211_FCTL_MOREDATA);
-               dma_sync_single_for_device(sc->dev, bf->bf_buf_addr,
-                       sizeof(*hdr), DMA_TO_DEVICE);
-       }
+       bf = list_last_entry(&bf_q, struct ath_buf, list);
+       ath9k_set_moredata(sc, bf, false);
 
+       bf = list_first_entry(&bf_q, struct ath_buf, list);
        ath_txq_lock(sc, txctl.txq);
        ath_tx_fill_desc(sc, bf, txctl.txq, 0);
        ath_tx_txqaddbuf(sc, txctl.txq, &bf_q, false);
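
The new ath9k_set_moredata() helper above centralizes the read-modify-write of the MORE_DATA bit: every released PS-buffered frame except the last carries it (the last one gets EOSP on its tx_info instead), and the CAB queue path now clears the bit on the last frame of the burst. The bit handling itself reduces to the small pattern below; this is plain C with the IEEE80211_FCTL_MOREDATA value assumed to be 0x2000, not the driver's helper:

#include <stdbool.h>
#include <stdint.h>

#define FCTL_MOREDATA 0x2000u	/* assumed value of IEEE80211_FCTL_MOREDATA */

/* Set or clear MORE_DATA, touching the header only when the bit actually
 * changes (the driver additionally re-syncs the DMA buffer in that case). */
static void set_moredata(uint16_t *frame_control, bool more)
{
	uint16_t want = more ? FCTL_MOREDATA : 0;

	if ((*frame_control & FCTL_MOREDATA) != want)
		*frame_control = (*frame_control & ~FCTL_MOREDATA) | want;
}
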
index e63b07830f2cf86fcbb16f2d352f85c23deb50d9..f79c337105cb465f6895f39e7d37d7d1d624db74 100644 (file)
@@ -689,11 +689,12 @@ static int wil_cfg80211_scan(struct wiphy *wiphy,
 
        wil_dbg_misc(wil, "scan: wdev=0x%p iftype=%d\n", wdev, wdev->iftype);
 
-       /* check we are client side */
+       /* scan is supported on client interfaces and on AP interface */
        switch (wdev->iftype) {
        case NL80211_IFTYPE_STATION:
        case NL80211_IFTYPE_P2P_CLIENT:
        case NL80211_IFTYPE_P2P_DEVICE:
+       case NL80211_IFTYPE_AP:
                break;
        default:
                return -EOPNOTSUPP;
@@ -1089,18 +1090,51 @@ int wil_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
        int rc;
        bool tx_status;
 
-       /* Note, currently we do not support the "wait" parameter, user-space
-        * must call remain_on_channel before mgmt_tx or listen on a channel
-        * another way (AP/PCP or connected station)
-        * in addition we need to check if specified "chan" argument is
-        * different from currently "listened" channel and fail if it is.
+       wil_dbg_misc(wil, "mgmt_tx: channel %d offchan %d, wait %d\n",
+                    params->chan ? params->chan->hw_value : -1,
+                    params->offchan,
+                    params->wait);
+
+       /* Note, currently we support the "wait" parameter only on AP mode.
+        * In other modes, user-space must call remain_on_channel before
+        * mgmt_tx or listen on a channel other than active one.
         */
 
-       rc = wmi_mgmt_tx(vif, buf, len);
-       tx_status = (rc == 0);
+       if (params->chan && params->chan->hw_value == 0) {
+               wil_err(wil, "invalid channel\n");
+               return -EINVAL;
+       }
 
+       if (wdev->iftype != NL80211_IFTYPE_AP) {
+               wil_dbg_misc(wil,
+                            "send WMI_SW_TX_REQ_CMDID on non-AP interfaces\n");
+               rc = wmi_mgmt_tx(vif, buf, len);
+               goto out;
+       }
+
+       if (!params->chan || params->chan->hw_value == vif->channel) {
+               wil_dbg_misc(wil,
+                            "send WMI_SW_TX_REQ_CMDID for on-channel\n");
+               rc = wmi_mgmt_tx(vif, buf, len);
+               goto out;
+       }
+
+       if (params->offchan == 0) {
+               wil_err(wil,
+                       "invalid channel params: current %d requested %d, off-channel not allowed\n",
+                       vif->channel, params->chan->hw_value);
+               return -EBUSY;
+       }
+
+       /* use the wmi_mgmt_tx_ext only on AP mode and off-channel */
+       rc = wmi_mgmt_tx_ext(vif, buf, len, params->chan->hw_value,
+                            params->wait);
+
+out:
+       tx_status = (rc == 0);
        cfg80211_mgmt_tx_status(wdev, cookie ? *cookie : 0, buf, len,
                                tx_status, GFP_KERNEL);
+
        return rc;
 }
 
index 58ce044b1130df227fc331734f951888cbfcb5ba..51c3330bc316f8c47ff005a55e16a46ec234ef10 100644 (file)
@@ -1388,7 +1388,7 @@ static const struct file_operations fops_bf = {
 };
 
 /*---------temp------------*/
-static void print_temp(struct seq_file *s, const char *prefix, u32 t)
+static void print_temp(struct seq_file *s, const char *prefix, s32 t)
 {
        switch (t) {
        case 0:
@@ -1396,7 +1396,8 @@ static void print_temp(struct seq_file *s, const char *prefix, u32 t)
                seq_printf(s, "%s N/A\n", prefix);
        break;
        default:
-               seq_printf(s, "%s %d.%03d\n", prefix, t / 1000, t % 1000);
+               seq_printf(s, "%s %s%d.%03d\n", prefix, (t < 0 ? "-" : ""),
+                          abs(t / 1000), abs(t % 1000));
                break;
        }
 }
@@ -1404,7 +1405,7 @@ static void print_temp(struct seq_file *s, const char *prefix, u32 t)
 static int wil_temp_debugfs_show(struct seq_file *s, void *data)
 {
        struct wil6210_priv *wil = s->private;
-       u32 t_m, t_r;
+       s32 t_m, t_r;
        int rc = wmi_get_temperature(wil, &t_m, &t_r);
 
        if (rc) {
@@ -1640,6 +1641,7 @@ static void wil_print_rxtid(struct seq_file *s, struct wil_tid_ampdu_rx *r)
        int i;
        u16 index = ((r->head_seq_num - r->ssn) & 0xfff) % r->buf_size;
        unsigned long long drop_dup = r->drop_dup, drop_old = r->drop_old;
+       unsigned long long drop_dup_mcast = r->drop_dup_mcast;
 
        seq_printf(s, "([%2d]) 0x%03x [", r->buf_size, r->head_seq_num);
        for (i = 0; i < r->buf_size; i++) {
@@ -1649,9 +1651,9 @@ static void wil_print_rxtid(struct seq_file *s, struct wil_tid_ampdu_rx *r)
                        seq_printf(s, "%c", r->reorder_buf[i] ? '*' : '_');
        }
        seq_printf(s,
-                  "] total %llu drop %llu (dup %llu + old %llu) last 0x%03x\n",
-                  r->total, drop_dup + drop_old, drop_dup, drop_old,
-                  r->ssn_last_drop);
+                  "] total %llu drop %llu (dup %llu + old %llu + dup mcast %llu) last 0x%03x\n",
+                  r->total, drop_dup + drop_old + drop_dup_mcast, drop_dup,
+                  drop_old, drop_dup_mcast, r->ssn_last_drop);
 }
 
 static void wil_print_rxtid_crypto(struct seq_file *s, int tid,
@@ -1733,13 +1735,12 @@ __acquires(&p->tid_rx_lock) __releases(&p->tid_rx_lock)
                                   p->stats.rx_short_frame,
                                   p->stats.rx_large_frame,
                                   p->stats.rx_replay);
-
-                       if (wil->use_enhanced_dma_hw)
-                               seq_printf(s,
-                                          "mic error  %lu, key error %lu, amsdu error %lu\n",
-                                          p->stats.rx_mic_error,
-                                          p->stats.rx_key_error,
-                                          p->stats.rx_amsdu_error);
+                       seq_printf(s,
+                                  "mic error %lu, key error %lu, amsdu error %lu, csum error %lu\n",
+                                  p->stats.rx_mic_error,
+                                  p->stats.rx_key_error,
+                                  p->stats.rx_amsdu_error,
+                                  p->stats.rx_csum_err);
 
                        seq_puts(s, "Rx/MCS:");
                        for (mcs = 0; mcs < ARRAY_SIZE(p->stats.rx_per_mcs);
@@ -1801,6 +1802,343 @@ static const struct file_operations fops_mids = {
        .llseek         = seq_lseek,
 };
 
+static int wil_tx_latency_debugfs_show(struct seq_file *s, void *data)
+__acquires(&p->tid_rx_lock) __releases(&p->tid_rx_lock)
+{
+       struct wil6210_priv *wil = s->private;
+       int i, bin;
+
+       for (i = 0; i < ARRAY_SIZE(wil->sta); i++) {
+               struct wil_sta_info *p = &wil->sta[i];
+               char *status = "unknown";
+               u8 aid = 0;
+               u8 mid;
+
+               if (!p->tx_latency_bins)
+                       continue;
+
+               switch (p->status) {
+               case wil_sta_unused:
+                       status = "unused   ";
+                       break;
+               case wil_sta_conn_pending:
+                       status = "pending  ";
+                       break;
+               case wil_sta_connected:
+                       status = "connected";
+                       aid = p->aid;
+                       break;
+               }
+               mid = (p->status != wil_sta_unused) ? p->mid : U8_MAX;
+               seq_printf(s, "[%d] %pM %s MID %d AID %d\n", i, p->addr, status,
+                          mid, aid);
+
+               if (p->status == wil_sta_connected) {
+                       u64 num_packets = 0;
+                       u64 tx_latency_avg = p->stats.tx_latency_total_us;
+
+                       seq_puts(s, "Tx/Latency bin:");
+                       for (bin = 0; bin < WIL_NUM_LATENCY_BINS; bin++) {
+                               seq_printf(s, " %lld",
+                                          p->tx_latency_bins[bin]);
+                               num_packets += p->tx_latency_bins[bin];
+                       }
+                       seq_puts(s, "\n");
+                       if (!num_packets)
+                               continue;
+                       do_div(tx_latency_avg, num_packets);
+                       seq_printf(s, "Tx/Latency min/avg/max (us): %d/%lld/%d",
+                                  p->stats.tx_latency_min_us,
+                                  tx_latency_avg,
+                                  p->stats.tx_latency_max_us);
+
+                       seq_puts(s, "\n");
+               }
+       }
+
+       return 0;
+}
+
+static int wil_tx_latency_seq_open(struct inode *inode, struct file *file)
+{
+       return single_open(file, wil_tx_latency_debugfs_show,
+                          inode->i_private);
+}
+
+static ssize_t wil_tx_latency_write(struct file *file, const char __user *buf,
+                                   size_t len, loff_t *ppos)
+{
+       struct seq_file *s = file->private_data;
+       struct wil6210_priv *wil = s->private;
+       int val, rc, i;
+       bool enable;
+
+       rc = kstrtoint_from_user(buf, len, 0, &val);
+       if (rc) {
+               wil_err(wil, "Invalid argument\n");
+               return rc;
+       }
+       if (val == 1)
+               /* default resolution */
+               val = 500;
+       if (val && (val < 50 || val > 1000)) {
+               wil_err(wil, "Invalid resolution %d\n", val);
+               return -EINVAL;
+       }
+
+       enable = !!val;
+       if (wil->tx_latency == enable)
+               return len;
+
+       wil_info(wil, "%s TX latency measurements (resolution %dusec)\n",
+                enable ? "Enabling" : "Disabling", val);
+
+       if (enable) {
+               size_t sz = sizeof(u64) * WIL_NUM_LATENCY_BINS;
+
+               wil->tx_latency_res = val;
+               for (i = 0; i < ARRAY_SIZE(wil->sta); i++) {
+                       struct wil_sta_info *sta = &wil->sta[i];
+
+                       kfree(sta->tx_latency_bins);
+                       sta->tx_latency_bins = kzalloc(sz, GFP_KERNEL);
+                       if (!sta->tx_latency_bins)
+                               return -ENOMEM;
+                       sta->stats.tx_latency_min_us = U32_MAX;
+                       sta->stats.tx_latency_max_us = 0;
+                       sta->stats.tx_latency_total_us = 0;
+               }
+       }
+       wil->tx_latency = enable;
+
+       return len;
+}
+
+static const struct file_operations fops_tx_latency = {
+       .open           = wil_tx_latency_seq_open,
+       .release        = single_release,
+       .read           = seq_read,
+       .write          = wil_tx_latency_write,
+       .llseek         = seq_lseek,
+};
+
+static void wil_link_stats_print_basic(struct wil6210_vif *vif,
+                                      struct seq_file *s,
+                                      struct wmi_link_stats_basic *basic)
+{
+       char per[5] = "?";
+
+       if (basic->per_average != 0xff)
+               snprintf(per, sizeof(per), "%d%%", basic->per_average);
+
+       seq_printf(s, "CID %d {\n"
+                  "\tTxMCS %d TxTpt %d\n"
+                  "\tGoodput(rx:tx) %d:%d\n"
+                  "\tRxBcastFrames %d\n"
+                  "\tRSSI %d SQI %d SNR %d PER %s\n"
+                  "\tRx RFC %d Ant num %d\n"
+                  "\tSectors(rx:tx) my %d:%d peer %d:%d\n"
+                  "}\n",
+                  basic->cid,
+                  basic->bf_mcs, le32_to_cpu(basic->tx_tpt),
+                  le32_to_cpu(basic->rx_goodput),
+                  le32_to_cpu(basic->tx_goodput),
+                  le32_to_cpu(basic->rx_bcast_frames),
+                  basic->rssi, basic->sqi, basic->snr, per,
+                  basic->selected_rfc, basic->rx_effective_ant_num,
+                  basic->my_rx_sector, basic->my_tx_sector,
+                  basic->other_rx_sector, basic->other_tx_sector);
+}
+
+static void wil_link_stats_print_global(struct wil6210_priv *wil,
+                                       struct seq_file *s,
+                                       struct wmi_link_stats_global *global)
+{
+       seq_printf(s, "Frames(rx:tx) %d:%d\n"
+                  "BA Frames(rx:tx) %d:%d\n"
+                  "Beacons %d\n"
+                  "Rx Errors (MIC:CRC) %d:%d\n"
+                  "Tx Errors (no ack) %d\n",
+                  le32_to_cpu(global->rx_frames),
+                  le32_to_cpu(global->tx_frames),
+                  le32_to_cpu(global->rx_ba_frames),
+                  le32_to_cpu(global->tx_ba_frames),
+                  le32_to_cpu(global->tx_beacons),
+                  le32_to_cpu(global->rx_mic_errors),
+                  le32_to_cpu(global->rx_crc_errors),
+                  le32_to_cpu(global->tx_fail_no_ack));
+}
+
+static void wil_link_stats_debugfs_show_vif(struct wil6210_vif *vif,
+                                           struct seq_file *s)
+{
+       struct wil6210_priv *wil = vif_to_wil(vif);
+       struct wmi_link_stats_basic *stats;
+       int i;
+
+       if (!vif->fw_stats_ready) {
+               seq_puts(s, "no statistics\n");
+               return;
+       }
+
+       seq_printf(s, "TSF %lld\n", vif->fw_stats_tsf);
+       for (i = 0; i < ARRAY_SIZE(wil->sta); i++) {
+               if (wil->sta[i].status == wil_sta_unused)
+                       continue;
+               if (wil->sta[i].mid != vif->mid)
+                       continue;
+
+               stats = &wil->sta[i].fw_stats_basic;
+               wil_link_stats_print_basic(vif, s, stats);
+       }
+}
+
+static int wil_link_stats_debugfs_show(struct seq_file *s, void *data)
+{
+       struct wil6210_priv *wil = s->private;
+       struct wil6210_vif *vif;
+       int i, rc;
+
+       rc = mutex_lock_interruptible(&wil->vif_mutex);
+       if (rc)
+               return rc;
+
+       /* iterate over all MIDs and show per-cid statistics. Then show the
+        * global statistics
+        */
+       for (i = 0; i < wil->max_vifs; i++) {
+               vif = wil->vifs[i];
+
+               seq_printf(s, "MID %d ", i);
+               if (!vif) {
+                       seq_puts(s, "unused\n");
+                       continue;
+               }
+
+               wil_link_stats_debugfs_show_vif(vif, s);
+       }
+
+       mutex_unlock(&wil->vif_mutex);
+
+       return 0;
+}
+
+static int wil_link_stats_seq_open(struct inode *inode, struct file *file)
+{
+       return single_open(file, wil_link_stats_debugfs_show, inode->i_private);
+}
+
+static ssize_t wil_link_stats_write(struct file *file, const char __user *buf,
+                                   size_t len, loff_t *ppos)
+{
+       struct seq_file *s = file->private_data;
+       struct wil6210_priv *wil = s->private;
+       int cid, interval, rc, i;
+       struct wil6210_vif *vif;
+       char *kbuf = kmalloc(len + 1, GFP_KERNEL);
+
+       if (!kbuf)
+               return -ENOMEM;
+
+       rc = simple_write_to_buffer(kbuf, len, ppos, buf, len);
+       if (rc != len) {
+               kfree(kbuf);
+               return rc >= 0 ? -EIO : rc;
+       }
+
+       kbuf[len] = '\0';
+       /* specify cid (use -1 for all cids) and snapshot interval in ms */
+       rc = sscanf(kbuf, "%d %d", &cid, &interval);
+       kfree(kbuf);
+       if (rc < 0)
+               return rc;
+       if (rc < 2 || interval < 0)
+               return -EINVAL;
+
+       wil_info(wil, "request link statistics, cid %d interval %d\n",
+                cid, interval);
+
+       rc = mutex_lock_interruptible(&wil->vif_mutex);
+       if (rc)
+               return rc;
+
+       for (i = 0; i < wil->max_vifs; i++) {
+               vif = wil->vifs[i];
+               if (!vif)
+                       continue;
+
+               rc = wmi_link_stats_cfg(vif, WMI_LINK_STATS_TYPE_BASIC,
+                                       (cid == -1 ? 0xff : cid), interval);
+               if (rc)
+                       wil_err(wil, "link statistics failed for mid %d\n", i);
+       }
+       mutex_unlock(&wil->vif_mutex);
+
+       return len;
+}
+
+static const struct file_operations fops_link_stats = {
+       .open           = wil_link_stats_seq_open,
+       .release        = single_release,
+       .read           = seq_read,
+       .write          = wil_link_stats_write,
+       .llseek         = seq_lseek,
+};
+
+static int
+wil_link_stats_global_debugfs_show(struct seq_file *s, void *data)
+{
+       struct wil6210_priv *wil = s->private;
+
+       if (!wil->fw_stats_global.ready)
+               return 0;
+
+       seq_printf(s, "TSF %lld\n", wil->fw_stats_global.tsf);
+       wil_link_stats_print_global(wil, s, &wil->fw_stats_global.stats);
+
+       return 0;
+}
+
+static int
+wil_link_stats_global_seq_open(struct inode *inode, struct file *file)
+{
+       return single_open(file, wil_link_stats_global_debugfs_show,
+                          inode->i_private);
+}
+
+static ssize_t
+wil_link_stats_global_write(struct file *file, const char __user *buf,
+                           size_t len, loff_t *ppos)
+{
+       struct seq_file *s = file->private_data;
+       struct wil6210_priv *wil = s->private;
+       int interval, rc;
+       struct wil6210_vif *vif = ndev_to_vif(wil->main_ndev);
+
+       /* specify snapshot interval in ms */
+       rc = kstrtoint_from_user(buf, len, 0, &interval);
+       if (rc || interval < 0) {
+               wil_err(wil, "Invalid argument\n");
+               return -EINVAL;
+       }
+
+       wil_info(wil, "request global link stats, interval %d\n", interval);
+
+       rc = wmi_link_stats_cfg(vif, WMI_LINK_STATS_TYPE_GLOBAL, 0, interval);
+       if (rc)
+               wil_err(wil, "global link stats failed %d\n", rc);
+
+       return rc ? rc : len;
+}
+
+static const struct file_operations fops_link_stats_global = {
+       .open           = wil_link_stats_global_seq_open,
+       .release        = single_release,
+       .read           = seq_read,
+       .write          = wil_link_stats_global_write,
+       .llseek         = seq_lseek,
+};
+
 static ssize_t wil_read_file_led_cfg(struct file *file, char __user *user_buf,
                                     size_t count, loff_t *ppos)
 {
@@ -2134,6 +2472,9 @@ static const struct {
        {"srings",      0444,           &fops_srings},
        {"status_msg",  0444,           &fops_status_msg},
        {"rx_buff_mgmt",        0444,   &fops_rx_buff_mgmt},
+       {"tx_latency",  0644,           &fops_tx_latency},
+       {"link_stats",  0644,           &fops_link_stats},
+       {"link_stats_global",   0644,   &fops_link_stats_global},
 };
 
 static void wil6210_debugfs_init_files(struct wil6210_priv *wil,
@@ -2250,10 +2591,14 @@ int wil6210_debugfs_init(struct wil6210_priv *wil)
 
 void wil6210_debugfs_remove(struct wil6210_priv *wil)
 {
+       int i;
+
        debugfs_remove_recursive(wil->debug);
        wil->debug = NULL;
 
        kfree(wil->dbg_data.data_arr);
+       for (i = 0; i < ARRAY_SIZE(wil->sta); i++)
+               kfree(wil->sta[i].tx_latency_bins);
 
        /* free pmc memory without sending command to fw, as it will
         * be reset on the way down anyway
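
The three debugfs entries added above (tx_latency, link_stats, link_stats_global) all share the same plumbing: single_open() binds a show() routine for reads through the seq_file interface, while the write handler parses the user buffer (kstrtoint_from_user(), or sscanf() on a copied buffer) and reconfigures the feature. A minimal sketch of that read/write debugfs idiom, with a hypothetical node and knob:

#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>

static int example_interval;	/* hypothetical knob exposed by the file */

static int example_show(struct seq_file *s, void *data)
{
	seq_printf(s, "interval %d\n", example_interval);
	return 0;
}

static int example_open(struct inode *inode, struct file *file)
{
	/* inode->i_private carries the pointer passed to debugfs_create_file() */
	return single_open(file, example_show, inode->i_private);
}

static ssize_t example_write(struct file *file, const char __user *buf,
			     size_t len, loff_t *ppos)
{
	int val, rc;

	rc = kstrtoint_from_user(buf, len, 0, &val);
	if (rc)
		return rc;
	if (val < 0)
		return -EINVAL;

	example_interval = val;
	return len;
}

static const struct file_operations fops_example = {
	.open		= example_open,
	.release	= single_release,
	.read		= seq_read,
	.write		= example_write,
	.llseek		= seq_lseek,
};

/* registration, e.g. from the driver's debugfs init:
 * debugfs_create_file("example", 0644, parent_dentry, priv, &fops_example);
 */
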
index 540fc20984d8fe8c65ef56b074f47030da518084..3e2bbbceca06e9dcbf0f83dd64f79596c8a2dd43 100644 (file)
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2014-2015,2017 Qualcomm Atheros, Inc.
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
  *
  * Permission to use, copy, modify, and/or distribute this software for any
  * purpose with or without fee is hereby granted, provided that the above
@@ -22,6 +23,8 @@
 MODULE_FIRMWARE(WIL_FW_NAME_DEFAULT);
 MODULE_FIRMWARE(WIL_FW_NAME_SPARROW_PLUS);
 MODULE_FIRMWARE(WIL_BOARD_FILE_NAME);
+MODULE_FIRMWARE(WIL_FW_NAME_TALYN);
+MODULE_FIRMWARE(WIL_BRD_NAME_TALYN);
 
 static
 void wil_memset_toio_32(volatile void __iomem *dst, u32 val,
index 718161b829c2b9ef741c3e4b28f992cfc129b930..388b3d4717cae06f0bcd60f4004fe7224674c24f 100644 (file)
@@ -145,7 +145,7 @@ fw_handle_capabilities(struct wil6210_priv *wil, const void *data,
                                    capabilities);
        bitmap_zero(wil->fw_capabilities, WMI_FW_CAPABILITY_MAX);
        memcpy(wil->fw_capabilities, rec->capabilities,
-              min(sizeof(wil->fw_capabilities), capa_size));
+              min_t(size_t, sizeof(wil->fw_capabilities), capa_size));
        wil_hex_dump_fw("CAPA", DUMP_PREFIX_OFFSET, 16, 1,
                        rec->capabilities, capa_size, false);
        return 0;
index d7e112da6a8d3ba520aa9ed232053cd9481f16a8..5d287a8e1b458a8aca674275a55c007df944e4c3 100644 (file)
@@ -625,6 +625,15 @@ static irqreturn_t wil6210_irq_misc_thread(int irq, void *cookie)
 
        wil6210_unmask_irq_misc(wil, false);
 
+       /* in non-triple MSI case, this is done inside wil6210_thread_irq
+        * because it has to be done after unmasking the pseudo.
+        */
+       if (wil->n_msi == 3 && wil->suspend_resp_rcvd) {
+               wil_dbg_irq(wil, "set suspend_resp_comp to true\n");
+               wil->suspend_resp_comp = true;
+               wake_up_interruptible(&wil->wq);
+       }
+
        return IRQ_HANDLED;
 }
 
@@ -782,6 +791,40 @@ static irqreturn_t wil6210_hardirq(int irq, void *cookie)
        return rc;
 }
 
+static int wil6210_request_3msi(struct wil6210_priv *wil, int irq)
+{
+       int rc;
+
+       /* IRQs are in the following order:
+        * - Tx
+        * - Rx
+        * - Misc
+        */
+       rc = request_irq(irq, wil->txrx_ops.irq_tx, IRQF_SHARED,
+                        WIL_NAME "_tx", wil);
+       if (rc)
+               return rc;
+
+       rc = request_irq(irq + 1, wil->txrx_ops.irq_rx, IRQF_SHARED,
+                        WIL_NAME "_rx", wil);
+       if (rc)
+               goto free0;
+
+       rc = request_threaded_irq(irq + 2, wil6210_irq_misc,
+                                 wil6210_irq_misc_thread,
+                                 IRQF_SHARED, WIL_NAME "_misc", wil);
+       if (rc)
+               goto free1;
+
+       return 0;
+free1:
+       free_irq(irq + 1, wil);
+free0:
+       free_irq(irq, wil);
+
+       return rc;
+}
+
 /* can't use wil_ioread32_and_clear because ICC value is not set yet */
 static inline void wil_clear32(void __iomem *addr)
 {
@@ -822,11 +865,12 @@ void wil6210_clear_halp(struct wil6210_priv *wil)
        wil6210_unmask_halp(wil);
 }
 
-int wil6210_init_irq(struct wil6210_priv *wil, int irq, bool use_msi)
+int wil6210_init_irq(struct wil6210_priv *wil, int irq)
 {
        int rc;
 
-       wil_dbg_misc(wil, "init_irq: %s\n", use_msi ? "MSI" : "INTx");
+       wil_dbg_misc(wil, "init_irq: %s, n_msi=%d\n",
+                    wil->n_msi ? "MSI" : "INTx", wil->n_msi);
 
        if (wil->use_enhanced_dma_hw) {
                wil->txrx_ops.irq_tx = wil6210_irq_tx_edma;
@@ -835,10 +879,14 @@ int wil6210_init_irq(struct wil6210_priv *wil, int irq, bool use_msi)
                wil->txrx_ops.irq_tx = wil6210_irq_tx;
                wil->txrx_ops.irq_rx = wil6210_irq_rx;
        }
-       rc = request_threaded_irq(irq, wil6210_hardirq,
-                                 wil6210_thread_irq,
-                                 use_msi ? 0 : IRQF_SHARED,
-                                 WIL_NAME, wil);
+
+       if (wil->n_msi == 3)
+               rc = wil6210_request_3msi(wil, irq);
+       else
+               rc = request_threaded_irq(irq, wil6210_hardirq,
+                                         wil6210_thread_irq,
+                                         wil->n_msi ? 0 : IRQF_SHARED,
+                                         WIL_NAME, wil);
        return rc;
 }
 
@@ -848,4 +896,8 @@ void wil6210_fini_irq(struct wil6210_priv *wil, int irq)
 
        wil_mask_irq(wil);
        free_irq(irq, wil);
+       if (wil->n_msi == 3) {
+               free_irq(irq + 1, wil);
+               free_irq(irq + 2, wil);
+       }
 }
index 4de19bd40a589d8389e330aa1f09c15b861f7a8a..7debed6bec06b4dcbbd66dc6e91a9392c9857387 100644 (file)
@@ -28,6 +28,7 @@
 #define WAIT_FOR_HALP_VOTE_MS 100
 #define WAIT_FOR_SCAN_ABORT_MS 1000
 #define WIL_DEFAULT_NUM_RX_STATUS_RINGS 1
+#define WIL_BOARD_FILE_MAX_NAMELEN 128
 
 bool debug_fw; /* = false; */
 module_param(debug_fw, bool, 0444);
@@ -278,6 +279,7 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock)
        }
        /* statistics */
        memset(&sta->stats, 0, sizeof(sta->stats));
+       sta->stats.tx_latency_min_us = U32_MAX;
 }
 
 static bool wil_vif_is_connected(struct wil6210_priv *wil, u8 mid)
@@ -1128,6 +1130,9 @@ void wil_refresh_fw_capabilities(struct wil6210_priv *wil)
                wiphy->max_sched_scan_plans = WMI_MAX_PLANS_NUM;
        }
 
+       if (test_bit(WMI_FW_CAPABILITY_TX_REQ_EXT, wil->fw_capabilities))
+               wiphy->flags |= WIPHY_FLAG_OFFCHAN_TX;
+
        if (wil->platform_ops.set_features) {
                features = (test_bit(WMI_FW_CAPABILITY_REF_CLOCK_CONTROL,
                                     wil->fw_capabilities) &&
@@ -1135,8 +1140,20 @@ void wil_refresh_fw_capabilities(struct wil6210_priv *wil)
                                     wil->platform_capa)) ?
                        BIT(WIL_PLATFORM_FEATURE_FW_EXT_CLK_CONTROL) : 0;
 
+               if (wil->n_msi == 3)
+                       features |= BIT(WIL_PLATFORM_FEATURE_TRIPLE_MSI);
+
                wil->platform_ops.set_features(wil->platform_handle, features);
        }
+
+       if (test_bit(WMI_FW_CAPABILITY_BACK_WIN_SIZE_64,
+                    wil->fw_capabilities)) {
+               wil->max_agg_wsize = WIL_MAX_AGG_WSIZE_64;
+               wil->max_ampdu_size = WIL_MAX_AMPDU_SIZE_128;
+       } else {
+               wil->max_agg_wsize = WIL_MAX_AGG_WSIZE;
+               wil->max_ampdu_size = WIL_MAX_AMPDU_SIZE;
+       }
 }
 
 void wil_mbox_ring_le2cpus(struct wil6210_mbox_ring *r)
@@ -1148,6 +1165,28 @@ void wil_mbox_ring_le2cpus(struct wil6210_mbox_ring *r)
        le32_to_cpus(&r->head);
 }
 
+/* construct actual board file name to use */
+void wil_get_board_file(struct wil6210_priv *wil, char *buf, size_t len)
+{
+       const char *board_file;
+       const char *wil_talyn_fw_name = ftm_mode ? WIL_FW_NAME_FTM_TALYN :
+                             WIL_FW_NAME_TALYN;
+
+       if (wil->board_file) {
+               board_file = wil->board_file;
+       } else {
+               /* If specific FW file is used for Talyn,
+                * use specific board file
+                */
+               if (strcmp(wil->wil_fw_name, wil_talyn_fw_name) == 0)
+                       board_file = WIL_BRD_NAME_TALYN;
+               else
+                       board_file = WIL_BOARD_FILE_NAME;
+       }
+
+       strlcpy(buf, board_file, len);
+}
+
 static int wil_get_bl_info(struct wil6210_priv *wil)
 {
        struct net_device *ndev = wil->main_ndev;
@@ -1269,7 +1308,7 @@ static int wil_get_otp_info(struct wil6210_priv *wil)
 
 static int wil_wait_for_fw_ready(struct wil6210_priv *wil)
 {
-       ulong to = msecs_to_jiffies(1000);
+       ulong to = msecs_to_jiffies(2000);
        ulong left = wait_for_completion_timeout(&wil->wmi_ready, to);
 
        if (0 == left) {
@@ -1519,8 +1558,17 @@ int wil_reset(struct wil6210_priv *wil, bool load_fw)
 
        wil_set_oob_mode(wil, oob_mode);
        if (load_fw) {
+               char board_file[WIL_BOARD_FILE_MAX_NAMELEN];
+
+               if (wil->secured_boot) {
+                       wil_err(wil, "secured boot is not supported\n");
+                       return -ENOTSUPP;
+               }
+
+               board_file[0] = '\0';
+               wil_get_board_file(wil, board_file, sizeof(board_file));
                wil_info(wil, "Use firmware <%s> + board <%s>\n",
-                        wil->wil_fw_name, WIL_BOARD_FILE_NAME);
+                        wil->wil_fw_name, board_file);
 
                if (!no_flash)
                        wil_bl_prepare_halt(wil);
@@ -1532,11 +1580,9 @@ int wil_reset(struct wil6210_priv *wil, bool load_fw)
                if (rc)
                        goto out;
                if (wil->brd_file_addr)
-                       rc = wil_request_board(wil, WIL_BOARD_FILE_NAME);
+                       rc = wil_request_board(wil, board_file);
                else
-                       rc = wil_request_firmware(wil,
-                                                 WIL_BOARD_FILE_NAME,
-                                                 true);
+                       rc = wil_request_firmware(wil, board_file, true);
                if (rc)
                        goto out;
 
@@ -1568,6 +1614,13 @@ int wil_reset(struct wil6210_priv *wil, bool load_fw)
 
                wil->txrx_ops.configure_interrupt_moderation(wil);
 
+               /* Enable OFU rdy valid bug fix, to prevent hang in oful34_rx
+                * while there is back-pressure from Host during RX
+                */
+               if (wil->hw_version >= HW_VER_TALYN_MB)
+                       wil_s(wil, RGF_DMA_MISC_CTL,
+                             BIT_OFUL34_RDY_VALID_BUG_FIX_EN);
+
                rc = wil_restore_vifs(wil);
                if (rc) {
                        wil_err(wil, "failed to restore vifs, rc %d\n", rc);
index 8b148cb91372f4fac59aa3cac56469f809c1145e..89119e7facd00c661600bdd4d7648570b733621c 100644
 #include <linux/rtnetlink.h>
 #include <linux/pm_runtime.h>
 
-static bool use_msi = true;
-module_param(use_msi, bool, 0444);
-MODULE_PARM_DESC(use_msi, " Use MSI interrupt, default - true");
+static int n_msi = 3;
+module_param(n_msi, int, 0444);
+MODULE_PARM_DESC(n_msi, " Use MSI interrupts: 0 - use INTx, 1 - single MSI, 3 - three MSIs (default)");
 
-static bool ftm_mode;
+bool ftm_mode;
 module_param(ftm_mode, bool, 0444);
 MODULE_PARM_DESC(ftm_mode, " Set factory test mode, default - false");
 
@@ -93,6 +93,10 @@ int wil_set_capabilities(struct wil6210_priv *wil)
                if (wil_r(wil, RGF_USER_OTP_HW_RD_MACHINE_1) &
                    BIT_NO_FLASH_INDICATION)
                        set_bit(hw_capa_no_flash, wil->hw_capa);
+               wil_fw_name = ftm_mode ? WIL_FW_NAME_FTM_TALYN :
+                             WIL_FW_NAME_TALYN;
+               if (wil_fw_verify_file_exists(wil, wil_fw_name))
+                       wil->wil_fw_name = wil_fw_name;
                break;
        case JTAG_DEV_ID_TALYN_MB:
                wil->hw_name = "Talyn-MB";
@@ -104,6 +108,10 @@ int wil_set_capabilities(struct wil6210_priv *wil)
                set_bit(hw_capa_no_flash, wil->hw_capa);
                wil->use_enhanced_dma_hw = true;
                wil->use_rx_hw_reordering = true;
+               wil_fw_name = ftm_mode ? WIL_FW_NAME_FTM_TALYN :
+                             WIL_FW_NAME_TALYN;
+               if (wil_fw_verify_file_exists(wil, wil_fw_name))
+                       wil->wil_fw_name = wil_fw_name;
                break;
        default:
                wil_err(wil, "Unknown board hardware, chip_id 0x%08x, chip_revision 0x%08x\n",
@@ -142,12 +150,24 @@ int wil_set_capabilities(struct wil6210_priv *wil)
 
 void wil_disable_irq(struct wil6210_priv *wil)
 {
-       disable_irq(wil->pdev->irq);
+       int irq = wil->pdev->irq;
+
+       disable_irq(irq);
+       if (wil->n_msi == 3) {
+               disable_irq(irq + 1);
+               disable_irq(irq + 2);
+       }
 }
 
 void wil_enable_irq(struct wil6210_priv *wil)
 {
-       enable_irq(wil->pdev->irq);
+       int irq = wil->pdev->irq;
+
+       enable_irq(irq);
+       if (wil->n_msi == 3) {
+               enable_irq(irq + 1);
+               enable_irq(irq + 2);
+       }
 }
 
 static void wil_remove_all_additional_vifs(struct wil6210_priv *wil)
@@ -174,28 +194,47 @@ static int wil_if_pcie_enable(struct wil6210_priv *wil)
         * and only MSI should be used
         */
        int msi_only = pdev->msi_enabled;
-       bool _use_msi = use_msi;
 
        wil_dbg_misc(wil, "if_pcie_enable\n");
 
        pci_set_master(pdev);
 
-       wil_dbg_misc(wil, "Setup %s interrupt\n", use_msi ? "MSI" : "INTx");
+       /* how many MSI interrupts to request? */
+       switch (n_msi) {
+       case 3:
+       case 1:
+               wil_dbg_misc(wil, "Setup %d MSI interrupts\n", n_msi);
+               break;
+       case 0:
+               wil_dbg_misc(wil, "MSI interrupts disabled, use INTx\n");
+               break;
+       default:
+               wil_err(wil, "Invalid n_msi=%d, default to 1\n", n_msi);
+               n_msi = 1;
+       }
+
+       if (n_msi == 3 &&
+           pci_alloc_irq_vectors(pdev, n_msi, n_msi, PCI_IRQ_MSI) < n_msi) {
+               wil_err(wil, "3 MSI mode failed, try 1 MSI\n");
+               n_msi = 1;
+       }
 
-       if (use_msi && pci_enable_msi(pdev)) {
+       if (n_msi == 1 && pci_enable_msi(pdev)) {
                wil_err(wil, "pci_enable_msi failed, use INTx\n");
-               _use_msi = false;
+               n_msi = 0;
        }
 
-       if (!_use_msi && msi_only) {
+       wil->n_msi = n_msi;
+
+       if (wil->n_msi == 0 && msi_only) {
                wil_err(wil, "Interrupt pin not routed, unable to use INTx\n");
                rc = -ENODEV;
                goto stop_master;
        }
 
-       rc = wil6210_init_irq(wil, pdev->irq, _use_msi);
+       rc = wil6210_init_irq(wil, pdev->irq);
        if (rc)
-               goto stop_master;
+               goto release_vectors;
 
        /* need reset here to obtain MAC */
        mutex_lock(&wil->mutex);
@@ -208,8 +247,9 @@ static int wil_if_pcie_enable(struct wil6210_priv *wil)
 
  release_irq:
        wil6210_fini_irq(wil, pdev->irq);
-       /* safe to call if no MSI */
-       pci_disable_msi(pdev);
+ release_vectors:
+       /* safe to call if no allocation */
+       pci_free_irq_vectors(pdev);
  stop_master:
        pci_clear_master(pdev);
        return rc;
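The PCIe setup above now tries three MSI vectors, falls back to a single MSI via pci_enable_msi(), and finally to INTx; the companion interrupt code indexes the extra vectors as pdev->irq + 1 and pdev->irq + 2. Here is a hedged kernel-style sketch of the same 3 -> 1 -> INTx ladder, written around pci_alloc_irq_vectors()/pci_irq_vector() for both steps; this is a simplification of what the patch does and is not runnable outside a kernel module.

#include <linux/pci.h>

/* Sketch only: negotiate the MSI vector count, then fetch the first vector's IRQ */
static int example_alloc_irqs(struct pci_dev *pdev, int *n_msi)
{
        int nvec = *n_msi;

        if (nvec == 3 && pci_alloc_irq_vectors(pdev, 3, 3, PCI_IRQ_MSI) < 3)
                nvec = 1;                       /* three vectors not available */

        if (nvec == 1 && pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI) < 1)
                nvec = 0;                       /* no MSI at all, fall back to INTx */

        *n_msi = nvec;

        /* pci_irq_vector() returns the Linux IRQ for vector 0 in every mode,
         * including INTx, so callers need not touch pdev->irq directly.
         */
        return pci_irq_vector(pdev, 0);
}

Using pci_irq_vector(pdev, 1) and pci_irq_vector(pdev, 2) instead of pdev->irq + 1 / + 2 would also drop the implicit assumption that the MSI vectors are numbered consecutively.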
index 22475a1ddb7f0129139a08956f46cdfd2f4e7fbe..b608aa16b4f1edf5fa772c6be4cb920b860b6a01 100644
@@ -95,7 +95,7 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock)
 {
        struct wil6210_vif *vif;
        struct net_device *ndev;
-       int tid, cid, mid, mcast;
+       int tid, cid, mid, mcast, retry;
        u16 seq;
        struct wil_sta_info *sta;
        struct wil_tid_ampdu_rx *r;
@@ -103,7 +103,7 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock)
        int index;
 
        wil->txrx_ops.get_reorder_params(wil, skb, &tid, &cid, &mid, &seq,
-                                        &mcast);
+                                        &mcast, &retry);
        sta = &wil->sta[cid];
 
        wil_dbg_txrx(wil, "MID %d CID %d TID %d Seq 0x%03x mcast %01x\n",
@@ -117,11 +117,6 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock)
        }
        ndev = vif_to_ndev(vif);
 
-       if (unlikely(mcast)) {
-               wil_netif_rx_any(skb, ndev);
-               return;
-       }
-
        spin_lock(&sta->tid_rx_lock);
 
        r = sta->tid_rx[tid];
@@ -130,6 +125,19 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock)
                goto out;
        }
 
+       if (unlikely(mcast)) {
+               if (retry && seq == r->mcast_last_seq) {
+                       r->drop_dup_mcast++;
+                       wil_dbg_txrx(wil, "Rx drop: dup mcast seq 0x%03x\n",
+                                    seq);
+                       dev_kfree_skb(skb);
+                       goto out;
+               }
+               r->mcast_last_seq = seq;
+               wil_netif_rx_any(skb, ndev);
+               goto out;
+       }
+
        r->total++;
        hseq = r->head_seq_num;
 
@@ -262,6 +270,7 @@ struct wil_tid_ampdu_rx *wil_tid_ampdu_rx_alloc(struct wil6210_priv *wil,
        r->buf_size = size;
        r->stored_mpdu_num = 0;
        r->first_time = true;
+       r->mcast_last_seq = U16_MAX;
        return r;
 }
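With mcast_last_seq initialised to U16_MAX, the first multicast frame on a reorder context can never be classified as a duplicate; afterwards a frame is dropped only when its retry bit is set and its sequence number repeats the previous multicast SN. On the eDMA path, where the status message carries no retry bit and the accessor later in the diff always reports 1, this reduces to a plain SN comparison. A small standalone model of the rule, with invented values, not driver code:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define SEQ_INVALID UINT16_MAX  /* mirrors the U16_MAX init: nothing seen yet */

struct mcast_ctx {
        uint16_t last_seq;
        unsigned long long drop_dup;
};

/* Returns true when the frame should be dropped as a duplicate retry */
static bool mcast_is_dup(struct mcast_ctx *c, bool retry, uint16_t seq)
{
        if (retry && seq == c->last_seq) {
                c->drop_dup++;
                return true;
        }
        c->last_seq = seq;      /* remember the SN for the next comparison */
        return false;
}

int main(void)
{
        struct mcast_ctx c = { .last_seq = SEQ_INVALID };

        printf("%d\n", mcast_is_dup(&c, false, 0x010)); /* 0: first copy, delivered */
        printf("%d\n", mcast_is_dup(&c, true,  0x010)); /* 1: retried duplicate, dropped */
        printf("%d\n", mcast_is_dup(&c, true,  0x011)); /* 0: new SN, delivered */
        return 0;
}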
 
@@ -288,7 +297,7 @@ void wil_tid_ampdu_rx_free(struct wil6210_priv *wil,
 /* ADDBA processing */
 static u16 wil_agg_size(struct wil6210_priv *wil, u16 req_agg_wsize)
 {
-       u16 max_agg_size = min_t(u16, WIL_MAX_AGG_WSIZE, WIL_MAX_AMPDU_SIZE /
+       u16 max_agg_size = min_t(u16, wil->max_agg_wsize, wil->max_ampdu_size /
                                 (mtu_max + WIL_MAX_MPDU_OVERHEAD));
 
        if (!req_agg_wsize)
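Raising the limits to a 64-frame BACK window and a 128 KiB A-MPDU (when the firmware advertises WMI_FW_CAPABILITY_BACK_WIN_SIZE_64) changes what wil_agg_size() can offer. As a purely illustrative calculation, assuming an MPDU of roughly 2048 bytes plus about 62 bytes of overhead: 128 * 1024 / 2110 = 62, so max_agg_size = min(64, 62) = 62, whereas the old 64 KiB / 32-frame limits give min(32, 31) = 31 for the same payload size.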
@@ -355,11 +364,11 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock)
        if (status == WLAN_STATUS_SUCCESS) {
                if (req_agg_wsize == 0) {
                        wil_dbg_misc(wil, "Suggest BACK wsize %d\n",
-                                    WIL_MAX_AGG_WSIZE);
-                       agg_wsize = WIL_MAX_AGG_WSIZE;
+                                    wil->max_agg_wsize);
+                       agg_wsize = wil->max_agg_wsize;
                } else {
                        agg_wsize = min_t(u16,
-                                         WIL_MAX_AGG_WSIZE, req_agg_wsize);
+                                         wil->max_agg_wsize, req_agg_wsize);
                }
        }
 
index 2098f3cc1cec598a20035f62e90efa3813f4d899..6a7943e487fb11ba0fc62966da01d5de66826d0a 100644
@@ -281,6 +281,12 @@ static int wil_vring_alloc_skb(struct wil6210_priv *wil, struct wil_ring *vring,
        skb_reserve(skb, headroom);
        skb_put(skb, sz);
 
+       /**
+        * Make sure that the network stack calculates checksum for packets
+        * which failed the HW checksum calculation
+        */
+       skb->ip_summed = CHECKSUM_NONE;
+
        pa = dma_map_single(dev, skb->data, skb->len, DMA_FROM_DEVICE);
        if (unlikely(dma_mapping_error(dev, pa))) {
                kfree_skb(skb);
@@ -569,6 +575,8 @@ static struct sk_buff *wil_vring_reap_rx(struct wil6210_priv *wil,
                 * mis-calculates TCP checksum - if it should be 0x0,
                 * it writes 0xffff in violation of RFC 1624
                 */
+               else
+                       stats->rx_csum_err++;
        }
 
        if (snaplen) {
@@ -678,6 +686,21 @@ static int wil_rx_crypto_check(struct wil6210_priv *wil, struct sk_buff *skb)
        return 0;
 }
 
+static int wil_rx_error_check(struct wil6210_priv *wil, struct sk_buff *skb,
+                             struct wil_net_stats *stats)
+{
+       struct vring_rx_desc *d = wil_skb_rxdesc(skb);
+
+       if ((d->dma.status & RX_DMA_STATUS_ERROR) &&
+           (d->dma.error & RX_DMA_ERROR_MIC)) {
+               stats->rx_mic_error++;
+               wil_dbg_txrx(wil, "MIC error, dropping packet\n");
+               return -EFAULT;
+       }
+
+       return 0;
+}
+
 static void wil_get_netif_rx_params(struct sk_buff *skb, int *cid,
                                    int *security)
 {
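Both DMA flavours now funnel HW-reported RX errors through a single txrx_ops.rx_error_check() hook, wired up in wil_netif_rx_any() in the next hunk before a frame is handed to the stack; the legacy implementation above screens only MIC failures, while the eDMA variant later in the diff keeps the fuller L2/L3/L4 status handling. A standalone model of dispatching through such an ops table, with illustrative types and names only:

#include <stdio.h>

struct pkt { int mic_error; };
struct stats { unsigned long rx_mic_error; };

/* Per-DMA-flavour error check, selected once at init time */
struct rx_ops {
        int (*rx_error_check)(struct pkt *p, struct stats *s);
};

static int legacy_rx_error_check(struct pkt *p, struct stats *s)
{
        if (p->mic_error) {
                s->rx_mic_error++;
                return -1;      /* caller drops the frame */
        }
        return 0;
}

int main(void)
{
        struct rx_ops ops = { .rx_error_check = legacy_rx_error_check };
        struct stats st = { 0 };
        struct pkt bad = { .mic_error = 1 };

        if (ops.rx_error_check(&bad, &st))
                printf("dropped, rx_mic_error=%lu\n", st.rx_mic_error);
        return 0;
}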
@@ -736,6 +759,12 @@ void wil_netif_rx_any(struct sk_buff *skb, struct net_device *ndev)
                goto stats;
        }
 
+       /* check errors reported by HW and update statistics */
+       if (unlikely(wil->txrx_ops.rx_error_check(wil, skb, stats))) {
+               dev_kfree_skb(skb);
+               return;
+       }
+
        if (wdev->iftype == NL80211_IFTYPE_AP && !vif->ap_isolate) {
                if (mcast) {
                        /* send multicast frames both to higher layers in
@@ -1672,6 +1701,11 @@ static int __wil_tx_vring_tso(struct wil6210_priv *wil, struct wil6210_vif *vif,
         */
        wmb();
 
+       if (wil->tx_latency)
+               *(ktime_t *)&skb->cb = ktime_get();
+       else
+               memset(skb->cb, 0, sizeof(ktime_t));
+
        wil_w(wil, vring->hwtail, vring->swhead);
        return 0;
 
@@ -1823,6 +1857,11 @@ static int __wil_tx_ring(struct wil6210_priv *wil, struct wil6210_vif *vif,
         */
        wmb();
 
+       if (wil->tx_latency)
+               *(ktime_t *)&skb->cb = ktime_get();
+       else
+               memset(skb->cb, 0, sizeof(ktime_t));
+
        wil_w(wil, ring->hwtail, ring->swhead);
 
        return 0;
@@ -2044,6 +2083,31 @@ netdev_tx_t wil_start_xmit(struct sk_buff *skb, struct net_device *ndev)
        return NET_XMIT_DROP;
 }
 
+void wil_tx_latency_calc(struct wil6210_priv *wil, struct sk_buff *skb,
+                        struct wil_sta_info *sta)
+{
+       int skb_time_us;
+       int bin;
+
+       if (!wil->tx_latency)
+               return;
+
+       if (ktime_to_ms(*(ktime_t *)&skb->cb) == 0)
+               return;
+
+       skb_time_us = ktime_us_delta(ktime_get(), *(ktime_t *)&skb->cb);
+       bin = skb_time_us / wil->tx_latency_res;
+       bin = min_t(int, bin, WIL_NUM_LATENCY_BINS - 1);
+
+       wil_dbg_txrx(wil, "skb time %dus => bin %d\n", skb_time_us, bin);
+       sta->tx_latency_bins[bin]++;
+       sta->stats.tx_latency_total_us += skb_time_us;
+       if (skb_time_us < sta->stats.tx_latency_min_us)
+               sta->stats.tx_latency_min_us = skb_time_us;
+       if (skb_time_us > sta->stats.tx_latency_max_us)
+               sta->stats.tx_latency_max_us = skb_time_us;
+}
+
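wil_tx_latency_calc() converts each completed skb's queue-to-completion time into one of WIL_NUM_LATENCY_BINS (200) histogram bins of tx_latency_res microseconds each, clamping outliers into the last bin, and tracks the per-station min, max and running total. A standalone worked example of the binning and the derived average; the resolution and the sample values are made up:

#include <stdio.h>
#include <stdint.h>

#define NUM_BINS 200    /* mirrors WIL_NUM_LATENCY_BINS */

struct lat_stats {
        uint64_t bins[NUM_BINS];
        uint32_t min_us, max_us;
        uint64_t total_us, packets;
};

static void record(struct lat_stats *s, unsigned int res_us, unsigned int skb_us)
{
        unsigned int bin = skb_us / res_us;

        if (bin > NUM_BINS - 1)
                bin = NUM_BINS - 1;             /* clamp outliers into the last bin */
        s->bins[bin]++;
        s->total_us += skb_us;
        s->packets++;
        if (skb_us < s->min_us)
                s->min_us = skb_us;
        if (skb_us > s->max_us)
                s->max_us = skb_us;
}

int main(void)
{
        struct lat_stats s = { .min_us = UINT32_MAX };  /* matches the U32_MAX reset */
        unsigned int samples[] = { 120, 950, 3500, 600000 };    /* usec, invented */
        unsigned int i;

        for (i = 0; i < 4; i++)
                record(&s, 1000, samples[i]);   /* 1000 us per bin, assumed resolution */

        /* 120 and 950 land in bin 0, 3500 in bin 3, 600000 clamps into bin 199 */
        printf("min=%u max=%u avg=%llu us\n",
               (unsigned int)s.min_us, (unsigned int)s.max_us,
               (unsigned long long)(s.total_us / s.packets));
        return 0;
}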
 /**
  * Clean up transmitted skb's from the Tx VRING
  *
@@ -2130,6 +2194,9 @@ int wil_tx_complete(struct wil6210_vif *vif, int ringid)
                                        if (stats) {
                                                stats->tx_packets++;
                                                stats->tx_bytes += skb->len;
+
+                                               wil_tx_latency_calc(wil, skb,
+                                                       &wil->sta[cid]);
                                        }
                                } else {
                                        ndev->stats.tx_errors++;
@@ -2180,7 +2247,7 @@ static inline void wil_tx_fini(struct wil6210_priv *wil) {}
 
 static void wil_get_reorder_params(struct wil6210_priv *wil,
                                   struct sk_buff *skb, int *tid, int *cid,
-                                  int *mid, u16 *seq, int *mcast)
+                                  int *mid, u16 *seq, int *mcast, int *retry)
 {
        struct vring_rx_desc *d = wil_skb_rxdesc(skb);
 
@@ -2189,6 +2256,7 @@ static void wil_get_reorder_params(struct wil6210_priv *wil,
        *mid = wil_rxdesc_mid(d);
        *seq = wil_rxdesc_seq(d);
        *mcast = wil_rxdesc_mcast(d);
+       *retry = wil_rxdesc_retry(d);
 }
 
 void wil_init_txrx_ops_legacy_dma(struct wil6210_priv *wil)
@@ -2211,6 +2279,7 @@ void wil_init_txrx_ops_legacy_dma(struct wil6210_priv *wil)
        wil->txrx_ops.get_netif_rx_params =
                wil_get_netif_rx_params;
        wil->txrx_ops.rx_crypto_check = wil_rx_crypto_check;
+       wil->txrx_ops.rx_error_check = wil_rx_error_check;
        wil->txrx_ops.is_rx_idle = wil_is_rx_idle;
        wil->txrx_ops.rx_fini = wil_rx_fini;
 }
index f361423628f54352780aa682e88709c5043d394d..9d83be4818399c8779ab3cafc9da4759d594b316 100644
@@ -500,6 +500,11 @@ static inline int wil_rxdesc_ext_subtype(struct vring_rx_desc *d)
        return WIL_GET_BITS(d->mac.d0, 28, 31);
 }
 
+static inline int wil_rxdesc_retry(struct vring_rx_desc *d)
+{
+       return WIL_GET_BITS(d->mac.d0, 31, 31);
+}
+
 static inline int wil_rxdesc_key_id(struct vring_rx_desc *d)
 {
        return WIL_GET_BITS(d->mac.d1, 4, 5);
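The new wil_rxdesc_retry() accessor reads bit 31 of descriptor word mac.d0 through WIL_GET_BITS(x, b0, b1), which extracts the inclusive bit range b0..b1. A quick standalone re-expression of that extraction; the sample values are examples, not real descriptor contents:

#include <stdio.h>
#include <stdint.h>

/* Same inclusive-range semantics as the driver's WIL_GET_BITS() helper */
static uint32_t get_bits(uint32_t x, int b0, int b1)
{
        return (x >> b0) & (uint32_t)((1ULL << (b1 - b0 + 1)) - 1);
}

int main(void)
{
        printf("%u\n", get_bits(0x80000000u, 31, 31));  /* 1: retry bit set */
        printf("%u\n", get_bits(0x7fffffffu, 31, 31));  /* 0: retry bit clear */
        printf("%u\n", get_bits(0x00000030u,  4,  5));  /* 3: a 2-bit field, e.g. key id */
        return 0;
}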
@@ -615,5 +620,7 @@ void wil_tid_ampdu_rx_free(struct wil6210_priv *wil,
                           struct wil_tid_ampdu_rx *r);
 void wil_tx_data_init(struct wil_ring_tx_data *txdata);
 void wil_init_txrx_ops_legacy_dma(struct wil6210_priv *wil);
+void wil_tx_latency_calc(struct wil6210_priv *wil, struct sk_buff *skb,
+                        struct wil_sta_info *sta);
 
 #endif /* WIL6210_TXRX_H */
index 95f38e65d969a6ae86dcaec6ccc94bdb21805dd7..bca61cb44c37542ca43ddc58f1d1b9a7fbfac103 100644
@@ -27,6 +27,8 @@
 #include "trace.h"
 
 #define WIL_EDMA_MAX_DATA_OFFSET (2)
+/* RX buffer size must be aligned to 4 bytes */
+#define WIL_EDMA_RX_BUF_LEN_DEFAULT (2048)
 
 static void wil_tx_desc_unmap_edma(struct device *dev,
                                   union wil_tx_desc *desc,
@@ -158,8 +160,7 @@ static int wil_ring_alloc_skb_edma(struct wil6210_priv *wil,
                                   struct wil_ring *ring, u32 i)
 {
        struct device *dev = wil_to_dev(wil);
-       unsigned int sz = wil->rx_buf_len + ETH_HLEN +
-               WIL_EDMA_MAX_DATA_OFFSET;
+       unsigned int sz = ALIGN(wil->rx_buf_len, 4);
        dma_addr_t pa;
        u16 buff_id;
        struct list_head *active = &wil->rx_buff_mgmt.active;
@@ -182,6 +183,12 @@ static int wil_ring_alloc_skb_edma(struct wil6210_priv *wil,
 
        skb_put(skb, sz);
 
+       /**
+        * Make sure that the network stack calculates checksum for packets
+        * which failed the HW checksum calculation
+        */
+       skb->ip_summed = CHECKSUM_NONE;
+
        pa = dma_map_single(dev, skb->data, skb->len, DMA_FROM_DEVICE);
        if (unlikely(dma_mapping_error(dev, pa))) {
                kfree_skb(skb);
@@ -503,7 +510,7 @@ static int wil_init_rx_desc_ring(struct wil6210_priv *wil, u16 desc_ring_size,
 static void wil_get_reorder_params_edma(struct wil6210_priv *wil,
                                        struct sk_buff *skb, int *tid,
                                        int *cid, int *mid, u16 *seq,
-                                       int *mcast)
+                                       int *mcast, int *retry)
 {
        struct wil_rx_status_extended *s = wil_skb_rxstatus(skb);
 
@@ -512,6 +519,7 @@ static void wil_get_reorder_params_edma(struct wil6210_priv *wil,
        *mid = wil_rx_status_get_mid(s);
        *seq = le16_to_cpu(wil_rx_status_get_seq(wil, s));
        *mcast = wil_rx_status_get_mcast(s);
+       *retry = wil_rx_status_get_retry(s);
 }
 
 static void wil_get_netif_rx_params_edma(struct sk_buff *skb, int *cid,
@@ -593,7 +601,7 @@ static bool wil_is_rx_idle_edma(struct wil6210_priv *wil)
 static void wil_rx_buf_len_init_edma(struct wil6210_priv *wil)
 {
        wil->rx_buf_len = rx_large_buf ?
-               WIL_MAX_ETH_MTU : TXRX_BUF_LEN_DEFAULT - WIL_MAX_MPDU_OVERHEAD;
+               WIL_MAX_ETH_MTU : WIL_EDMA_RX_BUF_LEN_DEFAULT;
 }
 
 static int wil_rx_init_edma(struct wil6210_priv *wil, u16 desc_ring_size)
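On the eDMA path the SKB size is now simply the RX buffer length rounded up to a 4-byte multiple, rather than rx_buf_len + ETH_HLEN + WIL_EDMA_MAX_DATA_OFFSET. With the new 2048-byte default, ALIGN(2048, 4) = 2048, i.e. no padding is added; for an odd configured length such as 2045 (hypothetical), ALIGN(2045, 4) = 2048, which satisfies the "must be aligned to 4 bytes" requirement noted above.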
@@ -626,8 +634,7 @@ static int wil_rx_init_edma(struct wil6210_priv *wil, u16 desc_ring_size)
 
        wil_rx_buf_len_init_edma(wil);
 
-       max_rx_pl_per_desc = wil->rx_buf_len + ETH_HLEN +
-               WIL_EDMA_MAX_DATA_OFFSET;
+       max_rx_pl_per_desc = ALIGN(wil->rx_buf_len, 4);
 
        /* Use debugfs dbg_num_rx_srings if set, reserve one sring for TX */
        if (wil->num_rx_status_rings > WIL6210_MAX_STATUS_RINGS - 1)
@@ -794,14 +801,15 @@ static int wil_check_bar(struct wil6210_priv *wil, void *msg, int cid,
        return -EAGAIN;
 }
 
-static int wil_rx_edma_check_errors(struct wil6210_priv *wil, void *msg,
-                                   struct wil_net_stats *stats,
-                                   struct sk_buff *skb)
+static int wil_rx_error_check_edma(struct wil6210_priv *wil,
+                                  struct sk_buff *skb,
+                                  struct wil_net_stats *stats)
 {
        int error;
        int l2_rx_status;
        int l3_rx_status;
        int l4_rx_status;
+       void *msg = wil_skb_rxstatus(skb);
 
        error = wil_rx_status_get_error(msg);
        if (!error) {
@@ -845,6 +853,8 @@ static int wil_rx_edma_check_errors(struct wil6210_priv *wil, void *msg,
         * mis-calculates TCP checksum - if it should be 0x0,
         * it writes 0xffff in violation of RFC 1624
         */
+       else
+               stats->rx_csum_err++;
 
        return 0;
 }
@@ -859,12 +869,10 @@ static struct sk_buff *wil_sring_reap_rx_edma(struct wil6210_priv *wil,
        struct sk_buff *skb;
        dma_addr_t pa;
        struct wil_ring_rx_data *rxdata = &sring->rx_data;
-       unsigned int sz = wil->rx_buf_len + ETH_HLEN +
-               WIL_EDMA_MAX_DATA_OFFSET;
+       unsigned int sz = ALIGN(wil->rx_buf_len, 4);
        struct wil_net_stats *stats = NULL;
        u16 dmalen;
        int cid;
-       int rc;
        bool eop, headstolen;
        int delta;
        u8 dr_bit;
@@ -936,13 +944,6 @@ static struct sk_buff *wil_sring_reap_rx_edma(struct wil6210_priv *wil,
                goto skipping;
        }
 
-       /* Check and treat errors reported by HW */
-       rc = wil_rx_edma_check_errors(wil, msg, stats, skb);
-       if (rc) {
-               rxdata->skipping = true;
-               goto skipping;
-       }
-
        if (unlikely(dmalen > sz)) {
                wil_err(wil, "Rx size too large: %d bytes!\n", dmalen);
                stats->rx_large_frame++;
@@ -1223,6 +1224,9 @@ int wil_tx_sring_handler(struct wil6210_priv *wil,
                                        if (stats) {
                                                stats->tx_packets++;
                                                stats->tx_bytes += skb->len;
+
+                                               wil_tx_latency_calc(wil, skb,
+                                                       &wil->sta[cid]);
                                        }
                                } else {
                                        ndev->stats.tx_errors++;
@@ -1473,6 +1477,11 @@ static int __wil_tx_ring_tso_edma(struct wil6210_priv *wil,
         */
        wmb();
 
+       if (wil->tx_latency)
+               *(ktime_t *)&skb->cb = ktime_get();
+       else
+               memset(skb->cb, 0, sizeof(ktime_t));
+
        wil_w(wil, ring->hwtail, ring->swhead);
 
        return 0;
@@ -1592,6 +1601,7 @@ void wil_init_txrx_ops_edma(struct wil6210_priv *wil)
        wil->txrx_ops.get_reorder_params = wil_get_reorder_params_edma;
        wil->txrx_ops.get_netif_rx_params = wil_get_netif_rx_params_edma;
        wil->txrx_ops.rx_crypto_check = wil_rx_crypto_check_edma;
+       wil->txrx_ops.rx_error_check = wil_rx_error_check_edma;
        wil->txrx_ops.is_rx_idle = wil_is_rx_idle_edma;
        wil->txrx_ops.rx_fini = wil_rx_fini_edma;
 }
index e86fc2dc0ce010915d84aa439da441eac44297d5..a7fe9292fda380015cbd8d9b0017f12d4e6f5fd9 100644
@@ -471,6 +471,12 @@ static inline __le16 wil_rx_status_get_seq(struct wil6210_priv *wil, void *msg)
        return ((struct wil_rx_status_extended *)msg)->ext.seq_num;
 }
 
+static inline u8 wil_rx_status_get_retry(void *msg)
+{
+       /* retry bit is missing in EDMA HW; return 1 to be on the safe side */
+       return 1;
+}
+
 static inline int wil_rx_status_get_mid(void *msg)
 {
        if (!(((struct wil_rx_status_compressed *)msg)->d0 &
index d963c76b679ed95efce68ddc2ffffd8d80da0ccd..17c294b1ead13b3bfdca6328481a0a1a7db1a5ae 100644
@@ -37,6 +37,7 @@ extern bool rx_align_2;
 extern bool rx_large_buf;
 extern bool debug_fw;
 extern bool disable_ap_sme;
+extern bool ftm_mode;
 
 struct wil6210_priv;
 struct wil6210_vif;
@@ -50,11 +51,17 @@ union wil_tx_desc;
 #define WIL_FW_NAME_SPARROW_PLUS "wil6210_sparrow_plus.fw"
 #define WIL_FW_NAME_FTM_SPARROW_PLUS "wil6210_sparrow_plus_ftm.fw"
 
+#define WIL_FW_NAME_TALYN "wil6436.fw"
+#define WIL_FW_NAME_FTM_TALYN "wil6436_ftm.fw"
+#define WIL_BRD_NAME_TALYN "wil6436.brd"
+
 #define WIL_BOARD_FILE_NAME "wil6210.brd" /* board & radio parameters */
 
 #define WIL_DEFAULT_BUS_REQUEST_KBPS 128000 /* ~1Gbps */
 #define WIL_MAX_BUS_REQUEST_KBPS 800000 /* ~6.1Gbps */
 
+#define WIL_NUM_LATENCY_BINS 200
+
 /* maximum number of virtual interfaces the driver supports
  * (including the main interface)
  */
@@ -85,6 +92,8 @@ static inline u32 WIL_GET_BITS(u32 x, int b0, int b1)
 #define WIL6210_NAPI_BUDGET    (16) /* arbitrary */
 #define WIL_MAX_AMPDU_SIZE     (64 * 1024) /* FW/HW limit */
 #define WIL_MAX_AGG_WSIZE      (32) /* FW/HW limit */
+#define WIL_MAX_AMPDU_SIZE_128 (128 * 1024) /* FW/HW limit */
+#define WIL_MAX_AGG_WSIZE_64   (64) /* FW/HW limit */
 #define WIL6210_MAX_STATUS_RINGS       (8)
 
 /* Hardware offload block adds the following:
@@ -293,6 +302,8 @@ struct RGF_ICR {
        #define BIT_DMA_ITR_RX_IDL_CNT_CTL_FOREVER              BIT(2)
        #define BIT_DMA_ITR_RX_IDL_CNT_CTL_CLR                  BIT(3)
        #define BIT_DMA_ITR_RX_IDL_CNT_CTL_REACHED_TRESH        BIT(4)
+#define RGF_DMA_MISC_CTL                               (0x881d6c)
+       #define BIT_OFUL34_RDY_VALID_BUG_FIX_EN                 BIT(7)
 
 #define RGF_DMA_PSEUDO_CAUSE           (0x881c68)
 #define RGF_DMA_PSEUDO_CAUSE_MASK_SW   (0x881c6c)
@@ -543,6 +554,31 @@ struct wil_status_ring {
        struct wil_ring_rx_data rx_data;
 };
 
+#define WIL_STA_TID_NUM (16)
+#define WIL_MCS_MAX (12) /* Maximum MCS supported */
+
+struct wil_net_stats {
+       unsigned long   rx_packets;
+       unsigned long   tx_packets;
+       unsigned long   rx_bytes;
+       unsigned long   tx_bytes;
+       unsigned long   tx_errors;
+       u32 tx_latency_min_us;
+       u32 tx_latency_max_us;
+       u64 tx_latency_total_us;
+       unsigned long   rx_dropped;
+       unsigned long   rx_non_data_frame;
+       unsigned long   rx_short_frame;
+       unsigned long   rx_large_frame;
+       unsigned long   rx_replay;
+       unsigned long   rx_mic_error;
+       unsigned long   rx_key_error; /* eDMA specific */
+       unsigned long   rx_amsdu_error; /* eDMA specific */
+       unsigned long   rx_csum_err;
+       u16 last_mcs_rx;
+       u64 rx_per_mcs[WIL_MCS_MAX + 1];
+};
+
 /**
  * struct tx_rx_ops - different TX/RX ops for legacy and enhanced
  * DMA flow
@@ -572,10 +608,12 @@ struct wil_txrx_ops {
                                 u16 agg_wsize, u16 timeout);
        void (*get_reorder_params)(struct wil6210_priv *wil,
                                   struct sk_buff *skb, int *tid, int *cid,
-                                  int *mid, u16 *seq, int *mcast);
+                                  int *mid, u16 *seq, int *mcast, int *retry);
        void (*get_netif_rx_params)(struct sk_buff *skb,
                                    int *cid, int *security);
        int (*rx_crypto_check)(struct wil6210_priv *wil, struct sk_buff *skb);
+       int (*rx_error_check)(struct wil6210_priv *wil, struct sk_buff *skb,
+                             struct wil_net_stats *stats);
        bool (*is_rx_idle)(struct wil6210_priv *wil);
        irqreturn_t (*irq_rx)(int irq, void *cookie);
 };
@@ -625,6 +663,8 @@ struct pci_dev;
  * @drop_dup: duplicate frames dropped for this reorder buffer
  * @drop_old: old frames dropped for this reorder buffer
  * @first_time: true when this buffer used 1-st time
+ * @mcast_last_seq: sequence number (SN) of last received multicast packet
+ * @drop_dup_mcast: duplicate multicast frames dropped for this reorder buffer
  */
 struct wil_tid_ampdu_rx {
        struct sk_buff **reorder_buf;
@@ -638,6 +678,8 @@ struct wil_tid_ampdu_rx {
        unsigned long long drop_dup;
        unsigned long long drop_old;
        bool first_time; /* is it 1-st time this buffer used? */
+       u16 mcast_last_seq; /* multicast dup detection */
+       unsigned long long drop_dup_mcast;
 };
 
 /**
@@ -672,27 +714,6 @@ enum wil_sta_status {
        wil_sta_connected = 2,
 };
 
-#define WIL_STA_TID_NUM (16)
-#define WIL_MCS_MAX (12) /* Maximum MCS supported */
-
-struct wil_net_stats {
-       unsigned long   rx_packets;
-       unsigned long   tx_packets;
-       unsigned long   rx_bytes;
-       unsigned long   tx_bytes;
-       unsigned long   tx_errors;
-       unsigned long   rx_dropped;
-       unsigned long   rx_non_data_frame;
-       unsigned long   rx_short_frame;
-       unsigned long   rx_large_frame;
-       unsigned long   rx_replay;
-       unsigned long   rx_mic_error; /* eDMA specific */
-       unsigned long   rx_key_error; /* eDMA specific */
-       unsigned long   rx_amsdu_error; /* eDMA specific */
-       u16 last_mcs_rx;
-       u64 rx_per_mcs[WIL_MCS_MAX + 1];
-};
-
 /**
  * struct wil_sta_info - data for peer
  *
@@ -706,6 +727,14 @@ struct wil_sta_info {
        u8 mid;
        enum wil_sta_status status;
        struct wil_net_stats stats;
+       /**
+        * 200 latency bins. 1st bin counts packets with latency
+        * of 0..tx_latency_res, last bin counts packets with latency
+        * of 199*tx_latency_res and above.
+        * tx_latency_res is configured from "tx_latency" debug-fs.
+        */
+       u64 *tx_latency_bins;
+       struct wmi_link_stats_basic fw_stats_basic;
        /* Rx BACK */
        struct wil_tid_ampdu_rx *tid_rx[WIL_STA_TID_NUM];
        spinlock_t tid_rx_lock; /* guarding tid_rx array */
@@ -820,6 +849,8 @@ struct wil6210_vif {
        struct mutex probe_client_mutex; /* protect @probe_client_pending */
        struct work_struct probe_client_worker;
        int net_queue_stopped; /* netif_tx_stop_all_queues invoked */
+       bool fw_stats_ready; /* per-cid statistics are ready inside sta_info */
+       u64 fw_stats_tsf; /* measurement timestamp */
 };
 
 /**
@@ -847,11 +878,18 @@ struct wil_rx_buff_mgmt {
        unsigned long free_list_empty_cnt; /* statistics */
 };
 
+struct wil_fw_stats_global {
+       bool ready;
+       u64 tsf; /* measurement timestamp */
+       struct wmi_link_stats_global stats;
+};
+
 struct wil6210_priv {
        struct pci_dev *pdev;
        u32 bar_size;
        struct wiphy *wiphy;
        struct net_device *main_ndev;
+       int n_msi;
        void __iomem *csr;
        DECLARE_BITMAP(status, wil_status_last);
        u8 fw_version[ETHTOOL_FWVERS_LEN];
@@ -937,6 +975,8 @@ struct wil6210_priv {
        u8 wakeup_trigger;
        struct wil_suspend_stats suspend_stats;
        struct wil_debugfs_data dbg_data;
+       bool tx_latency; /* collect TX latency measurements */
+       size_t tx_latency_res; /* bin resolution in usec */
 
        void *platform_handle;
        struct wil_platform_ops platform_ops;
@@ -977,6 +1017,11 @@ struct wil6210_priv {
        bool use_rx_hw_reordering;
        bool secured_boot;
        u8 boot_config;
+
+       struct wil_fw_stats_global fw_stats_global;
+
+       u32 max_agg_wsize;
+       u32 max_ampdu_size;
 };
 
 #define wil_to_wiphy(i) (i->wiphy)
@@ -1060,6 +1105,8 @@ static inline void wil_c(struct wil6210_priv *wil, u32 reg, u32 val)
        wil_w(wil, reg, wil_r(wil, reg) & ~val);
 }
 
+void wil_get_board_file(struct wil6210_priv *wil, char *buf, size_t len);
+
 #if defined(CONFIG_DYNAMIC_DEBUG)
 #define wil_hex_dump_txrx(prefix_str, prefix_type, rowsize,    \
                          groupsize, buf, len, ascii)           \
@@ -1176,13 +1223,14 @@ int wmi_new_sta(struct wil6210_vif *vif, const u8 *mac, u8 aid);
 int wmi_port_allocate(struct wil6210_priv *wil, u8 mid,
                      const u8 *mac, enum nl80211_iftype iftype);
 int wmi_port_delete(struct wil6210_priv *wil, u8 mid);
+int wmi_link_stats_cfg(struct wil6210_vif *vif, u32 type, u8 cid, u32 interval);
 int wil_addba_rx_request(struct wil6210_priv *wil, u8 mid,
                         u8 cidxtid, u8 dialog_token, __le16 ba_param_set,
                         __le16 ba_timeout, __le16 ba_seq_ctrl);
 int wil_addba_tx_request(struct wil6210_priv *wil, u8 ringid, u16 wsize);
 
 void wil6210_clear_irq(struct wil6210_priv *wil);
-int wil6210_init_irq(struct wil6210_priv *wil, int irq, bool use_msi);
+int wil6210_init_irq(struct wil6210_priv *wil, int irq);
 void wil6210_fini_irq(struct wil6210_priv *wil, int irq);
 void wil_mask_irq(struct wil6210_priv *wil);
 void wil_unmask_irq(struct wil6210_priv *wil);
@@ -1304,6 +1352,8 @@ int wmi_start_sched_scan(struct wil6210_priv *wil,
                         struct cfg80211_sched_scan_request *request);
 int wmi_stop_sched_scan(struct wil6210_priv *wil);
 int wmi_mgmt_tx(struct wil6210_vif *vif, const u8 *buf, size_t len);
+int wmi_mgmt_tx_ext(struct wil6210_vif *vif, const u8 *buf, size_t len,
+                   u8 channel, u16 duration_ms);
 
 int reverse_memcmp(const void *cs, const void *ct, size_t count);
 
index 177026e5323be420d176aa3aa97895fdf9242b26..bca090611477dcd49ccb5b1144dafa5d0247eeed 100644
@@ -29,6 +29,7 @@ enum wil_platform_event {
 
 enum wil_platform_features {
        WIL_PLATFORM_FEATURE_FW_EXT_CLK_CONTROL = 0,
+       WIL_PLATFORM_FEATURE_TRIPLE_MSI = 1,
        WIL_PLATFORM_FEATURE_MAX,
 };
 
index 71056c834fff339eb5239134061033dc6b5fa436..42c02a20ec97cafa5336c63ccae019c5735021ae 100644
@@ -464,6 +464,10 @@ static const char *cmdid2name(u16 cmdid)
                return "WMI_BCAST_DESC_RING_ADD_CMD";
        case WMI_CFG_DEF_RX_OFFLOAD_CMDID:
                return "WMI_CFG_DEF_RX_OFFLOAD_CMD";
+       case WMI_LINK_STATS_CMDID:
+               return "WMI_LINK_STATS_CMD";
+       case WMI_SW_TX_REQ_EXT_CMDID:
+               return "WMI_SW_TX_REQ_EXT_CMD";
        default:
                return "Untracked CMD";
        }
@@ -598,6 +602,10 @@ static const char *eventid2name(u16 eventid)
                return "WMI_RX_DESC_RING_CFG_DONE_EVENT";
        case WMI_CFG_DEF_RX_OFFLOAD_DONE_EVENTID:
                return "WMI_CFG_DEF_RX_OFFLOAD_DONE_EVENT";
+       case WMI_LINK_STATS_CONFIG_DONE_EVENTID:
+               return "WMI_LINK_STATS_CONFIG_DONE_EVENT";
+       case WMI_LINK_STATS_EVENTID:
+               return "WMI_LINK_STATS_EVENT";
        default:
                return "Untracked EVENT";
        }
@@ -1329,6 +1337,130 @@ wmi_evt_sched_scan_result(struct wil6210_vif *vif, int id, void *d, int len)
        cfg80211_sched_scan_results(wiphy, 0);
 }
 
+static void wil_link_stats_store_basic(struct wil6210_vif *vif,
+                                      struct wmi_link_stats_basic *basic)
+{
+       struct wil6210_priv *wil = vif_to_wil(vif);
+       u8 cid = basic->cid;
+       struct wil_sta_info *sta;
+
+       if (cid >= WIL6210_MAX_CID) {
+               wil_err(wil, "invalid cid %d\n", cid);
+               return;
+       }
+
+       sta = &wil->sta[cid];
+       sta->fw_stats_basic = *basic;
+}
+
+static void wil_link_stats_store_global(struct wil6210_vif *vif,
+                                       struct wmi_link_stats_global *global)
+{
+       struct wil6210_priv *wil = vif_to_wil(vif);
+
+       wil->fw_stats_global.stats = *global;
+}
+
+static void wmi_link_stats_parse(struct wil6210_vif *vif, u64 tsf,
+                                bool has_next, void *payload,
+                                size_t payload_size)
+{
+       struct wil6210_priv *wil = vif_to_wil(vif);
+       size_t hdr_size = sizeof(struct wmi_link_stats_record);
+       size_t stats_size, record_size, expected_size;
+       struct wmi_link_stats_record *hdr;
+
+       if (payload_size < hdr_size) {
+               wil_err(wil, "link stats wrong event size %zu\n", payload_size);
+               return;
+       }
+
+       while (payload_size >= hdr_size) {
+               hdr = payload;
+               stats_size = le16_to_cpu(hdr->record_size);
+               record_size = hdr_size + stats_size;
+
+               if (payload_size < record_size) {
+                       wil_err(wil, "link stats payload ended unexpectedly, size %zu < %zu\n",
+                               payload_size, record_size);
+                       return;
+               }
+
+               switch (hdr->record_type_id) {
+               case WMI_LINK_STATS_TYPE_BASIC:
+                       expected_size = sizeof(struct wmi_link_stats_basic);
+                       if (stats_size < expected_size) {
+                               wil_err(wil, "link stats invalid basic record size %zu < %zu\n",
+                                       stats_size, expected_size);
+                               return;
+                       }
+                       if (vif->fw_stats_ready) {
+                               /* clean old statistics */
+                               vif->fw_stats_tsf = 0;
+                               vif->fw_stats_ready = 0;
+                       }
+
+                       wil_link_stats_store_basic(vif, payload + hdr_size);
+
+                       if (!has_next) {
+                               vif->fw_stats_tsf = tsf;
+                               vif->fw_stats_ready = 1;
+                       }
+
+                       break;
+               case WMI_LINK_STATS_TYPE_GLOBAL:
+                       expected_size = sizeof(struct wmi_link_stats_global);
+                       if (stats_size < expected_size) {
+                               wil_err(wil, "link stats invalid global record size %zu < %zu\n",
+                                       stats_size, expected_size);
+                               return;
+                       }
+
+                       if (wil->fw_stats_global.ready) {
+                               /* clean old statistics */
+                               wil->fw_stats_global.tsf = 0;
+                               wil->fw_stats_global.ready = 0;
+                       }
+
+                       wil_link_stats_store_global(vif, payload + hdr_size);
+
+                       if (!has_next) {
+                               wil->fw_stats_global.tsf = tsf;
+                               wil->fw_stats_global.ready = 1;
+                       }
+
+                       break;
+               default:
+                       break;
+               }
+
+               /* skip to next record */
+               payload += record_size;
+               payload_size -= record_size;
+       }
+}
+
+static void
+wmi_evt_link_stats(struct wil6210_vif *vif, int id, void *d, int len)
+{
+       struct wil6210_priv *wil = vif_to_wil(vif);
+       struct wmi_link_stats_event *evt = d;
+       size_t payload_size;
+
+       if (len < offsetof(struct wmi_link_stats_event, payload)) {
+               wil_err(wil, "stats event way too short %d\n", len);
+               return;
+       }
+       payload_size = le16_to_cpu(evt->payload_size);
+       if (len < sizeof(struct wmi_link_stats_event) + payload_size) {
+               wil_err(wil, "stats event too short %d\n", len);
+               return;
+       }
+
+       wmi_link_stats_parse(vif, le64_to_cpu(evt->tsf), evt->has_next,
+                            evt->payload, payload_size);
+}
+
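wmi_link_stats_parse() walks a packed sequence of variable-length records, each a wmi_link_stats_record header whose record_size gives the number of payload bytes that follow, validating sizes before touching the payload. A standalone model of that walk over a fake byte buffer; the record layout here is simplified and the constants are invented, and the sketch assumes a little-endian host where the driver would use le16_to_cpu():

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Simplified record header (naturally packed: 1 + 1 + 2 bytes) */
struct rec_hdr {
        uint8_t  type;
        uint8_t  reserved;
        uint16_t size;          /* payload bytes that follow this header */
};

static void parse(const uint8_t *p, size_t len)
{
        while (len >= sizeof(struct rec_hdr)) {
                struct rec_hdr h;

                memcpy(&h, p, sizeof(h));
                if (len < sizeof(h) + h.size) {
                        printf("truncated record, stop\n");
                        return;
                }
                printf("record type %u, %u payload bytes\n",
                       (unsigned int)h.type, (unsigned int)h.size);
                p   += sizeof(h) + h.size;      /* skip to the next record */
                len -= sizeof(h) + h.size;
        }
}

int main(void)
{
        /* two records: type 0 with 4 payload bytes, type 1 with 2 payload bytes */
        uint8_t buf[] = { 0, 0, 4, 0,  1, 2, 3, 4,
                          1, 0, 2, 0,  5, 6 };

        parse(buf, sizeof(buf));
        return 0;
}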
 /**
  * Some events are ignored for purpose; and need not be interpreted as
  * "unhandled events"
@@ -1359,6 +1491,7 @@ static const struct {
        {WMI_RING_EN_EVENTID,           wmi_evt_ring_en},
        {WMI_DATA_PORT_OPEN_EVENTID,            wmi_evt_ignore},
        {WMI_SCHED_SCAN_RESULT_EVENTID,         wmi_evt_sched_scan_result},
+       {WMI_LINK_STATS_EVENTID,                wmi_evt_link_stats},
 };
 
 /*
@@ -2983,6 +3116,60 @@ int wmi_mgmt_tx(struct wil6210_vif *vif, const u8 *buf, size_t len)
        return rc;
 }
 
+int wmi_mgmt_tx_ext(struct wil6210_vif *vif, const u8 *buf, size_t len,
+                   u8 channel, u16 duration_ms)
+{
+       size_t total;
+       struct wil6210_priv *wil = vif_to_wil(vif);
+       struct ieee80211_mgmt *mgmt_frame = (void *)buf;
+       struct wmi_sw_tx_req_ext_cmd *cmd;
+       struct {
+               struct wmi_cmd_hdr wmi;
+               struct wmi_sw_tx_complete_event evt;
+       } __packed evt = {
+               .evt = {.status = WMI_FW_STATUS_FAILURE},
+       };
+       int rc;
+
+       wil_dbg_wmi(wil, "mgmt_tx_ext mid %d channel %d duration %d\n",
+                   vif->mid, channel, duration_ms);
+       wil_hex_dump_wmi("mgmt_tx_ext frame ", DUMP_PREFIX_OFFSET, 16, 1, buf,
+                        len, true);
+
+       if (len < sizeof(struct ieee80211_hdr_3addr)) {
+               wil_err(wil, "short frame. len %zu\n", len);
+               return -EINVAL;
+       }
+
+       total = sizeof(*cmd) + len;
+       if (total < len) {
+               wil_err(wil, "mgmt_tx_ext invalid len %zu\n", len);
+               return -EINVAL;
+       }
+
+       cmd = kzalloc(total, GFP_KERNEL);
+       if (!cmd)
+               return -ENOMEM;
+
+       memcpy(cmd->dst_mac, mgmt_frame->da, WMI_MAC_LEN);
+       cmd->len = cpu_to_le16(len);
+       memcpy(cmd->payload, buf, len);
+       cmd->channel = channel - 1;
+       cmd->duration_ms = cpu_to_le16(duration_ms);
+
+       rc = wmi_call(wil, WMI_SW_TX_REQ_EXT_CMDID, vif->mid, cmd, total,
+                     WMI_SW_TX_COMPLETE_EVENTID, &evt, sizeof(evt), 2000);
+       if (!rc && evt.evt.status != WMI_FW_STATUS_SUCCESS) {
+               wil_err(wil, "mgmt_tx_ext failed with status %d\n",
+                       evt.evt.status);
+               rc = -EINVAL;
+       }
+
+       kfree(cmd);
+
+       return rc;
+}
+
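wmi_mgmt_tx_ext() extends the plain management-TX command with an explicit channel and dwell duration, which is what the new WIPHY_FLAG_OFFCHAN_TX support relies on. A hedged sketch of a call site follows; the dispatch logic, the frame buffer and the channel/duration values are invented, and the real caller is presumably wired into the cfg80211 management-TX path, which is not part of this excerpt.

/* Sketch only: transmit a prebuilt management frame, off-channel when possible */
static int example_offchan_tx(struct wil6210_vif *vif, const u8 *buf, size_t len)
{
        struct wil6210_priv *wil = vif_to_wil(vif);

        if (!test_bit(WMI_FW_CAPABILITY_TX_REQ_EXT, wil->fw_capabilities))
                return wmi_mgmt_tx(vif, buf, len);      /* stay on the current channel */

        /* channel 2 and a 100 ms dwell are arbitrary example values */
        return wmi_mgmt_tx_ext(vif, buf, len, 2, 100);
}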
 int wil_wmi_tx_sring_cfg(struct wil6210_priv *wil, int ring_id)
 {
        int rc;
@@ -3242,3 +3429,37 @@ int wil_wmi_bcast_desc_ring_add(struct wil6210_vif *vif, int ring_id)
 
        return 0;
 }
+
+int wmi_link_stats_cfg(struct wil6210_vif *vif, u32 type, u8 cid, u32 interval)
+{
+       struct wil6210_priv *wil = vif_to_wil(vif);
+       struct wmi_link_stats_cmd cmd = {
+               .record_type_mask = cpu_to_le32(type),
+               .cid = cid,
+               .action = WMI_LINK_STATS_SNAPSHOT,
+               .interval_msec = cpu_to_le32(interval),
+       };
+       struct {
+               struct wmi_cmd_hdr wmi;
+               struct wmi_link_stats_config_done_event evt;
+       } __packed reply = {
+               .evt = {.status = WMI_FW_STATUS_FAILURE},
+       };
+       int rc;
+
+       rc = wmi_call(wil, WMI_LINK_STATS_CMDID, vif->mid, &cmd, sizeof(cmd),
+                     WMI_LINK_STATS_CONFIG_DONE_EVENTID, &reply,
+                     sizeof(reply), WIL_WMI_CALL_GENERAL_TO_MS);
+       if (rc) {
+               wil_err(wil, "WMI_LINK_STATS_CMDID failed, rc %d\n", rc);
+               return rc;
+       }
+
+       if (reply.evt.status != WMI_FW_STATUS_SUCCESS) {
+               wil_err(wil, "Link statistics config failed, status %d\n",
+                       reply.evt.status);
+               return -EINVAL;
+       }
+
+       return 0;
+}
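wmi_link_stats_cfg() sends WMI_LINK_STATS_CMDID with a record-type mask, a CID and an interval, waits for WMI_LINK_STATS_CONFIG_DONE_EVENTID and checks the firmware status. A hedged usage sketch; the mask value and CID below are placeholders, and the actual caller (likely a debugfs hook) is outside this excerpt.

/* Sketch only: request a one-shot statistics snapshot for CID 0. The
 * record_type_mask encoding follows the WMI header and the 0x1 below is a
 * placeholder; interval 0 is an arbitrary example, since the helper always
 * issues the WMI_LINK_STATS_SNAPSHOT action regardless.
 */
static int example_fw_stats_snapshot(struct wil6210_vif *vif)
{
        return wmi_link_stats_cfg(vif, 0x1 /* record_type_mask */, 0 /* cid */,
                                  0 /* interval_msec */);
}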
index abf6f05c4801b9d80b2cfd1e974b9e3dcaa605c0..139acb2caf92fcac911d5212f7ac1e26c698dd07 100644
  * must always be kept equal to (WMI_RF_RX2TX_LENGTH+1)
  */
 #define WMI_RF_RX2TX_CONF_LENGTH       (4)
+/* Qos configuration */
+#define WMI_QOS_NUM_OF_PRIORITY                (4)
+#define WMI_QOS_MIN_DEFAULT_WEIGHT     (10)
+#define WMI_QOS_VRING_SLOT_MIN_MS      (2)
+#define WMI_QOS_VRING_SLOT_MAX_MS      (10)
+/* (WMI_QOS_MIN_DEFAULT_WEIGHT * WMI_QOS_VRING_SLOT_MAX_MS /
+ * WMI_QOS_VRING_SLOT_MIN_MS)
+ */
+#define WMI_QOS_MAX_WEIGHT             50
+#define WMI_QOS_SET_VIF_PRIORITY       (0xFF)
+#define WMI_QOS_DEFAULT_PRIORITY       (WMI_QOS_NUM_OF_PRIORITY)
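For reference, the commented formula checks out: WMI_QOS_MIN_DEFAULT_WEIGHT * WMI_QOS_VRING_SLOT_MAX_MS / WMI_QOS_VRING_SLOT_MIN_MS = 10 * 10 / 2 = 50, which is exactly the WMI_QOS_MAX_WEIGHT literal above.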
 
 /* Mailbox interface
  * used for commands and events
@@ -86,7 +97,12 @@ enum wmi_fw_capability {
        WMI_FW_CAPABILITY_PNO                           = 15,
        WMI_FW_CAPABILITY_REF_CLOCK_CONTROL             = 18,
        WMI_FW_CAPABILITY_AP_SME_OFFLOAD_NONE           = 19,
+       WMI_FW_CAPABILITY_MULTI_VIFS                    = 20,
+       WMI_FW_CAPABILITY_FT_ROAMING                    = 21,
+       WMI_FW_CAPABILITY_BACK_WIN_SIZE_64              = 22,
        WMI_FW_CAPABILITY_AMSDU                         = 23,
+       WMI_FW_CAPABILITY_RAW_MODE                      = 24,
+       WMI_FW_CAPABILITY_TX_REQ_EXT                    = 25,
        WMI_FW_CAPABILITY_MAX,
 };
 
@@ -110,6 +126,9 @@ enum wmi_command_id {
        WMI_SET_PROBED_SSID_CMDID                       = 0x0A,
        /* deprecated */
        WMI_SET_LISTEN_INT_CMDID                        = 0x0B,
+       WMI_FT_AUTH_CMDID                               = 0x0C,
+       WMI_FT_REASSOC_CMDID                            = 0x0D,
+       WMI_UPDATE_FT_IES_CMDID                         = 0x0E,
        WMI_BCON_CTRL_CMDID                             = 0x0F,
        WMI_ADD_CIPHER_KEY_CMDID                        = 0x16,
        WMI_DELETE_CIPHER_KEY_CMDID                     = 0x17,
@@ -118,6 +137,12 @@ enum wmi_command_id {
        WMI_SET_WSC_STATUS_CMDID                        = 0x41,
        WMI_PXMT_RANGE_CFG_CMDID                        = 0x42,
        WMI_PXMT_SNR2_RANGE_CFG_CMDID                   = 0x43,
+       WMI_RADAR_GENERAL_CONFIG_CMDID                  = 0x100,
+       WMI_RADAR_CONFIG_SELECT_CMDID                   = 0x101,
+       WMI_RADAR_PARAMS_CONFIG_CMDID                   = 0x102,
+       WMI_RADAR_SET_MODE_CMDID                        = 0x103,
+       WMI_RADAR_CONTROL_CMDID                         = 0x104,
+       WMI_RADAR_PCI_CONTROL_CMDID                     = 0x105,
        WMI_MEM_READ_CMDID                              = 0x800,
        WMI_MEM_WR_CMDID                                = 0x801,
        WMI_ECHO_CMDID                                  = 0x803,
@@ -158,6 +183,10 @@ enum wmi_command_id {
        WMI_SET_PCP_CHANNEL_CMDID                       = 0x829,
        WMI_GET_PCP_CHANNEL_CMDID                       = 0x82A,
        WMI_SW_TX_REQ_CMDID                             = 0x82B,
+       /* Event is shared between WMI_SW_TX_REQ_CMDID and
+        * WMI_SW_TX_REQ_EXT_CMDID
+        */
+       WMI_SW_TX_REQ_EXT_CMDID                         = 0x82C,
        WMI_MLME_PUSH_CMDID                             = 0x835,
        WMI_BEAMFORMING_MGMT_CMDID                      = 0x836,
        WMI_BF_TXSS_MGMT_CMDID                          = 0x837,
@@ -207,7 +236,12 @@ enum wmi_command_id {
        WMI_GET_PCP_FACTOR_CMDID                        = 0x91B,
        /* Power Save Configuration Commands */
        WMI_PS_DEV_PROFILE_CFG_CMDID                    = 0x91C,
+       WMI_RS_ENABLE_CMDID                             = 0x91E,
+       WMI_RS_CFG_EX_CMDID                             = 0x91F,
+       WMI_GET_DETAILED_RS_RES_EX_CMDID                = 0x920,
+       /* deprecated */
        WMI_RS_CFG_CMDID                                = 0x921,
+       /* deprecated */
        WMI_GET_DETAILED_RS_RES_CMDID                   = 0x922,
        WMI_AOA_MEAS_CMDID                              = 0x923,
        WMI_BRP_SET_ANT_LIMIT_CMDID                     = 0x924,
@@ -236,7 +270,9 @@ enum wmi_command_id {
        WMI_PRIO_TX_SECTORS_ORDER_CMDID                 = 0x9A5,
        WMI_PRIO_TX_SECTORS_NUMBER_CMDID                = 0x9A6,
        WMI_PRIO_TX_SECTORS_SET_DEFAULT_CFG_CMDID       = 0x9A7,
+       /* deprecated */
        WMI_BF_CONTROL_CMDID                            = 0x9AA,
+       WMI_BF_CONTROL_EX_CMDID                         = 0x9AB,
        WMI_TX_STATUS_RING_ADD_CMDID                    = 0x9C0,
        WMI_RX_STATUS_RING_ADD_CMDID                    = 0x9C1,
        WMI_TX_DESC_RING_ADD_CMDID                      = 0x9C2,
@@ -252,6 +288,11 @@ enum wmi_command_id {
        WMI_GET_CCA_INDICATIONS_CMDID                   = 0xA07,
        WMI_SET_CCA_INDICATIONS_BI_AVG_NUM_CMDID        = 0xA08,
        WMI_INTERNAL_FW_IOCTL_CMDID                     = 0xA0B,
+       WMI_LINK_STATS_CMDID                            = 0xA0C,
+       WMI_SET_GRANT_MCS_CMDID                         = 0xA0E,
+       WMI_SET_AP_SLOT_SIZE_CMDID                      = 0xA0F,
+       WMI_SET_VRING_PRIORITY_WEIGHT_CMDID             = 0xA10,
+       WMI_SET_VRING_PRIORITY_CMDID                    = 0xA11,
        WMI_SET_MAC_ADDRESS_CMDID                       = 0xF003,
        WMI_ABORT_SCAN_CMDID                            = 0xF007,
        WMI_SET_PROMISCUOUS_MODE_CMDID                  = 0xF041,
@@ -450,6 +491,30 @@ struct wmi_start_sched_scan_cmd {
        struct wmi_sched_scan_plan scan_plans[WMI_MAX_PLANS_NUM];
 } __packed;
 
+/* WMI_FT_AUTH_CMDID */
+struct wmi_ft_auth_cmd {
+       u8 bssid[WMI_MAC_LEN];
+       /* enum wmi_channel */
+       u8 channel;
+       /* enum wmi_channel */
+       u8 edmg_channel;
+       u8 reserved[4];
+} __packed;
+
+/* WMI_FT_REASSOC_CMDID */
+struct wmi_ft_reassoc_cmd {
+       u8 bssid[WMI_MAC_LEN];
+       u8 reserved[2];
+} __packed;
+
+/* WMI_UPDATE_FT_IES_CMDID */
+struct wmi_update_ft_ies_cmd {
+       /* Length of the FT IEs */
+       __le16 ie_len;
+       u8 reserved[2];
+       u8 ie_info[0];
+} __packed;
+
 /* WMI_SET_PROBED_SSID_CMDID */
 #define MAX_PROBED_SSID_INDEX  (3)
 
@@ -506,6 +571,109 @@ struct wmi_pxmt_snr2_range_cfg_cmd {
        s8 snr2range_arr[2];
 } __packed;
 
+/* WMI_RADAR_GENERAL_CONFIG_CMDID */
+struct wmi_radar_general_config_cmd {
+       /* Number of pulses (CIRs) in FW FIFO to initiate pulses transfer
+        * from FW to Host
+        */
+       __le32 fifo_watermark;
+       /* In unit of us, in the range [100, 1000000] */
+       __le32 t_burst;
+       /* Valid in the range [1, 32768], 0xFFFF means infinite */
+       __le32 n_bursts;
+       /* In units of 330 MHz clk, in the range [4, 2000]*330 */
+       __le32 t_pulse;
+       /* In the range of [1,4096] */
+       __le16 n_pulses;
+       /* Number of taps after cTap per CIR */
+       __le16 n_samples;
+       /* Offset from the main tap (0 = zero-distance). In the range of [0,
+        * 255]
+        */
+       u8 first_sample_offset;
+       /* Number of Pulses to average, 1, 2, 4, 8 */
+       u8 pulses_to_avg;
+       /* Number of adjacent taps to average, 1, 2, 4, 8 */
+       u8 samples_to_avg;
+       /* The index to config general params */
+       u8 general_index;
+       u8 reserved[4];
+} __packed;
+
+/* WMI_RADAR_CONFIG_SELECT_CMDID */
+struct wmi_radar_config_select_cmd {
+       /* Select the general params index to use */
+       u8 general_index;
+       u8 reserved[3];
+       /* 0 means don't update burst_active_vector */
+       __le32 burst_active_vector;
+       /* 0 means don't update pulse_active_vector */
+       __le32 pulse_active_vector;
+} __packed;
+
+/* WMI_RADAR_PARAMS_CONFIG_CMDID */
+struct wmi_radar_params_config_cmd {
+       /* The burst index selected to config */
+       u8 burst_index;
+       /* 0-not active, 1-active */
+       u8 burst_en;
+       /* The pulse index selected to config */
+       u8 pulse_index;
+       /* 0-not active, 1-active */
+       u8 pulse_en;
+       /* TX RF to use on current pulse */
+       u8 tx_rfc_idx;
+       u8 tx_sector;
+       /* Offset from calibrated value (expected to be 0). Value is row in
+        * Gain-LUT, not dB
+        */
+       s8 tx_rf_gain_comp;
+       /* expected to be 0 */
+       s8 tx_bb_gain_comp;
+       /* RX RF to use on current pulse */
+       u8 rx_rfc_idx;
+       u8 rx_sector;
+       /* Offset from calibrated value (expected to be 0). Value is row in
+        * Gain-LUT, not dB
+        */
+       s8 rx_rf_gain_comp;
+       /* Value in dB (expected to be 0) */
+       s8 rx_bb_gain_comp;
+       /* Offset from calibrated value (expected to be 0) */
+       s8 rx_timing_offset;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_RADAR_SET_MODE_CMDID */
+struct wmi_radar_set_mode_cmd {
+       /* 0-disable/1-enable */
+       u8 enable;
+       /* enum wmi_channel */
+       u8 channel;
+       /* In the range of [0,7], 0xff means use default */
+       u8 tx_rfc_idx;
+       /* In the range of [0,7], 0xff means use default */
+       u8 rx_rfc_idx;
+} __packed;
+
+/* WMI_RADAR_CONTROL_CMDID */
+struct wmi_radar_control_cmd {
+       /* 0-stop/1-start */
+       u8 start;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_RADAR_PCI_CONTROL_CMDID */
+struct wmi_radar_pci_control_cmd {
+       /* pcie host buffer start address */
+       __le64 base_addr;
+       /* pcie host control block address */
+       __le64 control_block_addr;
+       /* pcie host buffer size */
+       __le32 buffer_size;
+       __le32 reserved;
+} __packed;
+
 /* WMI_RF_MGMT_CMDID */
 enum wmi_rf_mgmt_type {
        WMI_RF_MGMT_W_DISABLE   = 0x00,
@@ -643,12 +811,18 @@ struct wmi_pcp_start_cmd {
        u8 pcp_max_assoc_sta;
        u8 hidden_ssid;
        u8 is_go;
-       u8 reserved0[5];
+       /* enum wmi_channel WMI_CHANNEL_9..WMI_CHANNEL_12 */
+       u8 edmg_channel;
+       u8 raw_mode;
+       u8 reserved[3];
        /* A-BFT length override if non-0 */
        u8 abft_len;
        /* enum wmi_ap_sme_offload_mode_e */
        u8 ap_sme_offload_mode;
        u8 network_type;
+       /* enum wmi_channel WMI_CHANNEL_1..WMI_CHANNEL_6; for EDMG this is
+        * the primary channel number
+        */
        u8 channel;
        u8 disable_sec_offload;
        u8 disable_sec;
@@ -661,6 +835,17 @@ struct wmi_sw_tx_req_cmd {
        u8 payload[0];
 } __packed;
 
+/* WMI_SW_TX_REQ_EXT_CMDID */
+struct wmi_sw_tx_req_ext_cmd {
+       u8 dst_mac[WMI_MAC_LEN];
+       __le16 len;
+       __le16 duration_ms;
+       /* Channel to use, 0xFF for currently active channel */
+       u8 channel;
+       u8 reserved[5];
+       u8 payload[0];
+} __packed;
+
 /* WMI_VRING_SWITCH_TIMING_CONFIG_CMDID */
 struct wmi_vring_switch_timing_config_cmd {
        /* Set vring timing configuration:
@@ -687,6 +872,7 @@ struct wmi_vring_cfg_schd {
 enum wmi_vring_cfg_encap_trans_type {
        WMI_VRING_ENC_TYPE_802_3        = 0x00,
        WMI_VRING_ENC_TYPE_NATIVE_WIFI  = 0x01,
+       WMI_VRING_ENC_TYPE_NONE         = 0x02,
 };
 
 enum wmi_vring_cfg_ds_cfg {
@@ -744,7 +930,11 @@ struct wmi_vring_cfg {
        u8 cid;
        /* Used when cidxtid = CIDXTID_EXTENDED_CID_TID */
        u8 tid;
-       u8 reserved[2];
+       /* Update the vring's priority for QoS purposes. Set to
+        * WMI_QOS_DEFAULT_PRIORITY to use MID's QoS priority
+        */
+       u8 qos_priority;
+       u8 reserved;
 } __packed;
 
 enum wmi_vring_cfg_cmd_action {
@@ -775,20 +965,6 @@ struct wmi_bcast_vring_cfg_cmd {
        struct wmi_bcast_vring_cfg vring_cfg;
 } __packed;
 
-/* WMI_LO_POWER_CALIB_FROM_OTP_CMDID */
-struct wmi_lo_power_calib_from_otp_cmd {
-       /* index to read from OTP. zero based */
-       u8 index;
-       u8 reserved[3];
-} __packed;
-
-/* WMI_LO_POWER_CALIB_FROM_OTP_EVENTID */
-struct wmi_lo_power_calib_from_otp_event {
-       /* wmi_fw_status */
-       u8 status;
-       u8 reserved[3];
-} __packed;
-
 struct wmi_edma_ring_cfg {
        __le64 ring_mem_base;
        /* size in number of items */
@@ -861,6 +1037,20 @@ struct wmi_bcast_desc_ring_add_cmd {
        u8 reserved[4];
 } __packed;
 
+/* WMI_LO_POWER_CALIB_FROM_OTP_CMDID */
+struct wmi_lo_power_calib_from_otp_cmd {
+       /* index to read from OTP. zero based */
+       u8 index;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_LO_POWER_CALIB_FROM_OTP_EVENTID */
+struct wmi_lo_power_calib_from_otp_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
 /* WMI_RING_BA_EN_CMDID */
 struct wmi_ring_ba_en_cmd {
        u8 ring_id;
@@ -1094,8 +1284,8 @@ struct wmi_echo_cmd {
 } __packed;
 
 /* WMI_DEEP_ECHO_CMDID
- * Check FW and ucode are alive
- * Returned event: WMI_ECHO_RSP_EVENTID
+ * Check FW and uCode are alive
+ * Returned event: WMI_DEEP_ECHO_RSP_EVENTID
  */
 struct wmi_deep_echo_cmd {
        __le32 value;
@@ -1419,6 +1609,10 @@ struct wmi_fixed_scheduling_config_complete_event {
        u8 reserved[3];
 } __packed;
 
+/* This value exists for backwards compatibility only.
+ * Do not use it in new commands.
+ * Use dynamic arrays where possible.
+ */
 #define WMI_NUM_MCS    (13)
 
 /* WMI_FIXED_SCHEDULING_CONFIG_CMDID */
@@ -1466,6 +1660,52 @@ struct wmi_set_multi_directed_omnis_config_event {
        u8 reserved[3];
 } __packed;
 
+/* WMI_RADAR_GENERAL_CONFIG_EVENTID */
+struct wmi_radar_general_config_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_RADAR_CONFIG_SELECT_EVENTID */
+struct wmi_radar_config_select_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+       /* In unit of bytes */
+       __le32 fifo_size;
+       /* In unit of bytes */
+       __le32 pulse_size;
+} __packed;
+
+/* WMI_RADAR_PARAMS_CONFIG_EVENTID */
+struct wmi_radar_params_config_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_RADAR_SET_MODE_EVENTID */
+struct wmi_radar_set_mode_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_RADAR_CONTROL_EVENTID */
+struct wmi_radar_control_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_RADAR_PCI_CONTROL_EVENTID */
+struct wmi_radar_pci_control_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
 /* WMI_SET_LONG_RANGE_CONFIG_CMDID */
 struct wmi_set_long_range_config_cmd {
        __le32 reserved;
@@ -1478,12 +1718,12 @@ struct wmi_set_long_range_config_complete_event {
        u8 reserved[3];
 } __packed;
 
-/* payload max size is 236 bytes: max event buffer size (256) - WMI headers
+/* payload max size is 1024 bytes: max event buffer size (1044) - WMI headers
  * (16) - prev struct field size (4)
  */
-#define WMI_MAX_IOCTL_PAYLOAD_SIZE             (236)
-#define WMI_MAX_IOCTL_REPLY_PAYLOAD_SIZE       (236)
-#define WMI_MAX_INTERNAL_EVENT_PAYLOAD_SIZE    (236)
+#define WMI_MAX_IOCTL_PAYLOAD_SIZE             (1024)
+#define WMI_MAX_IOCTL_REPLY_PAYLOAD_SIZE       (1024)
+#define WMI_MAX_INTERNAL_EVENT_PAYLOAD_SIZE    (1024)
 
 enum wmi_internal_fw_ioctl_code {
        WMI_INTERNAL_FW_CODE_NONE       = 0x0,
@@ -1523,7 +1763,37 @@ struct wmi_internal_fw_event_event {
        __le32 payload[0];
 } __packed;
 
-/* WMI_BF_CONTROL_CMDID */
+/* WMI_SET_VRING_PRIORITY_WEIGHT_CMDID */
+struct wmi_set_vring_priority_weight_cmd {
+       /* Array of weights. Valid values are
+        * WMI_QOS_MIN_DEFAULT_WEIGHT...WMI_QOS_MAX_WEIGHT. Weight #0 is
+        * hard-coded WMI_QOS_MIN_WEIGHT. This array provides the weights
+        * #1..#3
+        */
+       u8 weight[3];
+       u8 reserved;
+} __packed;
+
+/* WMI_SET_VRING_PRIORITY_CMDID */
+struct wmi_vring_priority {
+       u8 vring_idx;
+       /* Weight index. Valid value is 0-3 */
+       u8 priority;
+       u8 reserved[2];
+} __packed;
+
+/* WMI_SET_VRING_PRIORITY_CMDID */
+struct wmi_set_vring_priority_cmd {
+       /* number of entries in vring_priority. Set to
+        * WMI_QOS_SET_VIF_PRIORITY to update the VIF's priority, and there
+        * will be only one entry in vring_priority
+        */
+       u8 num_of_vrings;
+       u8 reserved[3];
+       struct wmi_vring_priority vring_priority[0];
+} __packed;
+
+/* WMI_BF_CONTROL_CMDID - deprecated */
 struct wmi_bf_control_cmd {
        /* wmi_bf_triggers */
        __le32 triggers;
@@ -1565,6 +1835,95 @@ struct wmi_bf_control_cmd {
        u8 reserved2[2];
 } __packed;
 
+/* BF configuration for each MCS */
+struct wmi_bf_control_ex_mcs {
+       /* Long term throughput threshold [Mbps] */
+       u8 long_term_mbps_th_tbl;
+       u8 reserved;
+       /* Long term timeout threshold table [msec] */
+       __le16 long_term_trig_timeout_per_mcs;
+} __packed;
+
+/* WMI_BF_CONTROL_EX_CMDID */
+struct wmi_bf_control_ex_cmd {
+       /* wmi_bf_triggers */
+       __le32 triggers;
+       /* enum wmi_edmg_tx_mode */
+       u8 tx_mode;
+       /* DISABLED = 0, ENABLED = 1 , DRY_RUN = 2 */
+       u8 txss_mode;
+       /* DISABLED = 0, ENABLED = 1, DRY_RUN = 2 */
+       u8 brp_mode;
+       /* Max cts threshold (corresponds to
+        * WMI_BF_TRIGGER_MAX_CTS_FAILURE_IN_TXOP)
+        */
+       u8 bf_trigger_max_cts_failure_thr;
+       /* Max cts threshold in dense (corresponds to
+        * WMI_BF_TRIGGER_MAX_CTS_FAILURE_IN_TXOP)
+        */
+       u8 bf_trigger_max_cts_failure_dense_thr;
+       /* Max b-ack threshold (corresponds to
+        * WMI_BF_TRIGGER_MAX_BACK_FAILURE)
+        */
+       u8 bf_trigger_max_back_failure_thr;
+       /* Max b-ack threshold in dense (corresponds to
+        * WMI_BF_TRIGGER_MAX_BACK_FAILURE)
+        */
+       u8 bf_trigger_max_back_failure_dense_thr;
+       u8 reserved0;
+       /* Wrong sectors threshold */
+       __le32 wrong_sector_bis_thr;
+       /* BOOL to enable/disable long term trigger */
+       u8 long_term_enable;
+       /* 1 = Update long term thresholds from the long_term_mbps_th_tbl and
+        * long_term_trig_timeout_per_mcs arrays, 0 = Ignore
+        */
+       u8 long_term_update_thr;
+       u8 each_mcs_cfg_size;
+       u8 reserved1;
+       /* Configuration for each MCS */
+       struct wmi_bf_control_ex_mcs each_mcs_cfg[0];
+} __packed;
+
+/* WMI_LINK_STATS_CMD */
+enum wmi_link_stats_action {
+       WMI_LINK_STATS_SNAPSHOT         = 0x00,
+       WMI_LINK_STATS_PERIODIC         = 0x01,
+       WMI_LINK_STATS_STOP_PERIODIC    = 0x02,
+};
+
+/* WMI_LINK_STATS_EVENT record identifiers */
+enum wmi_link_stats_record_type {
+       WMI_LINK_STATS_TYPE_BASIC       = 0x01,
+       WMI_LINK_STATS_TYPE_GLOBAL      = 0x02,
+};
+
+/* WMI_LINK_STATS_CMDID */
+struct wmi_link_stats_cmd {
+       /* bitmask of required record types
+        * (wmi_link_stats_record_type_e)
+        */
+       __le32 record_type_mask;
+       /* 0xff for all cids */
+       u8 cid;
+       /* wmi_link_stats_action_e */
+       u8 action;
+       u8 reserved[6];
+       /* range = 100 - 10000 */
+       __le32 interval_msec;
+} __packed;
+
+/* WMI_SET_GRANT_MCS_CMDID */
+struct wmi_set_grant_mcs_cmd {
+       u8 mcs;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_SET_AP_SLOT_SIZE_CMDID */
+struct wmi_set_ap_slot_size_cmd {
+       __le32 slot_size;
+} __packed;
+
 /* WMI Events
  * List of Events (target to host)
  */
@@ -1577,10 +1936,19 @@ enum wmi_event_id {
        WMI_SCHED_SCAN_RESULT_EVENTID                   = 0x1007,
        WMI_SCAN_COMPLETE_EVENTID                       = 0x100A,
        WMI_REPORT_STATISTICS_EVENTID                   = 0x100B,
+       WMI_FT_AUTH_STATUS_EVENTID                      = 0x100C,
+       WMI_FT_REASSOC_STATUS_EVENTID                   = 0x100D,
+       WMI_RADAR_GENERAL_CONFIG_EVENTID                = 0x1100,
+       WMI_RADAR_CONFIG_SELECT_EVENTID                 = 0x1101,
+       WMI_RADAR_PARAMS_CONFIG_EVENTID                 = 0x1102,
+       WMI_RADAR_SET_MODE_EVENTID                      = 0x1103,
+       WMI_RADAR_CONTROL_EVENTID                       = 0x1104,
+       WMI_RADAR_PCI_CONTROL_EVENTID                   = 0x1105,
        WMI_RD_MEM_RSP_EVENTID                          = 0x1800,
        WMI_FW_READY_EVENTID                            = 0x1801,
        WMI_EXIT_FAST_MEM_ACC_MODE_EVENTID              = 0x200,
        WMI_ECHO_RSP_EVENTID                            = 0x1803,
+       WMI_DEEP_ECHO_RSP_EVENTID                       = 0x1804,
        /* deprecated */
        WMI_FS_TUNE_DONE_EVENTID                        = 0x180A,
        /* deprecated */
@@ -1606,6 +1974,9 @@ enum wmi_event_id {
        WMI_DELBA_EVENTID                               = 0x1826,
        WMI_GET_SSID_EVENTID                            = 0x1828,
        WMI_GET_PCP_CHANNEL_EVENTID                     = 0x182A,
+       /* Event is shared between WMI_SW_TX_REQ_CMDID and
+        * WMI_SW_TX_REQ_EXT_CMDID
+        */
        WMI_SW_TX_COMPLETE_EVENTID                      = 0x182B,
        WMI_BEAMFORMING_MGMT_DONE_EVENTID               = 0x1836,
        WMI_BF_TXSS_MGMT_DONE_EVENTID                   = 0x1837,
@@ -1653,7 +2024,12 @@ enum wmi_event_id {
        WMI_PCP_FACTOR_EVENTID                          = 0x191A,
        /* Power Save Configuration Events */
        WMI_PS_DEV_PROFILE_CFG_EVENTID                  = 0x191C,
+       WMI_RS_ENABLE_EVENTID                           = 0x191E,
+       WMI_RS_CFG_EX_EVENTID                           = 0x191F,
+       WMI_GET_DETAILED_RS_RES_EX_EVENTID              = 0x1920,
+       /* deprecated */
        WMI_RS_CFG_DONE_EVENTID                         = 0x1921,
+       /* deprecated */
        WMI_GET_DETAILED_RS_RES_EVENTID                 = 0x1922,
        WMI_AOA_MEAS_EVENTID                            = 0x1923,
        WMI_BRP_SET_ANT_LIMIT_EVENTID                   = 0x1924,
@@ -1681,7 +2057,9 @@ enum wmi_event_id {
        WMI_PRIO_TX_SECTORS_ORDER_EVENTID               = 0x19A5,
        WMI_PRIO_TX_SECTORS_NUMBER_EVENTID              = 0x19A6,
        WMI_PRIO_TX_SECTORS_SET_DEFAULT_CFG_EVENTID     = 0x19A7,
+       /* deprecated */
        WMI_BF_CONTROL_EVENTID                          = 0x19AA,
+       WMI_BF_CONTROL_EX_EVENTID                       = 0x19AB,
        WMI_TX_STATUS_RING_CFG_DONE_EVENTID             = 0x19C0,
        WMI_RX_STATUS_RING_CFG_DONE_EVENTID             = 0x19C1,
        WMI_TX_DESC_RING_CFG_DONE_EVENTID               = 0x19C2,
@@ -1697,6 +2075,12 @@ enum wmi_event_id {
        WMI_SET_CCA_INDICATIONS_BI_AVG_NUM_EVENTID      = 0x1A08,
        WMI_INTERNAL_FW_EVENT_EVENTID                   = 0x1A0A,
        WMI_INTERNAL_FW_IOCTL_EVENTID                   = 0x1A0B,
+       WMI_LINK_STATS_CONFIG_DONE_EVENTID              = 0x1A0C,
+       WMI_LINK_STATS_EVENTID                          = 0x1A0D,
+       WMI_SET_GRANT_MCS_EVENTID                       = 0x1A0E,
+       WMI_SET_AP_SLOT_SIZE_EVENTID                    = 0x1A0F,
+       WMI_SET_VRING_PRIORITY_WEIGHT_EVENTID           = 0x1A10,
+       WMI_SET_VRING_PRIORITY_EVENTID                  = 0x1A11,
        WMI_SET_CHANNEL_EVENTID                         = 0x9000,
        WMI_ASSOC_REQ_EVENTID                           = 0x9001,
        WMI_EAPOL_RX_EVENTID                            = 0x9002,
@@ -1961,6 +2345,33 @@ struct wmi_scan_complete_event {
        __le32 status;
 } __packed;
 
+/* WMI_FT_AUTH_STATUS_EVENTID */
+struct wmi_ft_auth_status_event {
+       /* enum wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+       u8 mac_addr[WMI_MAC_LEN];
+       __le16 ie_len;
+       u8 ie_info[0];
+} __packed;
+
+/* WMI_FT_REASSOC_STATUS_EVENTID */
+struct wmi_ft_reassoc_status_event {
+       /* enum wmi_fw_status */
+       u8 status;
+       /* association id received from new AP */
+       u8 aid;
+       /* enum wmi_channel */
+       u8 channel;
+       /* enum wmi_channel */
+       u8 edmg_channel;
+       u8 mac_addr[WMI_MAC_LEN];
+       __le16 beacon_ie_len;
+       __le16 reassoc_req_ie_len;
+       __le16 reassoc_resp_ie_len;
+       u8 ie_info[0];
+} __packed;
+
 /* wmi_rx_mgmt_info */
 struct wmi_rx_mgmt_info {
        u8 mcs;
@@ -2317,6 +2728,11 @@ struct wmi_echo_rsp_event {
        __le32 echoed_value;
 } __packed;
 
+/* WMI_DEEP_ECHO_RSP_EVENTID */
+struct wmi_deep_echo_rsp_event {
+       __le32 echoed_value;
+} __packed;
+
 /* WMI_RF_PWR_ON_DELAY_RSP_EVENTID */
 struct wmi_rf_pwr_on_delay_rsp_event {
        /* wmi_fw_status */
@@ -2455,6 +2871,81 @@ struct wmi_rs_cfg {
        __le32 mcs_en_vec;
 } __packed;
 
+enum wmi_edmg_tx_mode {
+       WMI_TX_MODE_DMG                 = 0x0,
+       WMI_TX_MODE_EDMG_CB1            = 0x1,
+       WMI_TX_MODE_EDMG_CB2            = 0x2,
+       WMI_TX_MODE_EDMG_CB1_LONG_LDPC  = 0x3,
+       WMI_TX_MODE_EDMG_CB2_LONG_LDPC  = 0x4,
+       WMI_TX_MODE_MAX,
+};
+
+/* Rate search parameters common configuration */
+struct wmi_rs_cfg_ex_common {
+       /* enum wmi_edmg_tx_mode */
+       u8 mode;
+       /* stop threshold [0-100] */
+       u8 stop_th;
+       /* MCS1 stop threshold [0-100] */
+       u8 mcs1_fail_th;
+       u8 max_back_failure_th;
+       /* Debug feature for disabling internal RS trigger (which is
+        * currently triggered by BF Done)
+        */
+       u8 dbg_disable_internal_trigger;
+       u8 reserved[3];
+       __le32 back_failure_mask;
+} __packed;
+
+/* Rate search parameters configuration per MCS */
+struct wmi_rs_cfg_ex_mcs {
+       /* The maximal allowed PER for each MCS.
+        * An MCS will be considered as failed if the PER during RS is higher
+        */
+       u8 per_threshold;
+       /* Number of MPDUs for each MCS.
+        * This is the minimal statistic required to make an educated
+        * decision
+        */
+       u8 min_frame_cnt;
+       u8 reserved[2];
+} __packed;
+
+/* WMI_RS_CFG_EX_CMDID */
+struct wmi_rs_cfg_ex_cmd {
+       /* Configuration for all MCSs */
+       struct wmi_rs_cfg_ex_common common_cfg;
+       u8 each_mcs_cfg_size;
+       u8 reserved[3];
+       /* Configuration for each MCS */
+       struct wmi_rs_cfg_ex_mcs each_mcs_cfg[0];
+} __packed;
+
+/* WMI_RS_CFG_EX_EVENTID */
+struct wmi_rs_cfg_ex_event {
+       /* enum wmi_edmg_tx_mode */
+       u8 mode;
+       /* enum wmi_fw_status */
+       u8 status;
+       u8 reserved[2];
+} __packed;
+
+/* WMI_RS_ENABLE_CMDID */
+struct wmi_rs_enable_cmd {
+       u8 cid;
+       /* enable or disable rate search */
+       u8 rs_enable;
+       u8 reserved[2];
+       __le32 mcs_en_vec;
+} __packed;
+
+/* WMI_RS_ENABLE_EVENTID */
+struct wmi_rs_enable_event {
+       /* enum wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
 /* Slot types */
 enum wmi_sched_scheme_slot_type {
        WMI_SCHED_SLOT_SP               = 0x0,
@@ -2547,7 +3038,7 @@ struct wmi_scheduling_scheme_event {
        u8 reserved[1];
 } __packed;
 
-/* WMI_RS_CFG_CMDID */
+/* WMI_RS_CFG_CMDID - deprecated */
 struct wmi_rs_cfg_cmd {
        /* connection id */
        u8 cid;
@@ -2557,7 +3048,7 @@ struct wmi_rs_cfg_cmd {
        struct wmi_rs_cfg rs_cfg;
 } __packed;
 
-/* WMI_RS_CFG_DONE_EVENTID */
+/* WMI_RS_CFG_DONE_EVENTID - deprecated */
 struct wmi_rs_cfg_done_event {
        u8 cid;
        /* enum wmi_fw_status */
@@ -2565,7 +3056,7 @@ struct wmi_rs_cfg_done_event {
        u8 reserved[2];
 } __packed;
 
-/* WMI_GET_DETAILED_RS_RES_CMDID */
+/* WMI_GET_DETAILED_RS_RES_CMDID - deprecated */
 struct wmi_get_detailed_rs_res_cmd {
        /* connection id */
        u8 cid;
@@ -2590,7 +3081,7 @@ struct wmi_rs_results {
        u8 mcs;
 } __packed;
 
-/* WMI_GET_DETAILED_RS_RES_EVENTID */
+/* WMI_GET_DETAILED_RS_RES_EVENTID - deprecated */
 struct wmi_get_detailed_rs_res_event {
        u8 cid;
        /* enum wmi_rs_results_status */
@@ -2600,6 +3091,45 @@ struct wmi_get_detailed_rs_res_event {
        u8 reserved[3];
 } __packed;
 
+/* WMI_GET_DETAILED_RS_RES_EX_CMDID */
+struct wmi_get_detailed_rs_res_ex_cmd {
+       u8 cid;
+       u8 reserved[3];
+} __packed;
+
+/* Rate search results */
+struct wmi_rs_results_ex_common {
+       /* RS timestamp */
+       __le32 tsf;
+       /* RS selected MCS */
+       u8 mcs;
+       /* enum wmi_edmg_tx_mode */
+       u8 mode;
+       u8 reserved[2];
+} __packed;
+
+/* Rate search results */
+struct wmi_rs_results_ex_mcs {
+       /* number of sent MPDUs */
+       u8 num_of_tx_pkt;
+       /* number of non-acked MPDUs */
+       u8 num_of_non_acked_pkt;
+       u8 reserved[2];
+} __packed;
+
+/* WMI_GET_DETAILED_RS_RES_EX_EVENTID */
+struct wmi_get_detailed_rs_res_ex_event {
+       u8 cid;
+       /* enum wmi_rs_results_status */
+       u8 status;
+       u8 reserved0[2];
+       struct wmi_rs_results_ex_common common_rs_results;
+       u8 each_mcs_results_size;
+       u8 reserved1[3];
+       /* Results for each MCS */
+       struct wmi_rs_results_ex_mcs each_mcs_results[0];
+} __packed;
+
 /* BRP antenna limit mode */
 enum wmi_brp_ant_limit_mode {
        /* Disable BRP force antenna limit */
@@ -3350,13 +3880,20 @@ struct wmi_get_assoc_list_res_event {
        u8 reserved[3];
 } __packed;
 
-/* WMI_BF_CONTROL_EVENTID */
+/* WMI_BF_CONTROL_EVENTID - deprecated */
 struct wmi_bf_control_event {
        /* wmi_fw_status */
        u8 status;
        u8 reserved[3];
 } __packed;
 
+/* WMI_BF_CONTROL_EX_EVENTID */
+struct wmi_bf_control_ex_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
 /* WMI_COMMAND_NOT_SUPPORTED_EVENTID */
 struct wmi_command_not_supported_event {
        /* device id */
@@ -3426,4 +3963,96 @@ struct wmi_internal_fw_set_channel_event {
        u8 reserved[3];
 } __packed;
 
+/* WMI_LINK_STATS_CONFIG_DONE_EVENTID */
+struct wmi_link_stats_config_done_event {
+       /* wmi_fw_status_e */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_LINK_STATS_EVENTID */
+struct wmi_link_stats_event {
+       __le64 tsf;
+       __le16 payload_size;
+       u8 has_next;
+       u8 reserved[5];
+       /* a stream of wmi_link_stats_record_s */
+       u8 payload[0];
+} __packed;
+
+/* WMI_LINK_STATS_EVENT */
+struct wmi_link_stats_record {
+       /* wmi_link_stats_record_type_e */
+       u8 record_type_id;
+       u8 reserved;
+       __le16 record_size;
+       u8 record[0];
+} __packed;
+
+/* WMI_LINK_STATS_TYPE_BASIC */
+struct wmi_link_stats_basic {
+       u8 cid;
+       s8 rssi;
+       u8 sqi;
+       u8 bf_mcs;
+       u8 per_average;
+       u8 selected_rfc;
+       u8 rx_effective_ant_num;
+       u8 my_rx_sector;
+       u8 my_tx_sector;
+       u8 other_rx_sector;
+       u8 other_tx_sector;
+       u8 reserved[7];
+       /* 1/4 dB units */
+       __le16 snr;
+       __le32 tx_tpt;
+       __le32 tx_goodput;
+       __le32 rx_goodput;
+       __le32 bf_count;
+       __le32 rx_bcast_frames;
+} __packed;
+
+/* WMI_LINK_STATS_TYPE_GLOBAL */
+struct wmi_link_stats_global {
+       /* all ack-able frames */
+       __le32 rx_frames;
+       /* all ack-able frames */
+       __le32 tx_frames;
+       __le32 rx_ba_frames;
+       __le32 tx_ba_frames;
+       __le32 tx_beacons;
+       __le32 rx_mic_errors;
+       __le32 rx_crc_errors;
+       __le32 tx_fail_no_ack;
+       u8 reserved[8];
+} __packed;
+
+/* WMI_SET_GRANT_MCS_EVENTID */
+struct wmi_set_grant_mcs_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_SET_AP_SLOT_SIZE_EVENTID */
+struct wmi_set_ap_slot_size_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_SET_VRING_PRIORITY_WEIGHT_EVENTID */
+struct wmi_set_vring_priority_weight_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
+/* WMI_SET_VRING_PRIORITY_EVENTID */
+struct wmi_set_vring_priority_event {
+       /* wmi_fw_status */
+       u8 status;
+       u8 reserved[3];
+} __packed;
+
 #endif /* __WILOCITY_WMI_H__ */
index cb987c2ecc6bf5295684c11da9604c03ef4d2e32..87131f6632929963506d0258d1bc20c97c09e5b8 100644
@@ -131,7 +131,7 @@ static int b43_register_led(struct b43_wldev *dev, struct b43_led *led,
        led->wl = dev->wl;
        led->index = led_index;
        led->activelow = activelow;
-       strncpy(led->name, name, sizeof(led->name));
+       strlcpy(led->name, name, sizeof(led->name));
        atomic_set(&led->state, 0);
 
        led->led_dev.name = led->name;
index fd4565389c77df59056d239e2bc778164a622496..bc922118b6ac641aa1caa800489f77472542fe60 100644
@@ -101,7 +101,7 @@ static int b43legacy_register_led(struct b43legacy_wldev *dev,
        led->dev = dev;
        led->index = led_index;
        led->activelow = activelow;
-       strncpy(led->name, name, sizeof(led->name));
+       strlcpy(led->name, name, sizeof(led->name));
 
        led->led_dev.name = led->name;
        led->led_dev.default_trigger = default_trigger;
index 24c4e18e7d806059be23eb7c13bb805bc5f9d0b7..5444e6213d4592e6ac35dd1889c82e4a8d62db97 100644
@@ -6926,15 +6926,15 @@ struct brcmf_cfg80211_info *brcmf_cfg80211_attach(struct brcmf_pub *drvr,
        cfg->d11inf.io_type = (u8)io_type;
        brcmu_d11_attach(&cfg->d11inf);
 
-       err = brcmf_setup_wiphy(wiphy, ifp);
-       if (err < 0)
-               goto priv_out;
-
        /* regulatory notifer below needs access to cfg so
         * assign it now.
         */
        drvr->config = cfg;
 
+       err = brcmf_setup_wiphy(wiphy, ifp);
+       if (err < 0)
+               goto priv_out;
+
        brcmf_dbg(INFO, "Registering custom regulatory\n");
        wiphy->reg_notifier = brcmf_cfg80211_reg_notifier;
        wiphy->regulatory_flags |= REGULATORY_CUSTOM_REG;
index c1b4441fb8b27a6cc817b1bbba5ce7cc4759117e..a2960032be81252ad97ba82b6209dabcaa75a67c 100644
@@ -95,7 +95,7 @@ il3945_ucode_rx_stats_read(struct file *file, char __user *user_buf,
        pos +=
            scnprintf(buf + pos, bufsz - pos,
                      "%-32s     current"
-                     "acumulative       delta         max\n",
+                     "accumulative      delta         max\n",
                      "Statistics_Rx - OFDM:");
        pos +=
            scnprintf(buf + pos, bufsz - pos,
index 850611ad347a532612d72d4d300b67379bdb1218..b6c5f17dca30a57395af65a4a828e95a58f75e3b 100644
@@ -11,6 +11,7 @@ config MT76x2_COMMON
 
 config MT76x0U
        tristate "MediaTek MT76x0U (USB) support"
+       select MT76_CORE
        depends on MAC80211
        depends on USB
        help
index 1ecd018f12b858551b8e681fa207c4ffef9924a2..36da1e6bc21ab5832dc5432996307f099ff99c8a 100644
@@ -81,15 +81,15 @@ mt76x0_efuse_read(struct mt76x0_dev *dev, u16 addr, u8 *data,
        return 0;
 }
 
+#define MT_MAP_READS   DIV_ROUND_UP(MT_EFUSE_USAGE_MAP_SIZE, 16)
 static int
 mt76x0_efuse_physical_size_check(struct mt76x0_dev *dev)
 {
-       const int map_reads = DIV_ROUND_UP(MT_EFUSE_USAGE_MAP_SIZE, 16);
-       u8 data[map_reads * 16];
+       u8 data[MT_MAP_READS * 16];
        int ret, i;
        u32 start = 0, end = 0, cnt_free;
 
-       for (i = 0; i < map_reads; i++) {
+       for (i = 0; i < MT_MAP_READS; i++) {
                ret = mt76x0_efuse_read(dev, MT_EE_USAGE_MAP_START + i * 16,
                                         data + i * 16, MT_EE_PHYSICAL_READ);
                if (ret)
@@ -241,7 +241,7 @@ mt76x0_set_rf_freq_off(struct mt76x0_dev *dev, u8 *eeprom)
 static void
 mt76x0_set_lna_gain(struct mt76x0_dev *dev, u8 *eeprom)
 {
-       s8 gain;
+       u8 gain;
 
        dev->ee->lna_gain_2ghz = eeprom[MT_EE_LNA_GAIN_2GHZ];
        dev->ee->lna_gain_5ghz[0] = eeprom[MT_EE_LNA_GAIN_5GHZ_0];
index 95f28492a843c4282bd8d08102be456daeee66d1..91a84be36d3bf4d3262aafc983a1a91f6bd8b75d 100644
@@ -424,8 +424,6 @@ void mt76x0_mac_set_ampdu_factor(struct mt76x0_dev *dev)
        u8 min_factor = 3;
        int i;
 
-       return;
-
        rcu_read_lock();
        for (i = 0; i < ARRAY_SIZE(dev->wcid); i++) {
                wcid = rcu_dereference(dev->wcid[i]);
index df3facc8e5a41cdce7f00418dc7ec5bfbb2a346f..6597f7cb341103ca3d4e46e622dd35f5e4320cd5 100644
@@ -2818,23 +2818,11 @@ static void btc8723b2ant_action_sco(struct btc_coexist *btcoexist)
 
        /* sw mechanism */
        if (BTC_WIFI_BW_HT40 == wifi_bw) {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, true,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, true,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, true, true,
+                                         false, false);
        } else {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, true,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, true,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, false, true,
+                                         false, false);
        }
 }
 
@@ -2949,23 +2937,11 @@ static void btc8723b2ant_action_a2dp(struct btc_coexist *btcoexist)
        /* sw mechanism */
        btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
        if (BTC_WIFI_BW_HT40 == wifi_bw) {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, false,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, false,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, true, false,
+                                         false, false);
        } else {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, false,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, false,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, false, false,
+                                         false, false);
        }
 }
 
@@ -3008,23 +2984,11 @@ static void btc8723b2ant_action_a2dp_pan_hs(struct btc_coexist *btcoexist)
        /* sw mechanism */
        btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
        if (BTC_WIFI_BW_HT40 == wifi_bw) {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, false,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, false,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, true, false,
+                                         false, false);
        } else {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, false,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, false,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, false, false,
+                                         false, false);
        }
 }
 
@@ -3071,23 +3035,11 @@ static void btc8723b2ant_action_pan_edr(struct btc_coexist *btcoexist)
        /* sw mechanism */
        btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
        if (BTC_WIFI_BW_HT40 == wifi_bw) {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, false,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, false,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, true, false,
+                                         false, false);
        } else {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, false,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, false,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, false, false,
+                                         false, false);
        }
 }
 
@@ -3121,23 +3073,11 @@ static void btc8723b2ant_action_pan_hs(struct btc_coexist *btcoexist)
 
        btcoexist->btc_get(btcoexist, BTC_GET_U4_WIFI_BW, &wifi_bw);
        if (BTC_WIFI_BW_HT40 == wifi_bw) {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, false,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, false,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, true, false,
+                                         false, false);
        } else {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, false,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, false,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, false, false,
+                                         false, false);
        }
 }
 
@@ -3189,23 +3129,11 @@ static void btc8723b2ant_action_pan_edr_a2dp(struct btc_coexist *btcoexist)
 
        /* sw mechanism */
        if (BTC_WIFI_BW_HT40 == wifi_bw) {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, false,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, false,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, true, false,
+                                         false, false);
        } else {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, false,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, false,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, false, false,
+                                         false, false);
        }
 }
 
@@ -3264,23 +3192,11 @@ static void btc8723b2ant_action_pan_edr_hid(struct btc_coexist *btcoexist)
 
        /* sw mechanism */
        if (BTC_WIFI_BW_HT40 == wifi_bw) {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, true,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, true,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, true, true,
+                                         false, false);
        } else {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, true,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, true,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, false, true,
+                                         false, false);
        }
 }
 
@@ -3336,23 +3252,11 @@ static void btc8723b2ant_action_hid_a2dp_pan_edr(struct btc_coexist *btcoexist)
 
        /* sw mechanism */
        if (BTC_WIFI_BW_HT40 == wifi_bw) {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, true,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, true,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, true, true,
+                                         false, false);
        } else {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, true,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, true,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, false, true,
+                                         false, false);
        }
 }
 
@@ -3436,23 +3340,11 @@ static void btc8723b2ant_action_hid_a2dp(struct btc_coexist *btcoexist)
 
        /* sw mechanism */
        if (BTC_WIFI_BW_HT40 == wifi_bw) {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, true,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, true, true,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, true, true,
+                                         false, false);
        } else {
-               if ((wifi_rssi_state == BTC_RSSI_STATE_HIGH) ||
-                   (wifi_rssi_state == BTC_RSSI_STATE_STAY_HIGH)) {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, true,
-                                                 false, false);
-               } else {
-                       btc8723b2ant_sw_mechanism(btcoexist, false, true,
-                                                 false, false);
-               }
+               btc8723b2ant_sw_mechanism(btcoexist, false, true,
+                                         false, false);
        }
 }
 
index 27e6baf12e90aebd7ed7a713373bdf58e7a9fdb6..01edf960ff3cc72f5c4d383acd915525f1bbe333 100644
@@ -57,7 +57,6 @@ int rsi_prepare_mgmt_desc(struct rsi_common *common, struct sk_buff *skb)
        struct ieee80211_vif *vif;
        struct rsi_mgmt_desc *mgmt_desc;
        struct skb_info *tx_params;
-       struct ieee80211_bss_conf *bss = NULL;
        struct rsi_xtended_desc *xtend_desc = NULL;
        u8 header_size;
        u32 dword_align_bytes = 0;
@@ -91,7 +90,6 @@ int rsi_prepare_mgmt_desc(struct rsi_common *common, struct sk_buff *skb)
 
        tx_params->internal_hdr_size = header_size;
        memset(&skb->data[0], 0, header_size);
-       bss = &vif->bss_conf;
        wh = (struct ieee80211_hdr *)&skb->data[header_size];
 
        mgmt_desc = (struct rsi_mgmt_desc *)skb->data;
@@ -148,7 +146,6 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
        struct ieee80211_hdr *wh = NULL;
        struct ieee80211_tx_info *info;
        struct skb_info *tx_params;
-       struct ieee80211_bss_conf *bss;
        struct rsi_data_desc *data_desc;
        struct rsi_xtended_desc *xtend_desc;
        u8 ieee80211_size = MIN_802_11_HDR_LEN;
@@ -159,7 +156,6 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
 
        info = IEEE80211_SKB_CB(skb);
        vif = info->control.vif;
-       bss = &vif->bss_conf;
        tx_params = (struct skb_info *)info->driver_data;
 
        header_size = FRAME_DESC_SZ + sizeof(struct rsi_xtended_desc);
@@ -288,7 +284,6 @@ int rsi_send_data_pkt(struct rsi_common *common, struct sk_buff *skb)
        struct ieee80211_tx_info *info;
        struct skb_info *tx_params;
        struct ieee80211_bss_conf *bss;
-       struct ieee80211_hdr *wh;
        int status = -EINVAL;
        u8 header_size;
 
@@ -304,7 +299,6 @@ int rsi_send_data_pkt(struct rsi_common *common, struct sk_buff *skb)
        bss = &vif->bss_conf;
        tx_params = (struct skb_info *)info->driver_data;
        header_size = tx_params->internal_hdr_size;
-       wh = (struct ieee80211_hdr *)&skb->data[header_size];
 
        if (((vif->type == NL80211_IFTYPE_STATION) ||
             (vif->type == NL80211_IFTYPE_P2P_CLIENT)) &&
@@ -747,13 +741,11 @@ static int ping_pong_write(struct rsi_hw *adapter, u8 cmd, u8 *addr, u32 size)
 static int auto_fw_upgrade(struct rsi_hw *adapter, u8 *flash_content,
                           u32 content_size)
 {
-       u8 cmd, *temp_flash_content;
+       u8 cmd;
        u32 temp_content_size, num_flash, index;
        u32 flash_start_address;
        int status;
 
-       temp_flash_content = flash_content;
-
        if (content_size > MAX_FLASH_FILE_SIZE) {
                rsi_dbg(ERR_ZONE,
                        "%s: Flash Content size is more than 400K %u\n",
index 799cba4624a5e07a15b3b19151ea2094c5ff9a33..2ff874b6ec89abceabc7ca9dbcf7cf3263c129cc 100644
@@ -895,7 +895,6 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
                                  struct sk_buff *skb,
                                  struct sk_buff_head *list)
 {
-       struct skb_shared_info *shinfo = skb_shinfo(skb);
        RING_IDX cons = queue->rx.rsp_cons;
        struct sk_buff *nskb;
 
@@ -904,15 +903,16 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
                        RING_GET_RESPONSE(&queue->rx, ++cons);
                skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
-               if (shinfo->nr_frags == MAX_SKB_FRAGS) {
+               if (skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS) {
                        unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
 
                        BUG_ON(pull_to <= skb_headlen(skb));
                        __pskb_pull_tail(skb, pull_to - skb_headlen(skb));
                }
-               BUG_ON(shinfo->nr_frags >= MAX_SKB_FRAGS);
+               BUG_ON(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS);
 
-               skb_add_rx_frag(skb, shinfo->nr_frags, skb_frag_page(nfrag),
+               skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+                               skb_frag_page(nfrag),
                                rx->offset, rx->status, PAGE_SIZE);
 
                skb_shinfo(nskb)->nr_frags = 0;
index 605ec47067734cf0233a0fdf9908b6d49b163939..34e0d476c5c615e2c3421bd47133ca7cd1a6bdda 100644
@@ -235,6 +235,8 @@ static inline int qeth_is_ipa_enabled(struct qeth_ipa_info *ipa,
 #define QETH_IDX_FUNC_LEVEL_IQD                 0x4108
 
 #define QETH_BUFSIZE           4096
+#define CCW_CMD_WRITE          0x01
+#define CCW_CMD_READ           0x02
 
 /**
  * some more defs
@@ -592,7 +594,7 @@ static inline struct qeth_ipa_cmd *__ipa_cmd(struct qeth_cmd_buffer *iob)
  */
 struct qeth_channel {
        enum qeth_channel_states state;
-       struct ccw1 ccw;
+       struct ccw1 *ccw;
        spinlock_t iob_lock;
        wait_queue_head_t wait_q;
        struct ccw_device *ccwdev;
@@ -998,7 +1000,7 @@ void qeth_tx_timeout(struct net_device *);
 void qeth_prepare_control_data(struct qeth_card *, int,
                                struct qeth_cmd_buffer *);
 void qeth_release_buffer(struct qeth_channel *, struct qeth_cmd_buffer *);
-void qeth_prepare_ipa_cmd(struct qeth_card *, struct qeth_cmd_buffer *, char);
+void qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob);
 struct qeth_cmd_buffer *qeth_wait_for_buffer(struct qeth_channel *);
 int qeth_query_switch_attributes(struct qeth_card *card,
                                  struct qeth_switch_info *sw_info);
index d09a7110b3813f8fe335880f7e8223b898cbbf37..49f64eb3eab0f759d06a5e3edff6550a06d8af40 100644
@@ -65,7 +65,6 @@ static struct mutex qeth_mod_mutex;
 static void qeth_send_control_data_cb(struct qeth_channel *,
                        struct qeth_cmd_buffer *);
 static struct qeth_cmd_buffer *qeth_get_buffer(struct qeth_channel *);
-static void qeth_setup_ccw(struct qeth_channel *, unsigned char *, __u32);
 static void qeth_free_buffer_pool(struct qeth_card *);
 static int qeth_qdio_establish(struct qeth_card *);
 static void qeth_free_qdio_buffers(struct qeth_card *);
@@ -532,15 +531,24 @@ static inline int qeth_is_cq(struct qeth_card *card, unsigned int queue)
            queue == card->qdio.no_in_queues - 1;
 }
 
+static void qeth_setup_ccw(struct ccw1 *ccw, u8 cmd_code, u32 len, void *data)
+{
+       ccw->cmd_code = cmd_code;
+       ccw->flags = CCW_FLAG_SLI;
+       ccw->count = len;
+       ccw->cda = (__u32) __pa(data);
+}
+
 static int __qeth_issue_next_read(struct qeth_card *card)
 {
-       int rc;
+       struct qeth_channel *channel = &card->read;
        struct qeth_cmd_buffer *iob;
+       int rc;
 
        QETH_CARD_TEXT(card, 5, "issnxrd");
-       if (card->read.state != CH_STATE_UP)
+       if (channel->state != CH_STATE_UP)
                return -EIO;
-       iob = qeth_get_buffer(&card->read);
+       iob = qeth_get_buffer(channel);
        if (!iob) {
                dev_warn(&card->gdev->dev, "The qeth device driver "
                        "failed to recover an error on the device\n");
@@ -548,14 +556,14 @@ static int __qeth_issue_next_read(struct qeth_card *card)
                        "available\n", dev_name(&card->gdev->dev));
                return -ENOMEM;
        }
-       qeth_setup_ccw(&card->read, iob->data, QETH_BUFSIZE);
+       qeth_setup_ccw(channel->ccw, CCW_CMD_READ, QETH_BUFSIZE, iob->data);
        QETH_CARD_TEXT(card, 6, "noirqpnd");
-       rc = ccw_device_start(card->read.ccwdev, &card->read.ccw,
+       rc = ccw_device_start(channel->ccwdev, channel->ccw,
                              (addr_t) iob, 0, 0);
        if (rc) {
                QETH_DBF_MESSAGE(2, "%s error in starting next read ccw! "
                        "rc=%i\n", dev_name(&card->gdev->dev), rc);
-               atomic_set(&card->read.irq_pending, 0);
+               atomic_set(&channel->irq_pending, 0);
                card->read_or_write_problem = 1;
                qeth_schedule_recovery(card);
                wake_up(&card->wait_q);
@@ -744,21 +752,6 @@ static struct qeth_card *CARD_FROM_CDEV(struct ccw_device *cdev)
        return card;
 }
 
-static void qeth_setup_ccw(struct qeth_channel *channel, unsigned char *iob,
-               __u32 len)
-{
-       struct qeth_card *card;
-
-       card = CARD_FROM_CDEV(channel->ccwdev);
-       QETH_CARD_TEXT(card, 4, "setupccw");
-       if (channel == &card->read)
-               memcpy(&channel->ccw, READ_CCW, sizeof(struct ccw1));
-       else
-               memcpy(&channel->ccw, WRITE_CCW, sizeof(struct ccw1));
-       channel->ccw.count = len;
-       channel->ccw.cda = (__u32) __pa(iob);
-}
-
 static struct qeth_cmd_buffer *__qeth_get_buffer(struct qeth_channel *channel)
 {
        __u8 index;
@@ -906,11 +899,22 @@ static void qeth_send_control_data_cb(struct qeth_channel *channel,
        qeth_release_buffer(channel, iob);
 }
 
-static int qeth_setup_channel(struct qeth_channel *channel)
+static int qeth_setup_channel(struct qeth_channel *channel, bool alloc_buffers)
 {
        int cnt;
 
        QETH_DBF_TEXT(SETUP, 2, "setupch");
+
+       channel->ccw = kmalloc(sizeof(struct ccw1), GFP_KERNEL | GFP_DMA);
+       if (!channel->ccw)
+               return -ENOMEM;
+       channel->state = CH_STATE_DOWN;
+       atomic_set(&channel->irq_pending, 0);
+       init_waitqueue_head(&channel->wait_q);
+
+       if (!alloc_buffers)
+               return 0;
+
        for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++) {
                channel->iob[cnt].data =
                        kzalloc(QETH_BUFSIZE, GFP_DMA|GFP_KERNEL);
@@ -922,15 +926,14 @@ static int qeth_setup_channel(struct qeth_channel *channel)
                channel->iob[cnt].rc = 0;
        }
        if (cnt < QETH_CMD_BUFFER_NO) {
+               kfree(channel->ccw);
                while (cnt-- > 0)
                        kfree(channel->iob[cnt].data);
                return -ENOMEM;
        }
        channel->io_buf_no = 0;
-       atomic_set(&channel->irq_pending, 0);
        spin_lock_init(&channel->iob_lock);
 
-       init_waitqueue_head(&channel->wait_q);
        return 0;
 }
 
@@ -1337,6 +1340,7 @@ static void qeth_clean_channel(struct qeth_channel *channel)
        QETH_DBF_TEXT(SETUP, 2, "freech");
        for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++)
                kfree(channel->iob[cnt].data);
+       kfree(channel->ccw);
 }
 
 static void qeth_set_single_write_queues(struct qeth_card *card)
@@ -1393,6 +1397,10 @@ static void qeth_init_qdio_info(struct qeth_card *card)
 {
        QETH_DBF_TEXT(SETUP, 4, "intqdinf");
        atomic_set(&card->qdio.state, QETH_QDIO_UNINITIALIZED);
+       card->qdio.do_prio_queueing = QETH_PRIOQ_DEFAULT;
+       card->qdio.default_out_queue = QETH_DEFAULT_QUEUE;
+       card->qdio.no_out_queues = QETH_MAX_QUEUES;
+
        /* inbound */
        card->qdio.no_in_queues = 1;
        card->qdio.in_buf_size = QETH_IN_BUF_SIZE_DEFAULT;
@@ -1405,12 +1413,10 @@ static void qeth_init_qdio_info(struct qeth_card *card)
        INIT_LIST_HEAD(&card->qdio.init_pool.entry_list);
 }
 
-static void qeth_set_intial_options(struct qeth_card *card)
+static void qeth_set_initial_options(struct qeth_card *card)
 {
        card->options.route4.type = NO_ROUTER;
        card->options.route6.type = NO_ROUTER;
-       card->options.fake_broadcast = 0;
-       card->options.performance_stats = 0;
        card->options.rx_sg_cb = QETH_RX_SG_CB;
        card->options.isolation = ISOLATION_MODE_NONE;
        card->options.cq = QETH_CQ_DISABLED;
@@ -1453,19 +1459,13 @@ static void qeth_start_kernel_thread(struct work_struct *work)
 }
 
 static void qeth_buffer_reclaim_work(struct work_struct *);
-static int qeth_setup_card(struct qeth_card *card)
+static void qeth_setup_card(struct qeth_card *card)
 {
-
        QETH_DBF_TEXT(SETUP, 2, "setupcrd");
        QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
 
-       card->read.state  = CH_STATE_DOWN;
-       card->write.state = CH_STATE_DOWN;
-       card->data.state  = CH_STATE_DOWN;
+       card->info.type = CARD_RDEV(card)->id.driver_info;
        card->state = CARD_STATE_DOWN;
-       card->lan_online = 0;
-       card->read_or_write_problem = 0;
-       card->dev = NULL;
        spin_lock_init(&card->mclock);
        spin_lock_init(&card->lock);
        spin_lock_init(&card->ip_lock);
@@ -1473,24 +1473,15 @@ static int qeth_setup_card(struct qeth_card *card)
        mutex_init(&card->conf_mutex);
        mutex_init(&card->discipline_mutex);
        mutex_init(&card->vid_list_mutex);
-       card->thread_start_mask = 0;
-       card->thread_allowed_mask = 0;
-       card->thread_running_mask = 0;
        INIT_WORK(&card->kernel_thread_starter, qeth_start_kernel_thread);
        INIT_LIST_HEAD(&card->cmd_waiter_list);
        init_waitqueue_head(&card->wait_q);
-       /* initial options */
-       qeth_set_intial_options(card);
+       qeth_set_initial_options(card);
        /* IP address takeover */
        INIT_LIST_HEAD(&card->ipato.entries);
-       card->ipato.enabled = false;
-       card->ipato.invert4 = false;
-       card->ipato.invert6 = false;
-       /* init QDIO stuff */
        qeth_init_qdio_info(card);
        INIT_DELAYED_WORK(&card->buffer_reclaim_work, qeth_buffer_reclaim_work);
        INIT_WORK(&card->close_dev_work, qeth_close_dev_handler);
-       return 0;
 }
 
 static void qeth_core_sl_print(struct seq_file *m, struct service_level *slr)
@@ -1507,19 +1498,23 @@ static struct qeth_card *qeth_alloc_card(void)
        struct qeth_card *card;
 
        QETH_DBF_TEXT(SETUP, 2, "alloccrd");
-       card = kzalloc(sizeof(struct qeth_card), GFP_DMA|GFP_KERNEL);
+       card = kzalloc(sizeof(*card), GFP_KERNEL);
        if (!card)
                goto out;
        QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
-       if (qeth_setup_channel(&card->read))
+       if (qeth_setup_channel(&card->read, true))
                goto out_ip;
-       if (qeth_setup_channel(&card->write))
+       if (qeth_setup_channel(&card->write, true))
                goto out_channel;
+       if (qeth_setup_channel(&card->data, false))
+               goto out_data;
        card->options.layer2 = -1;
        card->qeth_service_level.seq_print = qeth_core_sl_print;
        register_service_level(&card->qeth_service_level);
        return card;
 
+out_data:
+       qeth_clean_channel(&card->write);
 out_channel:
        qeth_clean_channel(&card->read);
 out_ip:
@@ -1528,17 +1523,6 @@ static struct qeth_card *qeth_alloc_card(void)
        return NULL;
 }
 
-static void qeth_determine_card_type(struct qeth_card *card)
-{
-       QETH_DBF_TEXT(SETUP, 2, "detcdtyp");
-
-       card->qdio.do_prio_queueing = QETH_PRIOQ_DEFAULT;
-       card->qdio.default_out_queue = QETH_DEFAULT_QUEUE;
-       card->info.type = CARD_RDEV(card)->id.driver_info;
-       card->qdio.no_out_queues = QETH_MAX_QUEUES;
-       qeth_update_from_chp_desc(card);
-}
-
 static int qeth_clear_channel(struct qeth_channel *channel)
 {
        unsigned long flags;
@@ -1679,13 +1663,10 @@ static int qeth_read_conf_data(struct qeth_card *card, void **buffer,
        if (!rcd_buf)
                return -ENOMEM;
 
-       channel->ccw.cmd_code = ciw->cmd;
-       channel->ccw.cda = (__u32) __pa(rcd_buf);
-       channel->ccw.count = ciw->count;
-       channel->ccw.flags = CCW_FLAG_SLI;
+       qeth_setup_ccw(channel->ccw, ciw->cmd, ciw->count, rcd_buf);
        channel->state = CH_STATE_RCD;
        spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags);
-       ret = ccw_device_start_timeout(channel->ccwdev, &channel->ccw,
+       ret = ccw_device_start_timeout(channel->ccwdev, channel->ccw,
                                       QETH_RCD_PARM, LPM_ANYPATH, 0,
                                       QETH_RCD_TIMEOUT);
        spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
@@ -1856,15 +1837,13 @@ static int qeth_idx_activate_get_answer(struct qeth_channel *channel,
        if (!iob)
                return -ENOMEM;
        iob->callback = idx_reply_cb;
-       memcpy(&channel->ccw, READ_CCW, sizeof(struct ccw1));
-       channel->ccw.count = QETH_BUFSIZE;
-       channel->ccw.cda = (__u32) __pa(iob->data);
+       qeth_setup_ccw(channel->ccw, CCW_CMD_READ, QETH_BUFSIZE, iob->data);
 
        wait_event(card->wait_q,
                   atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0);
        QETH_DBF_TEXT(SETUP, 6, "noirqpnd");
        spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags);
-       rc = ccw_device_start_timeout(channel->ccwdev, &channel->ccw,
+       rc = ccw_device_start_timeout(channel->ccwdev, channel->ccw,
                                      (addr_t) iob, 0, 0, QETH_TIMEOUT);
        spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
 
@@ -1907,9 +1886,8 @@ static int qeth_idx_activate_channel(struct qeth_channel *channel,
        if (!iob)
                return -ENOMEM;
        iob->callback = idx_reply_cb;
-       memcpy(&channel->ccw, WRITE_CCW, sizeof(struct ccw1));
-       channel->ccw.count = IDX_ACTIVATE_SIZE;
-       channel->ccw.cda = (__u32) __pa(iob->data);
+       qeth_setup_ccw(channel->ccw, CCW_CMD_WRITE, IDX_ACTIVATE_SIZE,
+                      iob->data);
        if (channel == &card->write) {
                memcpy(iob->data, IDX_ACTIVATE_WRITE, IDX_ACTIVATE_SIZE);
                memcpy(QETH_TRANSPORT_HEADER_SEQ_NO(iob->data),
@@ -1935,7 +1913,7 @@ static int qeth_idx_activate_channel(struct qeth_channel *channel,
                   atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0);
        QETH_DBF_TEXT(SETUP, 6, "noirqpnd");
        spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags);
-       rc = ccw_device_start_timeout(channel->ccwdev, &channel->ccw,
+       rc = ccw_device_start_timeout(channel->ccwdev, channel->ccw,
                                      (addr_t) iob, 0, 0, QETH_TIMEOUT);
        spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
 
@@ -1987,20 +1965,20 @@ static void qeth_idx_write_cb(struct qeth_channel *channel,
 
        if (!(QETH_IS_IDX_ACT_POS_REPLY(iob->data))) {
                if (QETH_IDX_ACT_CAUSE_CODE(iob->data) == QETH_IDX_ACT_ERR_EXCL)
-                       dev_err(&card->write.ccwdev->dev,
+                       dev_err(&channel->ccwdev->dev,
                                "The adapter is used exclusively by another "
                                "host\n");
                else
                        QETH_DBF_MESSAGE(2, "%s IDX_ACTIVATE on write channel:"
                                " negative reply\n",
-                               dev_name(&card->write.ccwdev->dev));
+                               dev_name(&channel->ccwdev->dev));
                goto out;
        }
        memcpy(&temp, QETH_IDX_ACT_FUNC_LEVEL(iob->data), 2);
        if ((temp & ~0x0100) != qeth_peer_func_level(card->info.func_level)) {
                QETH_DBF_MESSAGE(2, "%s IDX_ACTIVATE on write channel: "
                        "function level mismatch (sent: 0x%x, received: "
-                       "0x%x)\n", dev_name(&card->write.ccwdev->dev),
+                       "0x%x)\n", dev_name(&channel->ccwdev->dev),
                        card->info.func_level, temp);
                goto out;
        }
@@ -2028,20 +2006,20 @@ static void qeth_idx_read_cb(struct qeth_channel *channel,
        if (!(QETH_IS_IDX_ACT_POS_REPLY(iob->data))) {
                switch (QETH_IDX_ACT_CAUSE_CODE(iob->data)) {
                case QETH_IDX_ACT_ERR_EXCL:
-                       dev_err(&card->write.ccwdev->dev,
+                       dev_err(&channel->ccwdev->dev,
                                "The adapter is used exclusively by another "
                                "host\n");
                        break;
                case QETH_IDX_ACT_ERR_AUTH:
                case QETH_IDX_ACT_ERR_AUTH_USER:
-                       dev_err(&card->read.ccwdev->dev,
+                       dev_err(&channel->ccwdev->dev,
                                "Setting the device online failed because of "
                                "insufficient authorization\n");
                        break;
                default:
                        QETH_DBF_MESSAGE(2, "%s IDX_ACTIVATE on read channel:"
                                " negative reply\n",
-                               dev_name(&card->read.ccwdev->dev));
+                               dev_name(&channel->ccwdev->dev));
                }
                QETH_CARD_TEXT_(card, 2, "idxread%c",
                        QETH_IDX_ACT_CAUSE_CODE(iob->data));
@@ -2052,7 +2030,7 @@ static void qeth_idx_read_cb(struct qeth_channel *channel,
        if (temp != qeth_peer_func_level(card->info.func_level)) {
                QETH_DBF_MESSAGE(2, "%s IDX_ACTIVATE on read channel: function "
                        "level mismatch (sent: 0x%x, received: 0x%x)\n",
-                       dev_name(&card->read.ccwdev->dev),
+                       dev_name(&channel->ccwdev->dev),
                        card->info.func_level, temp);
                goto out;
        }
@@ -2069,7 +2047,7 @@ static void qeth_idx_read_cb(struct qeth_channel *channel,
 void qeth_prepare_control_data(struct qeth_card *card, int len,
                struct qeth_cmd_buffer *iob)
 {
-       qeth_setup_ccw(&card->write, iob->data, len);
+       qeth_setup_ccw(iob->channel->ccw, CCW_CMD_WRITE, len, iob->data);
        iob->callback = qeth_release_buffer;
 
        memcpy(QETH_TRANSPORT_HEADER_SEQ_NO(iob->data),
@@ -2116,6 +2094,7 @@ int qeth_send_control_data(struct qeth_card *card, int len,
                                unsigned long cb_cmd),
                void *reply_param)
 {
+       struct qeth_channel *channel = iob->channel;
        int rc;
        unsigned long flags;
        struct qeth_reply *reply = NULL;
@@ -2125,7 +2104,7 @@ int qeth_send_control_data(struct qeth_card *card, int len,
        QETH_CARD_TEXT(card, 2, "sendctl");
 
        if (card->read_or_write_problem) {
-               qeth_release_buffer(iob->channel, iob);
+               qeth_release_buffer(channel, iob);
                return -EIO;
        }
        reply = qeth_alloc_reply(card);
@@ -2137,7 +2116,7 @@ int qeth_send_control_data(struct qeth_card *card, int len,
 
        init_waitqueue_head(&reply->wait_q);
 
-       while (atomic_cmpxchg(&card->write.irq_pending, 0, 1)) ;
+       while (atomic_cmpxchg(&channel->irq_pending, 0, 1)) ;
 
        if (IS_IPA(iob->data)) {
                cmd = __ipa_cmd(iob);
@@ -2157,21 +2136,21 @@ int qeth_send_control_data(struct qeth_card *card, int len,
        timeout = jiffies + event_timeout;
 
        QETH_CARD_TEXT(card, 6, "noirqpnd");
-       spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags);
-       rc = ccw_device_start_timeout(CARD_WDEV(card), &card->write.ccw,
+       spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags);
+       rc = ccw_device_start_timeout(channel->ccwdev, channel->ccw,
                                      (addr_t) iob, 0, 0, event_timeout);
-       spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags);
+       spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
        if (rc) {
                QETH_DBF_MESSAGE(2, "%s qeth_send_control_data: "
                        "ccw_device_start rc = %i\n",
-                       dev_name(&card->write.ccwdev->dev), rc);
+                       dev_name(&channel->ccwdev->dev), rc);
                QETH_CARD_TEXT_(card, 2, " err%d", rc);
                spin_lock_irqsave(&card->lock, flags);
                list_del_init(&reply->list);
                qeth_put_reply(reply);
                spin_unlock_irqrestore(&card->lock, flags);
-               qeth_release_buffer(iob->channel, iob);
-               atomic_set(&card->write.irq_pending, 0);
+               qeth_release_buffer(channel, iob);
+               atomic_set(&channel->irq_pending, 0);
                wake_up(&card->wait_q);
                return rc;
        }
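
The refactor above no longer hard-codes the write channel: the send path derives the ccw device, the ccw, the irq_pending flag and the buffer release from iob->channel, so the same code can drive whichever channel owns the buffer. A minimal sketch of the resulting skeleton, with the reply bookkeeping and IPA sequence-number handling omitted; example_send() is illustrative only, QETH_IPA_TIMEOUT stands in for the computed event_timeout, and the other names come from the hunk above.

/*
 * Sketch only: the channel-generic control-data send path after this
 * patch, reduced to its locking and error-unwind skeleton.
 */
static int example_send(struct qeth_card *card, struct qeth_cmd_buffer *iob)
{
        struct qeth_channel *channel = iob->channel;    /* was &card->write */
        unsigned long flags;
        int rc;

        /* claim the channel that owns this buffer */
        while (atomic_cmpxchg(&channel->irq_pending, 0, 1))
                ;

        spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags);
        rc = ccw_device_start_timeout(channel->ccwdev, channel->ccw,
                                      (addr_t) iob, 0, 0, QETH_IPA_TIMEOUT);
        spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);

        if (rc) {
                /* unwind exactly as the hunk above does */
                qeth_release_buffer(channel, iob);
                atomic_set(&channel->irq_pending, 0);
                wake_up(&card->wait_q);
        }
        return rc;
}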
@@ -2367,12 +2346,19 @@ static int qeth_ulp_enable_cb(struct qeth_card *card, struct qeth_reply *reply,
        return 0;
 }
 
+static u8 qeth_mpc_select_prot_type(struct qeth_card *card)
+{
+       if (IS_OSN(card))
+               return QETH_PROT_OSN2;
+       return (card->options.layer2 == 1) ? QETH_PROT_LAYER2 : QETH_PROT_TCPIP;
+}
+
 static int qeth_ulp_enable(struct qeth_card *card)
 {
-       int rc;
-       char prot_type;
+       u8 prot_type = qeth_mpc_select_prot_type(card);
        struct qeth_cmd_buffer *iob;
        u16 max_mtu;
+       int rc;
 
        /*FIXME: trace view callbacks*/
        QETH_DBF_TEXT(SETUP, 2, "ulpenabl");
@@ -2381,14 +2367,6 @@ static int qeth_ulp_enable(struct qeth_card *card)
        memcpy(iob->data, ULP_ENABLE, ULP_ENABLE_SIZE);
 
        *(QETH_ULP_ENABLE_LINKNUM(iob->data)) = (u8) card->dev->dev_port;
-       if (card->options.layer2)
-               if (card->info.type == QETH_CARD_TYPE_OSN)
-                       prot_type = QETH_PROT_OSN2;
-               else
-                       prot_type = QETH_PROT_LAYER2;
-       else
-               prot_type = QETH_PROT_TCPIP;
-
        memcpy(QETH_ULP_ENABLE_PROT_TYPE(iob->data), &prot_type, 1);
        memcpy(QETH_ULP_ENABLE_DEST_ADDR(iob->data),
               &card->token.cm_connection_r, QETH_MPC_TOKEN_LENGTH);
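
The duplicated layer2/OSN selection deleted here, and again in qeth_send_ipa_cmd() further down, is consolidated into the new qeth_mpc_select_prot_type() helper, so every PDU builder picks the one-byte protocol type the same way. A minimal sketch of the resulting call pattern; example_fill_prot_type() is illustrative, the other names are taken from the hunks above.

/*
 * Sketch only: filling the protocol-type byte via the new helper
 * instead of open-coding the layer2/OSN checks at each call site.
 */
static void example_fill_prot_type(struct qeth_card *card,
                                   struct qeth_cmd_buffer *iob)
{
        u8 prot_type = qeth_mpc_select_prot_type(card);

        memcpy(QETH_ULP_ENABLE_PROT_TYPE(iob->data), &prot_type, 1);
}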
@@ -2939,9 +2917,10 @@ struct qeth_cmd_buffer *qeth_get_ipacmd_buffer(struct qeth_card *card,
 }
 EXPORT_SYMBOL_GPL(qeth_get_ipacmd_buffer);
 
-void qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob,
-               char prot_type)
+void qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob)
 {
+       u8 prot_type = qeth_mpc_select_prot_type(card);
+
        memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE);
        memcpy(QETH_IPA_CMD_PROT_TYPE(iob->data), &prot_type, 1);
        memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data),
@@ -2961,18 +2940,9 @@ int qeth_send_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob,
                void *reply_param)
 {
        int rc;
-       char prot_type;
 
        QETH_CARD_TEXT(card, 4, "sendipa");
-
-       if (card->options.layer2)
-               if (card->info.type == QETH_CARD_TYPE_OSN)
-                       prot_type = QETH_PROT_OSN2;
-               else
-                       prot_type = QETH_PROT_LAYER2;
-       else
-               prot_type = QETH_PROT_TCPIP;
-       qeth_prepare_ipa_cmd(card, iob, prot_type);
+       qeth_prepare_ipa_cmd(card, iob);
        rc = qeth_send_control_data(card, IPA_CMD_LENGTH,
                                                iob, reply_cb, reply_param);
        if (rc == -ETIME) {
@@ -5066,11 +5036,11 @@ static int qeth_qdio_establish(struct qeth_card *card)
 
 static void qeth_core_free_card(struct qeth_card *card)
 {
-
        QETH_DBF_TEXT(SETUP, 2, "freecrd");
        QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
        qeth_clean_channel(&card->read);
        qeth_clean_channel(&card->write);
+       qeth_clean_channel(&card->data);
        qeth_free_qdio_buffers(card);
        unregister_service_level(&card->qeth_service_level);
        kfree(card);
@@ -5794,12 +5764,8 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev)
        gdev->cdev[1]->handler = qeth_irq;
        gdev->cdev[2]->handler = qeth_irq;
 
-       qeth_determine_card_type(card);
-       rc = qeth_setup_card(card);
-       if (rc) {
-               QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc);
-               goto err_card;
-       }
+       qeth_setup_card(card);
+       qeth_update_from_chp_desc(card);
 
        card->dev = qeth_alloc_netdev(card);
        if (!card->dev)
index 22428b769f9b1743145c846afcdd98ca4c07e58c..5bcb8dafc3ee506f9ff3487bdf8e01cf274e35b2 100644 (file)
@@ -146,17 +146,6 @@ unsigned char IPA_PDU_HEADER[] = {
 };
 EXPORT_SYMBOL_GPL(IPA_PDU_HEADER);
 
-unsigned char WRITE_CCW[] = {
-       0x01, CCW_FLAG_SLI, 0, 0,
-       0, 0, 0, 0
-};
-
-unsigned char READ_CCW[] = {
-       0x02, CCW_FLAG_SLI, 0, 0,
-       0, 0, 0, 0
-};
-
-
 struct ipa_rc_msg {
        enum qeth_ipa_return_codes rc;
        char *msg;
index cf5ad94e960a0b86ca1d70ff7e69edabc8ba6cb3..aa8b9196b089e0c9b2788493d5103dd5b38ad33d 100644 (file)
@@ -818,10 +818,6 @@ extern char *qeth_get_ipa_cmd_name(enum qeth_ipa_cmds cmd);
 /* END OF   IP Assist related definitions                                    */
 /*****************************************************************************/
 
-
-extern unsigned char WRITE_CCW[];
-extern unsigned char READ_CCW[];
-
 extern unsigned char CM_ENABLE[];
 #define CM_ENABLE_SIZE 0x63
 #define QETH_CM_ENABLE_ISSUER_RM_TOKEN(buffer) (buffer + 0x2c)
index c1829a4b955dfe8fcd2c108ac2c4598f713167ad..710fa74892ae55a65f3849147bb3986946c68bf8 100644 (file)
@@ -1239,25 +1239,26 @@ EXPORT_SYMBOL_GPL(qeth_l2_discipline);
 static int qeth_osn_send_control_data(struct qeth_card *card, int len,
                           struct qeth_cmd_buffer *iob)
 {
+       struct qeth_channel *channel = iob->channel;
        unsigned long flags;
        int rc = 0;
 
        QETH_CARD_TEXT(card, 5, "osndctrd");
 
        wait_event(card->wait_q,
-                  atomic_cmpxchg(&card->write.irq_pending, 0, 1) == 0);
+                  atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0);
        qeth_prepare_control_data(card, len, iob);
        QETH_CARD_TEXT(card, 6, "osnoirqp");
-       spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags);
-       rc = ccw_device_start_timeout(CARD_WDEV(card), &card->write.ccw,
+       spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags);
+       rc = ccw_device_start_timeout(channel->ccwdev, channel->ccw,
                                      (addr_t) iob, 0, 0, QETH_IPA_TIMEOUT);
-       spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags);
+       spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
        if (rc) {
                QETH_DBF_MESSAGE(2, "qeth_osn_send_control_data: "
                           "ccw_device_start rc = %i\n", rc);
                QETH_CARD_TEXT_(card, 2, " err%d", rc);
-               qeth_release_buffer(iob->channel, iob);
-               atomic_set(&card->write.irq_pending, 0);
+               qeth_release_buffer(channel, iob);
+               atomic_set(&channel->irq_pending, 0);
                wake_up(&card->wait_q);
        }
        return rc;
@@ -1270,7 +1271,7 @@ static int qeth_osn_send_ipa_cmd(struct qeth_card *card,
 
        QETH_CARD_TEXT(card, 4, "osndipa");
 
-       qeth_prepare_ipa_cmd(card, iob, QETH_PROT_OSN2);
+       qeth_prepare_ipa_cmd(card, iob);
        s1 = (u16)(IPA_PDU_HEADER_SIZE + data_len);
        s2 = (u16)data_len;
        memcpy(QETH_IPA_PDU_LEN_TOTAL(iob->data), &s1, 2);
index 1833e7505aca45fd5659437677c34494538fe2e2..7175086677fb4be85cc41dd2748157e71986fd30 100644 (file)
@@ -117,9 +117,9 @@ static bool qeth_l3_is_addr_covered_by_ipato(struct qeth_card *card,
        int rc = 0;
 
        if (!card->ipato.enabled)
-               return 0;
+               return false;
        if (addr->type != QETH_IP_TYPE_NORMAL)
-               return 0;
+               return false;
 
        qeth_l3_convert_addr_to_bits((u8 *) &addr->u, addr_bits,
                                  (addr->proto == QETH_PROT_IPV4)? 4:16);
index c574dd210500ad836a17cf89d7f2d70a5dc0effb..df30e1323252638a0c92c8dd2c9696503e218ed5 100644 (file)
@@ -89,27 +89,6 @@ config SSB_HOST_SOC
 
          If unsure, say N
 
-config SSB_SILENT
-       bool "No SSB kernel messages"
-       depends on SSB && EXPERT
-       help
-         This option turns off all Sonics Silicon Backplane printks.
-         Note that you won't be able to identify problems, once
-         messages are turned off.
-         This might only be desired for production kernels on
-         embedded devices to reduce the kernel size.
-
-         Say N
-
-config SSB_DEBUG
-       bool "SSB debugging"
-       depends on SSB && !SSB_SILENT
-       help
-         This turns on additional runtime checks and debugging
-         messages. Turn this on for SSB troubleshooting.
-
-         If unsure, say N
-
 config SSB_SERIAL
        bool
        depends on SSB
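
With SSB_SILENT and SSB_DEBUG gone, the ssb core's messages and sanity checks are always built; the remaining hunks in this series accordingly replace the driver-private ssb_*() and SSB_WARN_ON() wrappers (defined in ssb_private.h, which is not part of this diff) with the kernel's standard pr_*(), dev_*() and WARN_ON() helpers. A sketch of the pattern, assuming the old wrappers compiled down to no-ops under those config options; example_check() is illustrative, the called helpers are the ones used in the hunks below.

/*
 * Sketch only: the logging/assert style used throughout drivers/ssb/
 * after this patch.
 */
static void example_check(struct ssb_device *sdev, size_t count)
{
        /* was SSB_WARN_ON(count & 1): inert without CONFIG_SSB_DEBUG */
        WARN_ON(count & 1);

        /* was an ssb_err() wrapper: silenced by CONFIG_SSB_SILENT */
        dev_err(sdev->dev, "odd-sized block transfer (%zu bytes)\n", count);
}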
index bed2fedeb05710f8d17313ee49e5a6e00ce7f471..9c7316b5685fdd6f6a5fc1f87d2754fda251bb80 100644 (file)
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/pci.h>
 #include <linux/module.h>
 #include <linux/ssb/ssb.h>
 
-#include "ssb_private.h"
-
 
 static const struct pci_device_id b43_pci_bridge_tbl[] = {
        { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4301) },
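
Alongside the logging conversion, every ssb source file moves #include "ssb_private.h" above the <linux/...> headers, as seen here and repeated in the files below. Presumably this is so that a pr_fmt() definition in the private header takes effect before printk.h is pulled in indirectly, giving the new pr_*() calls a consistent prefix; the pr_fmt() definition shown is an assumption, not part of the hunks in this diff.

/* Sketch, under the assumption that ssb_private.h now provides pr_fmt(). */

/* ssb_private.h (assumed) */
#define pr_fmt(fmt)     KBUILD_MODNAME ": " fmt

/* any drivers/ssb/*.c file after this patch */
#include "ssb_private.h"        /* first, before anything that drags in printk.h */

#include <linux/ssb/ssb.h>
#include <linux/pci.h>

/*
 * With this ordering, pr_err("Failed to switch to core %u\n", coreidx)
 * is logged as "ssb: Failed to switch to core 0" rather than unprefixed.
 */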
index d70568ea02d533654705b9220b555383a843b14e..f51f150307dfbcab4d6ed3ddad9077e62427e37a 100644 (file)
@@ -6,6 +6,8 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/ssb/ssb.h>
 #include <linux/slab.h>
 #include <linux/module.h>
@@ -15,8 +17,6 @@
 #include <pcmcia/ds.h>
 #include <pcmcia/cisreg.h>
 
-#include "ssb_private.h"
-
 static const struct pcmcia_device_id ssb_host_pcmcia_tbl[] = {
        PCMCIA_DEVICE_MANF_CARD(0x2D0, 0x448),
        PCMCIA_DEVICE_MANF_CARD(0x2D0, 0x476),
@@ -70,7 +70,7 @@ static int ssb_host_pcmcia_probe(struct pcmcia_device *dev)
 err_kfree_ssb:
        kfree(ssb);
 out_error:
-       ssb_err("Initialization failed (%d, %d)\n", res, err);
+       dev_err(&dev->dev, "Initialization failed (%d, %d)\n", res, err);
        return err;
 }
 
index 7cb7d2c8fd8664d567a52374842f52d65a790309..99a4656d113d5dc67233e3dfd24dc608638da257 100644 (file)
@@ -9,14 +9,14 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/ssb/ssb.h>
 #include <linux/ssb/ssb_regs.h>
 #include <linux/export.h>
 #include <linux/pci.h>
 #include <linux/bcm47xx_wdt.h>
 
-#include "ssb_private.h"
-
 
 /* Clock sources */
 enum ssb_clksrc {
@@ -56,7 +56,7 @@ void ssb_chipco_set_clockmode(struct ssb_chipcommon *cc,
 
        if (cc->capabilities & SSB_CHIPCO_CAP_PMU)
                return; /* PMU controls clockmode, separated function needed */
-       SSB_WARN_ON(ccdev->id.revision >= 20);
+       WARN_ON(ccdev->id.revision >= 20);
 
        /* chipcommon cores prior to rev6 don't support dynamic clock control */
        if (ccdev->id.revision < 6)
@@ -111,7 +111,7 @@ void ssb_chipco_set_clockmode(struct ssb_chipcommon *cc,
                }
                break;
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
 }
 
@@ -164,7 +164,7 @@ static int chipco_pctl_clockfreqlimit(struct ssb_chipcommon *cc, int get_max)
                        divisor = 32;
                        break;
                default:
-                       SSB_WARN_ON(1);
+                       WARN_ON(1);
                }
        } else if (cc->dev->id.revision < 10) {
                switch (clocksrc) {
@@ -277,7 +277,7 @@ static void calc_fast_powerup_delay(struct ssb_chipcommon *cc)
        minfreq = chipco_pctl_clockfreqlimit(cc, 0);
        pll_on_delay = chipco_read32(cc, SSB_CHIPCO_PLLONDELAY);
        tmp = (((pll_on_delay + 2) * 1000000) + (minfreq - 1)) / minfreq;
-       SSB_WARN_ON(tmp & ~0xFFFF);
+       WARN_ON(tmp & ~0xFFFF);
 
        cc->fast_pwrup_delay = tmp;
 }
@@ -354,7 +354,7 @@ void ssb_chipcommon_init(struct ssb_chipcommon *cc)
 
        if (cc->dev->id.revision >= 11)
                cc->status = chipco_read32(cc, SSB_CHIPCO_CHIPSTAT);
-       ssb_dbg("chipcommon status is 0x%x\n", cc->status);
+       dev_dbg(cc->dev->dev, "chipcommon status is 0x%x\n", cc->status);
 
        if (cc->dev->id.revision >= 20) {
                chipco_write32(cc, SSB_CHIPCO_GPIOPULLUP, 0);
index c5352ea4821ea0df593c7043ac911ee891f103b0..0f60e90ded267bad7bea899afc6f6f34f8b9f02e 100644 (file)
@@ -8,6 +8,8 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/ssb/ssb.h>
 #include <linux/ssb/ssb_regs.h>
 #include <linux/ssb/ssb_driver_chipcommon.h>
@@ -17,8 +19,6 @@
 #include <linux/bcm47xx_nvram.h>
 #endif
 
-#include "ssb_private.h"
-
 static u32 ssb_chipco_pll_read(struct ssb_chipcommon *cc, u32 offset)
 {
        chipco_write32(cc, SSB_CHIPCO_PLLCTL_ADDR, offset);
@@ -110,7 +110,7 @@ static void ssb_pmu0_pllinit_r0(struct ssb_chipcommon *cc,
                return;
        }
 
-       ssb_info("Programming PLL to %u.%03u MHz\n",
+       dev_info(cc->dev->dev, "Programming PLL to %u.%03u MHz\n",
                 crystalfreq / 1000, crystalfreq % 1000);
 
        /* First turn the PLL off. */
@@ -128,7 +128,7 @@ static void ssb_pmu0_pllinit_r0(struct ssb_chipcommon *cc,
                              ~(1 << SSB_PMURES_5354_BB_PLL_PU));
                break;
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
        for (i = 1500; i; i--) {
                tmp = chipco_read32(cc, SSB_CHIPCO_CLKCTLST);
@@ -138,7 +138,7 @@ static void ssb_pmu0_pllinit_r0(struct ssb_chipcommon *cc,
        }
        tmp = chipco_read32(cc, SSB_CHIPCO_CLKCTLST);
        if (tmp & SSB_CHIPCO_CLKCTLST_HAVEHT)
-               ssb_emerg("Failed to turn the PLL off!\n");
+               dev_emerg(cc->dev->dev, "Failed to turn the PLL off!\n");
 
        /* Set PDIV in PLL control 0. */
        pllctl = ssb_chipco_pll_read(cc, SSB_PMU0_PLLCTL0);
@@ -249,7 +249,7 @@ static void ssb_pmu1_pllinit_r0(struct ssb_chipcommon *cc,
                return;
        }
 
-       ssb_info("Programming PLL to %u.%03u MHz\n",
+       dev_info(cc->dev->dev, "Programming PLL to %u.%03u MHz\n",
                 crystalfreq / 1000, crystalfreq % 1000);
 
        /* First turn the PLL off. */
@@ -265,7 +265,7 @@ static void ssb_pmu1_pllinit_r0(struct ssb_chipcommon *cc,
                buffer_strength = 0x222222;
                break;
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
        for (i = 1500; i; i--) {
                tmp = chipco_read32(cc, SSB_CHIPCO_CLKCTLST);
@@ -275,7 +275,7 @@ static void ssb_pmu1_pllinit_r0(struct ssb_chipcommon *cc,
        }
        tmp = chipco_read32(cc, SSB_CHIPCO_CLKCTLST);
        if (tmp & SSB_CHIPCO_CLKCTLST_HAVEHT)
-               ssb_emerg("Failed to turn the PLL off!\n");
+               dev_emerg(cc->dev->dev, "Failed to turn the PLL off!\n");
 
        /* Set p1div and p2div. */
        pllctl = ssb_chipco_pll_read(cc, SSB_PMU1_PLLCTL0);
@@ -349,7 +349,7 @@ static void ssb_pmu_pll_init(struct ssb_chipcommon *cc)
        case 43222:
                break;
        default:
-               ssb_err("ERROR: PLL init unknown for device %04X\n",
+               dev_err(cc->dev->dev, "ERROR: PLL init unknown for device %04X\n",
                        bus->chip_id);
        }
 }
@@ -471,7 +471,7 @@ static void ssb_pmu_resources_init(struct ssb_chipcommon *cc)
                max_msk = 0xFFFFF;
                break;
        default:
-               ssb_err("ERROR: PMU resource config unknown for device %04X\n",
+               dev_err(cc->dev->dev, "ERROR: PMU resource config unknown for device %04X\n",
                        bus->chip_id);
        }
 
@@ -501,7 +501,7 @@ static void ssb_pmu_resources_init(struct ssb_chipcommon *cc)
                                              ~(depend_tab[i].depend));
                                break;
                        default:
-                               SSB_WARN_ON(1);
+                               WARN_ON(1);
                        }
                }
        }
@@ -524,7 +524,7 @@ void ssb_pmu_init(struct ssb_chipcommon *cc)
        pmucap = chipco_read32(cc, SSB_CHIPCO_PMU_CAP);
        cc->pmu.rev = (pmucap & SSB_CHIPCO_PMU_CAP_REVISION);
 
-       ssb_dbg("Found rev %u PMU (capabilities 0x%08X)\n",
+       dev_dbg(cc->dev->dev, "Found rev %u PMU (capabilities 0x%08X)\n",
                cc->pmu.rev, pmucap);
 
        if (cc->pmu.rev == 1)
@@ -568,12 +568,12 @@ void ssb_pmu_set_ldo_voltage(struct ssb_chipcommon *cc,
                        mask = 0x3F;
                        break;
                default:
-                       SSB_WARN_ON(1);
+                       WARN_ON(1);
                        return;
                }
                break;
        case 0x4312:
-               if (SSB_WARN_ON(id != LDO_PAREF))
+               if (WARN_ON(id != LDO_PAREF))
                        return;
                addr = 0;
                shift = 21;
@@ -636,7 +636,7 @@ u32 ssb_pmu_get_alp_clock(struct ssb_chipcommon *cc)
        case 0x5354:
                return ssb_pmu_get_alp_clock_clk0(cc);
        default:
-               ssb_err("ERROR: PMU alp clock unknown for device %04X\n",
+               dev_err(cc->dev->dev, "ERROR: PMU alp clock unknown for device %04X\n",
                        bus->chip_id);
                return 0;
        }
@@ -651,7 +651,7 @@ u32 ssb_pmu_get_cpu_clock(struct ssb_chipcommon *cc)
                /* 5354 chip uses a non programmable PLL of frequency 240MHz */
                return 240000000;
        default:
-               ssb_err("ERROR: PMU cpu clock unknown for device %04X\n",
+               dev_err(cc->dev->dev, "ERROR: PMU cpu clock unknown for device %04X\n",
                        bus->chip_id);
                return 0;
        }
@@ -665,7 +665,7 @@ u32 ssb_pmu_get_controlclock(struct ssb_chipcommon *cc)
        case 0x5354:
                return 120000000;
        default:
-               ssb_err("ERROR: PMU controlclock unknown for device %04X\n",
+               dev_err(cc->dev->dev, "ERROR: PMU controlclock unknown for device %04X\n",
                        bus->chip_id);
                return 0;
        }
@@ -705,9 +705,9 @@ void ssb_pmu_spuravoid_pllupdate(struct ssb_chipcommon *cc, int spuravoid)
                pmu_ctl = SSB_CHIPCO_PMU_CTL_PLL_UPD;
                break;
        default:
-               ssb_printk(KERN_ERR PFX
-                          "Unknown spuravoidance settings for chip 0x%04X, not changing PLL\n",
-                          cc->dev->bus->chip_id);
+               dev_err(cc->dev->dev,
+                       "Unknown spuravoidance settings for chip 0x%04X, not changing PLL\n",
+                       cc->dev->bus->chip_id);
                return;
        }
 
index 937fc31971a785061e9273f849ed515360252063..fac0e6828288c28585ecb0be5c9430c42f78afe3 100644 (file)
@@ -5,10 +5,10 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
-#include <linux/ssb/ssb.h>
-
 #include "ssb_private.h"
 
+#include <linux/ssb/ssb.h>
+
 static struct resource ssb_sflash_resource = {
        .name   = "ssb_sflash",
        .start  = SSB_FLASH2,
@@ -80,7 +80,7 @@ static void ssb_sflash_cmd(struct ssb_chipcommon *cc, u32 opcode)
                        return;
                cpu_relax();
        }
-       pr_err("SFLASH control command failed (timeout)!\n");
+       dev_err(cc->dev->dev, "SFLASH control command failed (timeout)!\n");
 }
 
 /* Initialize serial flash access */
index 59385fdab5b0d253dd26587e55390b7fac55a94f..06b68dd6e0221fcf71bb0b59ae7ed57c46fd5131 100644 (file)
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/serial.h>
 #include <linux/serial_core.h>
 #include <linux/serial_reg.h>
 
-#include "ssb_private.h"
-
 
 static inline u32 extif_read32(struct ssb_extif *extif, u16 offset)
 {
index e9734051e3c4adaedaa4131948355d5dd6fd1e00..ebee6b0e3c346c9dc29add734fa2df4118614939 100644 (file)
@@ -242,7 +242,7 @@ static int ssb_gige_probe(struct ssb_device *sdev,
 bool pdev_is_ssb_gige_core(struct pci_dev *pdev)
 {
        if (!pdev->resource[0].name)
-               return 0;
+               return false;
        return (strcmp(pdev->resource[0].name, SSB_GIGE_MEM_RES_NAME) == 0);
 }
 EXPORT_SYMBOL(pdev_is_ssb_gige_core);
index 796e22037bc42afd75d054808b0acf17386fc9f9..e809dae4c470adde136c4bb60aa1a762bae821a2 100644 (file)
@@ -8,6 +8,8 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/gpio/driver.h>
 #include <linux/irq.h>
 #include <linux/interrupt.h>
@@ -15,8 +17,6 @@
 #include <linux/export.h>
 #include <linux/ssb/ssb.h>
 
-#include "ssb_private.h"
-
 
 /**************************************************
  * Shared
@@ -461,7 +461,7 @@ int ssb_gpio_init(struct ssb_bus *bus)
        else if (ssb_extif_available(&bus->extif))
                return ssb_gpio_extif_init(bus);
        else
-               SSB_WARN_ON(1);
+               WARN_ON(1);
 
        return -1;
 }
@@ -473,7 +473,7 @@ int ssb_gpio_unregister(struct ssb_bus *bus)
                gpiochip_remove(&bus->gpio);
                return 0;
        } else {
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
 
        return -1;
index f87efef422520159d0ff7665186b2bcc01cd5b9d..1ca2ac5ef2b80f2f2e447f370c26584cd4df256b 100644 (file)
@@ -8,6 +8,8 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/ssb/ssb.h>
 
 #include <linux/mtd/physmap.h>
@@ -19,8 +21,6 @@
 #include <linux/bcm47xx_nvram.h>
 #endif
 
-#include "ssb_private.h"
-
 static const char * const part_probes[] = { "bcm47xxpart", NULL };
 
 static struct physmap_flash_data ssb_pflash_data = {
@@ -170,14 +170,15 @@ static void set_irq(struct ssb_device *dev, unsigned int irq)
                irqflag |= (ipsflag & ~ipsflag_irq_mask[irq]);
                ssb_write32(mdev, SSB_IPSFLAG, irqflag);
        }
-       ssb_dbg("set_irq: core 0x%04x, irq %d => %d\n",
+       dev_dbg(dev->dev, "set_irq: core 0x%04x, irq %d => %d\n",
                dev->id.coreid, oldirq+2, irq+2);
 }
 
 static void print_irq(struct ssb_device *dev, unsigned int irq)
 {
        static const char *irq_name[] = {"2(S)", "3", "4", "5", "6", "D", "I"};
-       ssb_dbg("core 0x%04x, irq : %s%s %s%s %s%s %s%s %s%s %s%s %s%s\n",
+       dev_dbg(dev->dev,
+               "core 0x%04x, irq : %s%s %s%s %s%s %s%s %s%s %s%s %s%s\n",
                dev->id.coreid,
                irq_name[0], irq == 0 ? "*" : " ",
                irq_name[1], irq == 1 ? "*" : " ",
@@ -229,11 +230,11 @@ static void ssb_mips_flash_detect(struct ssb_mipscore *mcore)
        switch (bus->chipco.capabilities & SSB_CHIPCO_CAP_FLASHT) {
        case SSB_CHIPCO_FLASHT_STSER:
        case SSB_CHIPCO_FLASHT_ATSER:
-               pr_debug("Found serial flash\n");
+               dev_dbg(mcore->dev->dev, "Found serial flash\n");
                ssb_sflash_init(&bus->chipco);
                break;
        case SSB_CHIPCO_FLASHT_PARA:
-               pr_debug("Found parallel flash\n");
+               dev_dbg(mcore->dev->dev, "Found parallel flash\n");
                pflash->present = true;
                pflash->window = SSB_FLASH2;
                pflash->window_size = SSB_FLASH2_SZ;
@@ -299,7 +300,7 @@ void ssb_mipscore_init(struct ssb_mipscore *mcore)
        if (!mcore->dev)
                return; /* We don't have a MIPS core */
 
-       ssb_dbg("Initializing MIPS core...\n");
+       dev_dbg(mcore->dev->dev, "Initializing MIPS core...\n");
 
        bus = mcore->dev->bus;
        hz = ssb_clockspeed(bus);
@@ -347,7 +348,7 @@ void ssb_mipscore_init(struct ssb_mipscore *mcore)
                        break;
                }
        }
-       ssb_dbg("after irq reconfiguration\n");
+       dev_dbg(mcore->dev->dev, "after irq reconfiguration\n");
        dump_irq(bus);
 
        ssb_mips_serial_init(mcore);
index 5fe1c22e289b881cacebbf081d7c245fad7d2098..6a5622e0ded5443f02a8be85763a326be462be45 100644 (file)
@@ -8,14 +8,14 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/ssb/ssb.h>
 #include <linux/pci.h>
 #include <linux/export.h>
 #include <linux/delay.h>
 #include <linux/ssb/ssb_embedded.h>
 
-#include "ssb_private.h"
-
 static u32 ssb_pcie_read(struct ssb_pcicore *pc, u32 address);
 static void ssb_pcie_write(struct ssb_pcicore *pc, u32 address, u32 data);
 static u16 ssb_pcie_mdio_read(struct ssb_pcicore *pc, u8 device, u8 address);
@@ -115,7 +115,7 @@ static int ssb_extpci_read_config(struct ssb_pcicore *pc,
        u32 addr, val;
        void __iomem *mmio;
 
-       SSB_WARN_ON(!pc->hostmode);
+       WARN_ON(!pc->hostmode);
        if (unlikely(len != 1 && len != 2 && len != 4))
                goto out;
        addr = get_cfgspace_addr(pc, bus, dev, func, off);
@@ -161,7 +161,7 @@ static int ssb_extpci_write_config(struct ssb_pcicore *pc,
        u32 addr, val = 0;
        void __iomem *mmio;
 
-       SSB_WARN_ON(!pc->hostmode);
+       WARN_ON(!pc->hostmode);
        if (unlikely(len != 1 && len != 2 && len != 4))
                goto out;
        addr = get_cfgspace_addr(pc, bus, dev, func, off);
@@ -263,7 +263,7 @@ int ssb_pcicore_plat_dev_init(struct pci_dev *d)
                return -ENODEV;
        }
 
-       ssb_info("PCI: Fixing up device %s\n", pci_name(d));
+       dev_info(&d->dev, "PCI: Fixing up device %s\n", pci_name(d));
 
        /* Fix up interrupt lines */
        d->irq = ssb_mips_irq(extpci_core->dev) + 2;
@@ -284,12 +284,12 @@ static void ssb_pcicore_fixup_pcibridge(struct pci_dev *dev)
        if (dev->bus->number != 0 || PCI_SLOT(dev->devfn) != 0)
                return;
 
-       ssb_info("PCI: Fixing up bridge %s\n", pci_name(dev));
+       dev_info(&dev->dev, "PCI: Fixing up bridge %s\n", pci_name(dev));
 
        /* Enable PCI bridge bus mastering and memory space */
        pci_set_master(dev);
        if (pcibios_enable_device(dev, ~0) < 0) {
-               ssb_err("PCI: SSB bridge enable failed\n");
+               dev_err(&dev->dev, "PCI: SSB bridge enable failed\n");
                return;
        }
 
@@ -298,7 +298,8 @@ static void ssb_pcicore_fixup_pcibridge(struct pci_dev *dev)
 
        /* Make sure our latency is high enough to handle the devices behind us */
        lat = 168;
-       ssb_info("PCI: Fixing latency timer of device %s to %u\n",
+       dev_info(&dev->dev,
+                "PCI: Fixing latency timer of device %s to %u\n",
                 pci_name(dev), lat);
        pci_write_config_byte(dev, PCI_LATENCY_TIMER, lat);
 }
@@ -322,7 +323,7 @@ static void ssb_pcicore_init_hostmode(struct ssb_pcicore *pc)
                return;
        extpci_core = pc;
 
-       ssb_dbg("PCIcore in host mode found\n");
+       dev_dbg(pc->dev->dev, "PCIcore in host mode found\n");
        /* Reset devices on the external PCI bus */
        val = SSB_PCICORE_CTL_RST_OE;
        val |= SSB_PCICORE_CTL_CLK_OE;
@@ -337,7 +338,7 @@ static void ssb_pcicore_init_hostmode(struct ssb_pcicore *pc)
        udelay(1); /* Assertion time demanded by the PCI standard */
 
        if (pc->dev->bus->has_cardbus_slot) {
-               ssb_dbg("CardBus slot detected\n");
+               dev_dbg(pc->dev->dev, "CardBus slot detected\n");
                pc->cardbusmode = 1;
                /* GPIO 1 resets the bridge */
                ssb_gpio_out(pc->dev->bus, 1, 1);
@@ -701,7 +702,7 @@ int ssb_pcicore_dev_irqvecs_enable(struct ssb_pcicore *pc,
                /* Calculate the "coremask" for the device. */
                coremask = (1 << dev->core_index);
 
-               SSB_WARN_ON(bus->bustype != SSB_BUSTYPE_PCI);
+               WARN_ON(bus->bustype != SSB_BUSTYPE_PCI);
                err = pci_read_config_dword(bus->host_pci, SSB_PCI_IRQMASK, &tmp);
                if (err)
                        goto out;
index 55e101115038ab22c211e00ef5a2f3cad6807509..8254ed25e0637a1592ca8c6497a1793a3eda288a 100644 (file)
@@ -9,6 +9,8 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/export.h>
 #include <linux/platform_device.h>
 #include <linux/ssb/ssb.h>
@@ -17,8 +19,6 @@
 #include <linux/ssb/ssb_driver_gige.h>
 #include <linux/pci.h>
 
-#include "ssb_private.h"
-
 
 int ssb_watchdog_timer_set(struct ssb_bus *bus, u32 ticks)
 {
@@ -57,8 +57,8 @@ int ssb_watchdog_register(struct ssb_bus *bus)
                                             bus->busnumber, &wdt,
                                             sizeof(wdt));
        if (IS_ERR(pdev)) {
-               ssb_dbg("can not register watchdog device, err: %li\n",
-                       PTR_ERR(pdev));
+               pr_debug("can not register watchdog device, err: %li\n",
+                        PTR_ERR(pdev));
                return PTR_ERR(pdev);
        }
 
@@ -77,7 +77,7 @@ u32 ssb_gpio_in(struct ssb_bus *bus, u32 mask)
        else if (ssb_extif_available(&bus->extif))
                res = ssb_extif_gpio_in(&bus->extif, mask);
        else
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        spin_unlock_irqrestore(&bus->gpio_lock, flags);
 
        return res;
@@ -95,7 +95,7 @@ u32 ssb_gpio_out(struct ssb_bus *bus, u32 mask, u32 value)
        else if (ssb_extif_available(&bus->extif))
                res = ssb_extif_gpio_out(&bus->extif, mask, value);
        else
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        spin_unlock_irqrestore(&bus->gpio_lock, flags);
 
        return res;
@@ -113,7 +113,7 @@ u32 ssb_gpio_outen(struct ssb_bus *bus, u32 mask, u32 value)
        else if (ssb_extif_available(&bus->extif))
                res = ssb_extif_gpio_outen(&bus->extif, mask, value);
        else
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        spin_unlock_irqrestore(&bus->gpio_lock, flags);
 
        return res;
@@ -145,7 +145,7 @@ u32 ssb_gpio_intmask(struct ssb_bus *bus, u32 mask, u32 value)
        else if (ssb_extif_available(&bus->extif))
                res = ssb_extif_gpio_intmask(&bus->extif, mask, value);
        else
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        spin_unlock_irqrestore(&bus->gpio_lock, flags);
 
        return res;
@@ -163,7 +163,7 @@ u32 ssb_gpio_polarity(struct ssb_bus *bus, u32 mask, u32 value)
        else if (ssb_extif_available(&bus->extif))
                res = ssb_extif_gpio_polarity(&bus->extif, mask, value);
        else
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        spin_unlock_irqrestore(&bus->gpio_lock, flags);
 
        return res;
index d62992dc08b233caff4e99c124a9acc29b1a5f26..3b438480515cb755bd400388402a90cf261b6aa5 100644 (file)
@@ -8,11 +8,11 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/bcm47xx_nvram.h>
 #include <linux/ssb/ssb.h>
 
-#include "ssb_private.h"
-
 static u8 ssb_host_soc_read8(struct ssb_device *dev, u16 offset)
 {
        struct ssb_bus *bus = dev->bus;
@@ -61,7 +61,7 @@ static void ssb_host_soc_block_read(struct ssb_device *dev, void *buffer,
        case sizeof(u16): {
                __le16 *buf = buffer;
 
-               SSB_WARN_ON(count & 1);
+               WARN_ON(count & 1);
                while (count) {
                        *buf = (__force __le16)__raw_readw(addr);
                        buf++;
@@ -72,7 +72,7 @@ static void ssb_host_soc_block_read(struct ssb_device *dev, void *buffer,
        case sizeof(u32): {
                __le32 *buf = buffer;
 
-               SSB_WARN_ON(count & 3);
+               WARN_ON(count & 3);
                while (count) {
                        *buf = (__force __le32)__raw_readl(addr);
                        buf++;
@@ -81,7 +81,7 @@ static void ssb_host_soc_block_read(struct ssb_device *dev, void *buffer,
                break;
        }
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
 }
 #endif /* CONFIG_SSB_BLOCKIO */
@@ -134,7 +134,7 @@ static void ssb_host_soc_block_write(struct ssb_device *dev, const void *buffer,
        case sizeof(u16): {
                const __le16 *buf = buffer;
 
-               SSB_WARN_ON(count & 1);
+               WARN_ON(count & 1);
                while (count) {
                        __raw_writew((__force u16)(*buf), addr);
                        buf++;
@@ -145,7 +145,7 @@ static void ssb_host_soc_block_write(struct ssb_device *dev, const void *buffer,
        case sizeof(u32): {
                const __le32 *buf = buffer;
 
-               SSB_WARN_ON(count & 3);
+               WARN_ON(count & 3);
                while (count) {
                        __raw_writel((__force u32)(*buf), addr);
                        buf++;
@@ -154,7 +154,7 @@ static void ssb_host_soc_block_write(struct ssb_device *dev, const void *buffer,
                break;
        }
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
 }
 #endif /* CONFIG_SSB_BLOCKIO */
index 116594413f66dc093b37e99b1b38c4cca3fe028d..0a26984acb2ca329fe90e01bac4d2cce95c35ebd 100644 (file)
@@ -209,7 +209,7 @@ int ssb_devices_freeze(struct ssb_bus *bus, struct ssb_freeze_context *ctx)
 
        memset(ctx, 0, sizeof(*ctx));
        ctx->bus = bus;
-       SSB_WARN_ON(bus->nr_devices > ARRAY_SIZE(ctx->device_frozen));
+       WARN_ON(bus->nr_devices > ARRAY_SIZE(ctx->device_frozen));
 
        for (i = 0; i < bus->nr_devices; i++) {
                sdev = ssb_device_get(&bus->devices[i]);
@@ -220,7 +220,7 @@ int ssb_devices_freeze(struct ssb_bus *bus, struct ssb_freeze_context *ctx)
                        continue;
                }
                sdrv = drv_to_ssb_drv(sdev->dev->driver);
-               if (SSB_WARN_ON(!sdrv->remove))
+               if (WARN_ON(!sdrv->remove))
                        continue;
                sdrv->remove(sdev);
                ctx->device_frozen[i] = 1;
@@ -248,15 +248,16 @@ int ssb_devices_thaw(struct ssb_freeze_context *ctx)
                        continue;
                sdev = &bus->devices[i];
 
-               if (SSB_WARN_ON(!sdev->dev || !sdev->dev->driver))
+               if (WARN_ON(!sdev->dev || !sdev->dev->driver))
                        continue;
                sdrv = drv_to_ssb_drv(sdev->dev->driver);
-               if (SSB_WARN_ON(!sdrv || !sdrv->probe))
+               if (WARN_ON(!sdrv || !sdrv->probe))
                        continue;
 
                err = sdrv->probe(sdev, &sdev->id);
                if (err) {
-                       ssb_err("Failed to thaw device %s\n",
+                       dev_err(sdev->dev,
+                               "Failed to thaw device %s\n",
                                dev_name(sdev->dev));
                        result = err;
                }
@@ -431,9 +432,9 @@ void ssb_bus_unregister(struct ssb_bus *bus)
 
        err = ssb_gpio_unregister(bus);
        if (err == -EBUSY)
-               ssb_dbg("Some GPIOs are still in use\n");
+               pr_debug("Some GPIOs are still in use\n");
        else if (err)
-               ssb_dbg("Can not unregister GPIO driver: %i\n", err);
+               pr_debug("Can not unregister GPIO driver: %i\n", err);
 
        ssb_buses_lock();
        ssb_devices_unregister(bus);
@@ -518,7 +519,7 @@ static int ssb_devices_register(struct ssb_bus *bus)
                sdev->dev = dev;
                err = device_register(dev);
                if (err) {
-                       ssb_err("Could not register %s\n", dev_name(dev));
+                       pr_err("Could not register %s\n", dev_name(dev));
                        /* Set dev to NULL to not unregister
                         * dev on error unwinding. */
                        sdev->dev = NULL;
@@ -576,9 +577,9 @@ static int ssb_attach_queued_buses(void)
 
                err = ssb_gpio_init(bus);
                if (err == -ENOTSUPP)
-                       ssb_dbg("GPIO driver not activated\n");
+                       pr_debug("GPIO driver not activated\n");
                else if (err)
-                       ssb_dbg("Error registering GPIO driver: %i\n", err);
+                       pr_debug("Error registering GPIO driver: %i\n", err);
 
                ssb_bus_may_powerdown(bus);
 
@@ -707,10 +708,12 @@ int ssb_bus_pcibus_register(struct ssb_bus *bus, struct pci_dev *host_pci)
 
        err = ssb_bus_register(bus, ssb_pci_get_invariants, 0);
        if (!err) {
-               ssb_info("Sonics Silicon Backplane found on PCI device %s\n",
+               dev_info(&host_pci->dev,
+                        "Sonics Silicon Backplane found on PCI device %s\n",
                         dev_name(&host_pci->dev));
        } else {
-               ssb_err("Failed to register PCI version of SSB with error %d\n",
+               dev_err(&host_pci->dev,
+                       "Failed to register PCI version of SSB with error %d\n",
                        err);
        }
 
@@ -731,7 +734,8 @@ int ssb_bus_pcmciabus_register(struct ssb_bus *bus,
 
        err = ssb_bus_register(bus, ssb_pcmcia_get_invariants, baseaddr);
        if (!err) {
-               ssb_info("Sonics Silicon Backplane found on PCMCIA device %s\n",
+               dev_info(&pcmcia_dev->dev,
+                        "Sonics Silicon Backplane found on PCMCIA device %s\n",
                         pcmcia_dev->devname);
        }
 
@@ -752,7 +756,8 @@ int ssb_bus_sdiobus_register(struct ssb_bus *bus, struct sdio_func *func,
 
        err = ssb_bus_register(bus, ssb_sdio_get_invariants, ~0);
        if (!err) {
-               ssb_info("Sonics Silicon Backplane found on SDIO device %s\n",
+               dev_info(&func->dev,
+                        "Sonics Silicon Backplane found on SDIO device %s\n",
                         sdio_func_id(func));
        }
 
@@ -771,8 +776,8 @@ int ssb_bus_host_soc_register(struct ssb_bus *bus, unsigned long baseaddr)
 
        err = ssb_bus_register(bus, ssb_host_soc_get_invariants, baseaddr);
        if (!err) {
-               ssb_info("Sonics Silicon Backplane found at address 0x%08lX\n",
-                        baseaddr);
+               pr_info("Sonics Silicon Backplane found at address 0x%08lX\n",
+                       baseaddr);
        }
 
        return err;
@@ -856,13 +861,13 @@ u32 ssb_calc_clock_rate(u32 plltype, u32 n, u32 m)
        case SSB_PLLTYPE_2: /* 48Mhz, 4 dividers */
                n1 += SSB_CHIPCO_CLK_T2_BIAS;
                n2 += SSB_CHIPCO_CLK_T2_BIAS;
-               SSB_WARN_ON(!((n1 >= 2) && (n1 <= 7)));
-               SSB_WARN_ON(!((n2 >= 5) && (n2 <= 23)));
+               WARN_ON(!((n1 >= 2) && (n1 <= 7)));
+               WARN_ON(!((n2 >= 5) && (n2 <= 23)));
                break;
        case SSB_PLLTYPE_5: /* 25Mhz, 4 dividers */
                return 100000000;
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
 
        switch (plltype) {
@@ -911,9 +916,9 @@ u32 ssb_calc_clock_rate(u32 plltype, u32 n, u32 m)
                m1 += SSB_CHIPCO_CLK_T2_BIAS;
                m2 += SSB_CHIPCO_CLK_T2M2_BIAS;
                m3 += SSB_CHIPCO_CLK_T2_BIAS;
-               SSB_WARN_ON(!((m1 >= 2) && (m1 <= 7)));
-               SSB_WARN_ON(!((m2 >= 3) && (m2 <= 10)));
-               SSB_WARN_ON(!((m3 >= 2) && (m3 <= 7)));
+               WARN_ON(!((m1 >= 2) && (m1 <= 7)));
+               WARN_ON(!((m2 >= 3) && (m2 <= 10)));
+               WARN_ON(!((m3 >= 2) && (m3 <= 7)));
 
                if (!(mc & SSB_CHIPCO_CLK_T2MC_M1BYP))
                        clock /= m1;
@@ -923,7 +928,7 @@ u32 ssb_calc_clock_rate(u32 plltype, u32 n, u32 m)
                        clock /= m3;
                return clock;
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
        return 0;
 }
@@ -1057,9 +1062,9 @@ static int ssb_wait_bits(struct ssb_device *dev, u16 reg, u32 bitmask,
                }
                udelay(10);
        }
-       printk(KERN_ERR PFX "Timeout waiting for bitmask %08X on "
-                           "register %04X to %s.\n",
-              bitmask, reg, (set ? "set" : "clear"));
+       dev_err(dev->dev,
+               "Timeout waiting for bitmask %08X on register %04X to %s\n",
+               bitmask, reg, set ? "set" : "clear");
 
        return -ETIMEDOUT;
 }
@@ -1164,12 +1169,10 @@ int ssb_bus_may_powerdown(struct ssb_bus *bus)
        if (err)
                goto error;
 out:
-#ifdef CONFIG_SSB_DEBUG
        bus->powered_up = 0;
-#endif
        return err;
 error:
-       ssb_err("Bus powerdown failed\n");
+       pr_err("Bus powerdown failed\n");
        goto out;
 }
 EXPORT_SYMBOL(ssb_bus_may_powerdown);
@@ -1183,16 +1186,14 @@ int ssb_bus_powerup(struct ssb_bus *bus, bool dynamic_pctl)
        if (err)
                goto error;
 
-#ifdef CONFIG_SSB_DEBUG
        bus->powered_up = 1;
-#endif
 
        mode = dynamic_pctl ? SSB_CLKMODE_DYNAMIC : SSB_CLKMODE_FAST;
        ssb_chipco_set_clockmode(&bus->chipco, mode);
 
        return 0;
 error:
-       ssb_err("Bus powerup failed\n");
+       pr_err("Bus powerup failed\n");
        return err;
 }
 EXPORT_SYMBOL(ssb_bus_powerup);
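
Dropping the CONFIG_SSB_DEBUG guards here means bus->powered_up is maintained on every power transition, not just in debug builds; that is what the now-unconditional ssb_pci_assert_buspower() in the pci.c hunks further down relies on. A small sketch of that check as it reads after the patch; example_assert_buspower() simply mirrors the later hunk.

/*
 * Sketch only: the MMIO power-state sanity check, now always compiled in.
 */
static int example_assert_buspower(struct ssb_bus *bus)
{
        if (likely(bus->powered_up))
                return 0;

        pr_err("FATAL ERROR: Bus powered down while accessing PCI MMIO space\n");
        return -ENODEV;
}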
@@ -1237,15 +1238,15 @@ u32 ssb_admatch_base(u32 adm)
                base = (adm & SSB_ADM_BASE0);
                break;
        case SSB_ADM_TYPE1:
-               SSB_WARN_ON(adm & SSB_ADM_NEG); /* unsupported */
+               WARN_ON(adm & SSB_ADM_NEG); /* unsupported */
                base = (adm & SSB_ADM_BASE1);
                break;
        case SSB_ADM_TYPE2:
-               SSB_WARN_ON(adm & SSB_ADM_NEG); /* unsupported */
+               WARN_ON(adm & SSB_ADM_NEG); /* unsupported */
                base = (adm & SSB_ADM_BASE2);
                break;
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
 
        return base;
@@ -1261,15 +1262,15 @@ u32 ssb_admatch_size(u32 adm)
                size = ((adm & SSB_ADM_SZ0) >> SSB_ADM_SZ0_SHIFT);
                break;
        case SSB_ADM_TYPE1:
-               SSB_WARN_ON(adm & SSB_ADM_NEG); /* unsupported */
+               WARN_ON(adm & SSB_ADM_NEG); /* unsupported */
                size = ((adm & SSB_ADM_SZ1) >> SSB_ADM_SZ1_SHIFT);
                break;
        case SSB_ADM_TYPE2:
-               SSB_WARN_ON(adm & SSB_ADM_NEG); /* unsupported */
+               WARN_ON(adm & SSB_ADM_NEG); /* unsupported */
                size = ((adm & SSB_ADM_SZ2) >> SSB_ADM_SZ2_SHIFT);
                break;
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
        size = (1 << (size + 1));
 
@@ -1300,19 +1301,19 @@ static int __init ssb_modinit(void)
 
        err = b43_pci_ssb_bridge_init();
        if (err) {
-               ssb_err("Broadcom 43xx PCI-SSB-bridge initialization failed\n");
+               pr_err("Broadcom 43xx PCI-SSB-bridge initialization failed\n");
                /* don't fail SSB init because of this */
                err = 0;
        }
        err = ssb_host_pcmcia_init();
        if (err) {
-               ssb_err("PCMCIA host initialization failed\n");
+               pr_err("PCMCIA host initialization failed\n");
                /* don't fail SSB init because of this */
                err = 0;
        }
        err = ssb_gige_init();
        if (err) {
-               ssb_err("SSB Broadcom Gigabit Ethernet driver initialization failed\n");
+               pr_err("SSB Broadcom Gigabit Ethernet driver initialization failed\n");
                /* don't fail SSB init because of this */
                err = 0;
        }
index 77b551da57288c11d21d6ff40b65b1d8b01ed887..84807a9b4b13abbc58e5449b486ee1e3a6d58892 100644 (file)
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/ssb/ssb.h>
 #include <linux/ssb/ssb_regs.h>
 #include <linux/slab.h>
 #include <linux/pci.h>
 #include <linux/delay.h>
 
-#include "ssb_private.h"
-
 
 /* Define the following to 1 to enable a printk on each coreswitch. */
 #define SSB_VERBOSE_PCICORESWITCH_DEBUG                0
@@ -56,7 +56,7 @@ int ssb_pci_switch_coreidx(struct ssb_bus *bus, u8 coreidx)
        }
        return 0;
 error:
-       ssb_err("Failed to switch to core %u\n", coreidx);
+       pr_err("Failed to switch to core %u\n", coreidx);
        return -ENODEV;
 }
 
@@ -67,9 +67,8 @@ int ssb_pci_switch_core(struct ssb_bus *bus,
        unsigned long flags;
 
 #if SSB_VERBOSE_PCICORESWITCH_DEBUG
-       ssb_info("Switching to %s core, index %d\n",
-                ssb_core_name(dev->id.coreid),
-                dev->core_index);
+       pr_info("Switching to %s core, index %d\n",
+               ssb_core_name(dev->id.coreid), dev->core_index);
 #endif
 
        spin_lock_irqsave(&bus->bar_lock, flags);
@@ -161,7 +160,7 @@ int ssb_pci_xtal(struct ssb_bus *bus, u32 what, int turn_on)
        return err;
 
 err_pci:
-       printk(KERN_ERR PFX "Error: ssb_pci_xtal() could not access PCI config space!\n");
+       pr_err("Error: ssb_pci_xtal() could not access PCI config space!\n");
        err = -EBUSY;
        goto out;
 }
@@ -286,7 +285,7 @@ static int sprom_do_write(struct ssb_bus *bus, const u16 *sprom)
        u32 spromctl;
        u16 size = bus->sprom_size;
 
-       ssb_notice("Writing SPROM. Do NOT turn off the power! Please stand by...\n");
+       pr_notice("Writing SPROM. Do NOT turn off the power! Please stand by...\n");
        err = pci_read_config_dword(pdev, SSB_SPROMCTL, &spromctl);
        if (err)
                goto err_ctlreg;
@@ -294,17 +293,17 @@ static int sprom_do_write(struct ssb_bus *bus, const u16 *sprom)
        err = pci_write_config_dword(pdev, SSB_SPROMCTL, spromctl);
        if (err)
                goto err_ctlreg;
-       ssb_notice("[ 0%%");
+       pr_notice("[ 0%%");
        msleep(500);
        for (i = 0; i < size; i++) {
                if (i == size / 4)
-                       ssb_cont("25%%");
+                       pr_cont("25%%");
                else if (i == size / 2)
-                       ssb_cont("50%%");
+                       pr_cont("50%%");
                else if (i == (size * 3) / 4)
-                       ssb_cont("75%%");
+                       pr_cont("75%%");
                else if (i % 2)
-                       ssb_cont(".");
+                       pr_cont(".");
                writew(sprom[i], bus->mmio + bus->sprom_offset + (i * 2));
                mmiowb();
                msleep(20);
@@ -317,12 +316,12 @@ static int sprom_do_write(struct ssb_bus *bus, const u16 *sprom)
        if (err)
                goto err_ctlreg;
        msleep(500);
-       ssb_cont("100%% ]\n");
-       ssb_notice("SPROM written\n");
+       pr_cont("100%% ]\n");
+       pr_notice("SPROM written\n");
 
        return 0;
 err_ctlreg:
-       ssb_err("Could not access SPROM control register.\n");
+       pr_err("Could not access SPROM control register.\n");
        return err;
 }
 
@@ -816,7 +815,7 @@ static int sprom_extract(struct ssb_bus *bus, struct ssb_sprom *out,
        memset(out, 0, sizeof(*out));
 
        out->revision = in[size - 1] & 0x00FF;
-       ssb_dbg("SPROM revision %d detected\n", out->revision);
+       pr_debug("SPROM revision %d detected\n", out->revision);
        memset(out->et0mac, 0xFF, 6);           /* preset et0 and et1 mac */
        memset(out->et1mac, 0xFF, 6);
 
@@ -825,7 +824,7 @@ static int sprom_extract(struct ssb_bus *bus, struct ssb_sprom *out,
                 * number stored in the SPROM.
                 * Always extract r1. */
                out->revision = 1;
-               ssb_dbg("SPROM treated as revision %d\n", out->revision);
+               pr_debug("SPROM treated as revision %d\n", out->revision);
        }
 
        switch (out->revision) {
@@ -842,8 +841,8 @@ static int sprom_extract(struct ssb_bus *bus, struct ssb_sprom *out,
                sprom_extract_r8(out, in);
                break;
        default:
-               ssb_warn("Unsupported SPROM revision %d detected. Will extract v1\n",
-                        out->revision);
+               pr_warn("Unsupported SPROM revision %d detected. Will extract v1\n",
+                       out->revision);
                out->revision = 1;
                sprom_extract_r123(out, in);
        }
@@ -863,7 +862,7 @@ static int ssb_pci_sprom_get(struct ssb_bus *bus,
        u16 *buf;
 
        if (!ssb_is_sprom_available(bus)) {
-               ssb_err("No SPROM available!\n");
+               pr_err("No SPROM available!\n");
                return -ENODEV;
        }
        if (bus->chipco.dev) {  /* can be unavailable! */
@@ -882,7 +881,7 @@ static int ssb_pci_sprom_get(struct ssb_bus *bus,
        } else {
                bus->sprom_offset = SSB_SPROM_BASE1;
        }
-       ssb_dbg("SPROM offset is 0x%x\n", bus->sprom_offset);
+       pr_debug("SPROM offset is 0x%x\n", bus->sprom_offset);
 
        buf = kcalloc(SSB_SPROMSIZE_WORDS_R123, sizeof(u16), GFP_KERNEL);
        if (!buf)
@@ -907,16 +906,16 @@ static int ssb_pci_sprom_get(struct ssb_bus *bus,
                         * available for this device in some other storage */
                        err = ssb_fill_sprom_with_fallback(bus, sprom);
                        if (err) {
-                               ssb_warn("WARNING: Using fallback SPROM failed (err %d)\n",
-                                        err);
+                               pr_warn("WARNING: Using fallback SPROM failed (err %d)\n",
+                                       err);
                                goto out_free;
                        } else {
-                               ssb_dbg("Using SPROM revision %d provided by platform\n",
-                                       sprom->revision);
+                               pr_debug("Using SPROM revision %d provided by platform\n",
+                                        sprom->revision);
                                err = 0;
                                goto out_free;
                        }
-                       ssb_warn("WARNING: Invalid SPROM CRC (corrupt SPROM)\n");
+                       pr_warn("WARNING: Invalid SPROM CRC (corrupt SPROM)\n");
                }
        }
        err = sprom_extract(bus, sprom, buf, bus->sprom_size);
@@ -947,14 +946,12 @@ int ssb_pci_get_invariants(struct ssb_bus *bus,
        return err;
 }
 
-#ifdef CONFIG_SSB_DEBUG
 static int ssb_pci_assert_buspower(struct ssb_bus *bus)
 {
        if (likely(bus->powered_up))
                return 0;
 
-       printk(KERN_ERR PFX "FATAL ERROR: Bus powered down "
-              "while accessing PCI MMIO space\n");
+       pr_err("FATAL ERROR: Bus powered down while accessing PCI MMIO space\n");
        if (bus->power_warn_count <= 10) {
                bus->power_warn_count++;
                dump_stack();
@@ -962,12 +959,6 @@ static int ssb_pci_assert_buspower(struct ssb_bus *bus)
 
        return -ENODEV;
 }
-#else /* DEBUG */
-static inline int ssb_pci_assert_buspower(struct ssb_bus *bus)
-{
-       return 0;
-}
-#endif /* DEBUG */
 
 static u8 ssb_pci_read8(struct ssb_device *dev, u16 offset)
 {
@@ -1026,15 +1017,15 @@ static void ssb_pci_block_read(struct ssb_device *dev, void *buffer,
                ioread8_rep(addr, buffer, count);
                break;
        case sizeof(u16):
-               SSB_WARN_ON(count & 1);
+               WARN_ON(count & 1);
                ioread16_rep(addr, buffer, count >> 1);
                break;
        case sizeof(u32):
-               SSB_WARN_ON(count & 3);
+               WARN_ON(count & 3);
                ioread32_rep(addr, buffer, count >> 2);
                break;
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
 
        return;
@@ -1100,15 +1091,15 @@ static void ssb_pci_block_write(struct ssb_device *dev, const void *buffer,
                iowrite8_rep(addr, buffer, count);
                break;
        case sizeof(u16):
-               SSB_WARN_ON(count & 1);
+               WARN_ON(count & 1);
                iowrite16_rep(addr, buffer, count >> 1);
                break;
        case sizeof(u32):
-               SSB_WARN_ON(count & 3);
+               WARN_ON(count & 3);
                iowrite32_rep(addr, buffer, count >> 2);
                break;
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
 }
 #endif /* CONFIG_SSB_BLOCKIO */
index f03422bbf087f6cf7d0fbfae41db15d0d334cfaa..567013f8a8be9bde6b4fc1f68d8a53171d91f939 100644 (file)
@@ -8,6 +8,8 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/ssb/ssb.h>
 #include <linux/delay.h>
 #include <linux/io.h>
@@ -18,8 +20,6 @@
 #include <pcmcia/ds.h>
 #include <pcmcia/cisreg.h>
 
-#include "ssb_private.h"
-
 
 /* Define the following to 1 to enable a printk on each coreswitch. */
 #define SSB_VERBOSE_PCMCIACORESWITCH_DEBUG             0
@@ -143,7 +143,7 @@ int ssb_pcmcia_switch_coreidx(struct ssb_bus *bus,
 
        return 0;
 error:
-       ssb_err("Failed to switch to core %u\n", coreidx);
+       pr_err("Failed to switch to core %u\n", coreidx);
        return err;
 }
 
@@ -152,9 +152,8 @@ static int ssb_pcmcia_switch_core(struct ssb_bus *bus, struct ssb_device *dev)
        int err;
 
 #if SSB_VERBOSE_PCMCIACORESWITCH_DEBUG
-       ssb_info("Switching to %s core, index %d\n",
-                ssb_core_name(dev->id.coreid),
-                dev->core_index);
+       pr_info("Switching to %s core, index %d\n",
+               ssb_core_name(dev->id.coreid), dev->core_index);
 #endif
 
        err = ssb_pcmcia_switch_coreidx(bus, dev->core_index);
@@ -170,7 +169,7 @@ int ssb_pcmcia_switch_segment(struct ssb_bus *bus, u8 seg)
        int err;
        u8 val;
 
-       SSB_WARN_ON((seg != 0) && (seg != 1));
+       WARN_ON((seg != 0) && (seg != 1));
        while (1) {
                err = ssb_pcmcia_cfg_write(bus, SSB_PCMCIA_MEMSEG, seg);
                if (err)
@@ -190,7 +189,7 @@ int ssb_pcmcia_switch_segment(struct ssb_bus *bus, u8 seg)
 
        return 0;
 error:
-       ssb_err("Failed to switch pcmcia segment\n");
+       pr_err("Failed to switch pcmcia segment\n");
        return err;
 }
 
@@ -300,7 +299,7 @@ static void ssb_pcmcia_block_read(struct ssb_device *dev, void *buffer,
        case sizeof(u16): {
                __le16 *buf = buffer;
 
-               SSB_WARN_ON(count & 1);
+               WARN_ON(count & 1);
                while (count) {
                        *buf = (__force __le16)__raw_readw(addr);
                        buf++;
@@ -311,7 +310,7 @@ static void ssb_pcmcia_block_read(struct ssb_device *dev, void *buffer,
        case sizeof(u32): {
                __le16 *buf = buffer;
 
-               SSB_WARN_ON(count & 3);
+               WARN_ON(count & 3);
                while (count) {
                        *buf = (__force __le16)__raw_readw(addr);
                        buf++;
@@ -322,7 +321,7 @@ static void ssb_pcmcia_block_read(struct ssb_device *dev, void *buffer,
                break;
        }
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
 unlock:
        spin_unlock_irqrestore(&bus->bar_lock, flags);
@@ -400,7 +399,7 @@ static void ssb_pcmcia_block_write(struct ssb_device *dev, const void *buffer,
        case sizeof(u16): {
                const __le16 *buf = buffer;
 
-               SSB_WARN_ON(count & 1);
+               WARN_ON(count & 1);
                while (count) {
                        __raw_writew((__force u16)(*buf), addr);
                        buf++;
@@ -411,7 +410,7 @@ static void ssb_pcmcia_block_write(struct ssb_device *dev, const void *buffer,
        case sizeof(u32): {
                const __le16 *buf = buffer;
 
-               SSB_WARN_ON(count & 3);
+               WARN_ON(count & 3);
                while (count) {
                        __raw_writew((__force u16)(*buf), addr);
                        buf++;
@@ -422,7 +421,7 @@ static void ssb_pcmcia_block_write(struct ssb_device *dev, const void *buffer,
                break;
        }
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
 unlock:
        mmiowb();
@@ -547,39 +546,39 @@ static int ssb_pcmcia_sprom_write_all(struct ssb_bus *bus, const u16 *sprom)
        bool failed = 0;
        size_t size = SSB_PCMCIA_SPROM_SIZE;
 
-       ssb_notice("Writing SPROM. Do NOT turn off the power! Please stand by...\n");
+       pr_notice("Writing SPROM. Do NOT turn off the power! Please stand by...\n");
        err = ssb_pcmcia_sprom_command(bus, SSB_PCMCIA_SPROMCTL_WRITEEN);
        if (err) {
-               ssb_notice("Could not enable SPROM write access\n");
+               pr_notice("Could not enable SPROM write access\n");
                return -EBUSY;
        }
-       ssb_notice("[ 0%%");
+       pr_notice("[ 0%%");
        msleep(500);
        for (i = 0; i < size; i++) {
                if (i == size / 4)
-                       ssb_cont("25%%");
+                       pr_cont("25%%");
                else if (i == size / 2)
-                       ssb_cont("50%%");
+                       pr_cont("50%%");
                else if (i == (size * 3) / 4)
-                       ssb_cont("75%%");
+                       pr_cont("75%%");
                else if (i % 2)
-                       ssb_cont(".");
+                       pr_cont(".");
                err = ssb_pcmcia_sprom_write(bus, i, sprom[i]);
                if (err) {
-                       ssb_notice("Failed to write to SPROM\n");
+                       pr_notice("Failed to write to SPROM\n");
                        failed = 1;
                        break;
                }
        }
        err = ssb_pcmcia_sprom_command(bus, SSB_PCMCIA_SPROMCTL_WRITEDIS);
        if (err) {
-               ssb_notice("Could not disable SPROM write access\n");
+               pr_notice("Could not disable SPROM write access\n");
                failed = 1;
        }
        msleep(500);
        if (!failed) {
-               ssb_cont("100%% ]\n");
-               ssb_notice("SPROM written\n");
+               pr_cont("100%% ]\n");
+               pr_notice("SPROM written\n");
        }
 
        return failed ? -EBUSY : 0;
@@ -693,9 +692,8 @@ static int ssb_pcmcia_do_get_invariants(struct pcmcia_device *p_dev,
        return -ENOSPC; /* continue with next entry */
 
 error:
-       ssb_err(
-                  "PCMCIA: Failed to fetch device invariants: %s\n",
-                  error_description);
+       pr_err("PCMCIA: Failed to fetch device invariants: %s\n",
+              error_description);
        return -ENODEV;
 }
 
@@ -715,8 +713,7 @@ int ssb_pcmcia_get_invariants(struct ssb_bus *bus,
        res = pcmcia_loop_tuple(bus->host_pcmcia, CISTPL_FUNCE,
                                ssb_pcmcia_get_mac, sprom);
        if (res != 0) {
-               ssb_err(
-                       "PCMCIA: Failed to fetch MAC address\n");
+               pr_err("PCMCIA: Failed to fetch MAC address\n");
                return -ENODEV;
        }
 
@@ -726,8 +723,7 @@ int ssb_pcmcia_get_invariants(struct ssb_bus *bus,
        if ((res == 0) || (res == -ENOSPC))
                return 0;
 
-       ssb_err(
-                       "PCMCIA: Failed to fetch device invariants\n");
+       pr_err("PCMCIA: Failed to fetch device invariants\n");
        return -ENODEV;
 }
 
@@ -836,6 +832,6 @@ int ssb_pcmcia_init(struct ssb_bus *bus)
 
        return 0;
 error:
-       ssb_err("Failed to initialize PCMCIA host device\n");
+       pr_err("Failed to initialize PCMCIA host device\n");
        return err;
 }
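
The SSB_WARN_ON() wrapper used throughout this driver compiled away when CONFIG_SSB_DEBUG was unset; the hunks above switch to the plain WARN_ON(), which is always active and also returns the tested condition. A small illustrative sketch, not from the patch:

#include <linux/bug.h>
#include <linux/errno.h>

/* Illustrative only: WARN_ON() evaluates its argument, prints a backtrace
 * when it is true, and returns the condition so callers can bail out too.
 */
static int example_check_count(size_t count)
{
	if (WARN_ON(count & 1))
		return -EINVAL;	/* odd byte count is a caller bug */
	return 0;
}
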
index b9429df583eb7f744a1e3a9c68207294ccdb8764..6ceee98ed6ff2df6fad51458c765bbd76ab51918 100644 (file)
@@ -12,6 +12,8 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
+#include "ssb_private.h"
+
 #include <linux/ssb/ssb.h>
 #include <linux/ssb/ssb_regs.h>
 #include <linux/pci.h>
@@ -20,8 +22,6 @@
 #include <pcmcia/cistpl.h>
 #include <pcmcia/ds.h>
 
-#include "ssb_private.h"
-
 
 const char *ssb_core_name(u16 coreid)
 {
@@ -125,7 +125,7 @@ static u16 pcidev_to_chipid(struct pci_dev *pci_dev)
                chipid_fallback = 0x4401;
                break;
        default:
-               ssb_err("PCI-ID not in fallback list\n");
+               dev_err(&pci_dev->dev, "PCI-ID not in fallback list\n");
        }
 
        return chipid_fallback;
@@ -151,7 +151,7 @@ static u8 chipid_to_nrcores(u16 chipid)
        case 0x4704:
                return 9;
        default:
-               ssb_err("CHIPID not in nrcores fallback list\n");
+               pr_err("CHIPID not in nrcores fallback list\n");
        }
 
        return 1;
@@ -210,7 +210,7 @@ void ssb_iounmap(struct ssb_bus *bus)
 #ifdef CONFIG_SSB_PCIHOST
                pci_iounmap(bus->host_pci, bus->mmio);
 #else
-               SSB_BUG_ON(1); /* Can't reach this code. */
+               WARN_ON(1); /* Can't reach this code. */
 #endif
                break;
        case SSB_BUSTYPE_SDIO:
@@ -236,7 +236,7 @@ static void __iomem *ssb_ioremap(struct ssb_bus *bus,
 #ifdef CONFIG_SSB_PCIHOST
                mmio = pci_iomap(bus->host_pci, 0, ~0UL);
 #else
-               SSB_BUG_ON(1); /* Can't reach this code. */
+               WARN_ON(1); /* Can't reach this code. */
 #endif
                break;
        case SSB_BUSTYPE_SDIO:
@@ -318,13 +318,13 @@ int ssb_bus_scan(struct ssb_bus *bus,
                        bus->chip_package = 0;
                }
        }
-       ssb_info("Found chip with id 0x%04X, rev 0x%02X and package 0x%02X\n",
-                bus->chip_id, bus->chip_rev, bus->chip_package);
+       pr_info("Found chip with id 0x%04X, rev 0x%02X and package 0x%02X\n",
+               bus->chip_id, bus->chip_rev, bus->chip_package);
        if (!bus->nr_devices)
                bus->nr_devices = chipid_to_nrcores(bus->chip_id);
        if (bus->nr_devices > ARRAY_SIZE(bus->devices)) {
-               ssb_err("More than %d ssb cores found (%d)\n",
-                       SSB_MAX_NR_CORES, bus->nr_devices);
+               pr_err("More than %d ssb cores found (%d)\n",
+                      SSB_MAX_NR_CORES, bus->nr_devices);
                goto err_unmap;
        }
        if (bus->bustype == SSB_BUSTYPE_SSB) {
@@ -355,18 +355,16 @@ int ssb_bus_scan(struct ssb_bus *bus,
                dev->bus = bus;
                dev->ops = bus->ops;
 
-               printk(KERN_DEBUG PFX
-                           "Core %d found: %s "
-                           "(cc 0x%03X, rev 0x%02X, vendor 0x%04X)\n",
-                           i, ssb_core_name(dev->id.coreid),
-                           dev->id.coreid, dev->id.revision, dev->id.vendor);
+               pr_debug("Core %d found: %s (cc 0x%03X, rev 0x%02X, vendor 0x%04X)\n",
+                        i, ssb_core_name(dev->id.coreid),
+                        dev->id.coreid, dev->id.revision, dev->id.vendor);
 
                switch (dev->id.coreid) {
                case SSB_DEV_80211:
                        nr_80211_cores++;
                        if (nr_80211_cores > 1) {
                                if (!we_support_multiple_80211_cores(bus)) {
-                                       ssb_dbg("Ignoring additional 802.11 core\n");
+                                       pr_debug("Ignoring additional 802.11 core\n");
                                        continue;
                                }
                        }
@@ -374,7 +372,7 @@ int ssb_bus_scan(struct ssb_bus *bus,
                case SSB_DEV_EXTIF:
 #ifdef CONFIG_SSB_DRIVER_EXTIF
                        if (bus->extif.dev) {
-                               ssb_warn("WARNING: Multiple EXTIFs found\n");
+                               pr_warn("WARNING: Multiple EXTIFs found\n");
                                break;
                        }
                        bus->extif.dev = dev;
@@ -382,7 +380,7 @@ int ssb_bus_scan(struct ssb_bus *bus,
                        break;
                case SSB_DEV_CHIPCOMMON:
                        if (bus->chipco.dev) {
-                               ssb_warn("WARNING: Multiple ChipCommon found\n");
+                               pr_warn("WARNING: Multiple ChipCommon found\n");
                                break;
                        }
                        bus->chipco.dev = dev;
@@ -391,7 +389,7 @@ int ssb_bus_scan(struct ssb_bus *bus,
                case SSB_DEV_MIPS_3302:
 #ifdef CONFIG_SSB_DRIVER_MIPS
                        if (bus->mipscore.dev) {
-                               ssb_warn("WARNING: Multiple MIPS cores found\n");
+                               pr_warn("WARNING: Multiple MIPS cores found\n");
                                break;
                        }
                        bus->mipscore.dev = dev;
@@ -412,7 +410,7 @@ int ssb_bus_scan(struct ssb_bus *bus,
                                }
                        }
                        if (bus->pcicore.dev) {
-                               ssb_warn("WARNING: Multiple PCI(E) cores found\n");
+                               pr_warn("WARNING: Multiple PCI(E) cores found\n");
                                break;
                        }
                        bus->pcicore.dev = dev;
index 2278e43614bd13c3c0c34afaa81f9a20f944f586..7fe0afb42234ff2ecd8db6b876fd5d7d72853064 100644 (file)
  *
  */
 
+#include "ssb_private.h"
+
 #include <linux/ssb/ssb.h>
 #include <linux/delay.h>
 #include <linux/io.h>
 #include <linux/etherdevice.h>
 #include <linux/mmc/sdio_func.h>
 
-#include "ssb_private.h"
-
 /* Define the following to 1 to enable a printk on each coreswitch. */
 #define SSB_VERBOSE_SDIOCORESWITCH_DEBUG               0
 
@@ -316,18 +316,18 @@ static void ssb_sdio_block_read(struct ssb_device *dev, void *buffer,
                break;
        }
        case sizeof(u16): {
-               SSB_WARN_ON(count & 1);
+               WARN_ON(count & 1);
                error = sdio_readsb(bus->host_sdio, buffer, offset, count);
                break;
        }
        case sizeof(u32): {
-               SSB_WARN_ON(count & 3);
+               WARN_ON(count & 3);
                offset |= SBSDIO_SB_ACCESS_2_4B_FLAG;   /* 32 bit data access */
                error = sdio_readsb(bus->host_sdio, buffer, offset, count);
                break;
        }
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
        if (!error)
                goto out;
@@ -423,18 +423,18 @@ static void ssb_sdio_block_write(struct ssb_device *dev, const void *buffer,
                                     (void *)buffer, count);
                break;
        case sizeof(u16):
-               SSB_WARN_ON(count & 1);
+               WARN_ON(count & 1);
                error = sdio_writesb(bus->host_sdio, offset,
                                     (void *)buffer, count);
                break;
        case sizeof(u32):
-               SSB_WARN_ON(count & 3);
+               WARN_ON(count & 3);
                offset |= SBSDIO_SB_ACCESS_2_4B_FLAG;   /* 32 bit data access */
                error = sdio_writesb(bus->host_sdio, offset,
                                     (void *)buffer, count);
                break;
        default:
-               SSB_WARN_ON(1);
+               WARN_ON(1);
        }
        if (!error)
                goto out;
index e753fbe302a750ffa3aa13d3f5b402ad9240a06d..4f028a80d6c4b30db3767f0ec4349d6a9d5ba254 100644 (file)
@@ -127,13 +127,13 @@ ssize_t ssb_attr_sprom_store(struct ssb_bus *bus,
                goto out_kfree;
        err = ssb_devices_freeze(bus, &freeze);
        if (err) {
-               ssb_err("SPROM write: Could not freeze all devices\n");
+               pr_err("SPROM write: Could not freeze all devices\n");
                goto out_unlock;
        }
        res = sprom_write(bus, sprom);
        err = ssb_devices_thaw(&freeze);
        if (err)
-               ssb_err("SPROM write: Could not thaw all devices\n");
+               pr_err("SPROM write: Could not thaw all devices\n");
 out_unlock:
        mutex_unlock(&bus->sprom_mutex);
 out_kfree:
index ef9ac8efcab492ab811a9e0c7e2c13b3ef8b0098..5f31bdfbe77f7748da236151f4197da0e10be5d7 100644 (file)
@@ -2,47 +2,14 @@
 #ifndef LINUX_SSB_PRIVATE_H_
 #define LINUX_SSB_PRIVATE_H_
 
+#define PFX            "ssb: "
+#define pr_fmt(fmt)    PFX fmt
+
 #include <linux/ssb/ssb.h>
 #include <linux/types.h>
 #include <linux/bcm47xx_wdt.h>
 
 
-#define PFX    "ssb: "
-
-#ifdef CONFIG_SSB_SILENT
-# define ssb_printk(fmt, ...)                                  \
-       do { if (0) printk(fmt, ##__VA_ARGS__); } while (0)
-#else
-# define ssb_printk(fmt, ...)                                  \
-       printk(fmt, ##__VA_ARGS__)
-#endif /* CONFIG_SSB_SILENT */
-
-#define ssb_emerg(fmt, ...)    ssb_printk(KERN_EMERG PFX fmt, ##__VA_ARGS__)
-#define ssb_err(fmt, ...)      ssb_printk(KERN_ERR PFX fmt, ##__VA_ARGS__)
-#define ssb_warn(fmt, ...)     ssb_printk(KERN_WARNING PFX fmt, ##__VA_ARGS__)
-#define ssb_notice(fmt, ...)   ssb_printk(KERN_NOTICE PFX fmt, ##__VA_ARGS__)
-#define ssb_info(fmt, ...)     ssb_printk(KERN_INFO PFX fmt, ##__VA_ARGS__)
-#define ssb_cont(fmt, ...)     ssb_printk(KERN_CONT fmt, ##__VA_ARGS__)
-
-/* dprintk: Debugging printk; vanishes for non-debug compilation */
-#ifdef CONFIG_SSB_DEBUG
-# define ssb_dbg(fmt, ...)                                     \
-       ssb_printk(KERN_DEBUG PFX fmt, ##__VA_ARGS__)
-#else
-# define ssb_dbg(fmt, ...)                                     \
-       do { if (0) printk(KERN_DEBUG PFX fmt, ##__VA_ARGS__); } while (0)
-#endif
-
-#ifdef CONFIG_SSB_DEBUG
-# define SSB_WARN_ON(x)                WARN_ON(x)
-# define SSB_BUG_ON(x)         BUG_ON(x)
-#else
-static inline int __ssb_do_nothing(int x) { return x; }
-# define SSB_WARN_ON(x)                __ssb_do_nothing(unlikely(!!(x)))
-# define SSB_BUG_ON(x)         __ssb_do_nothing(unlikely(!!(x)))
-#endif
-
-
 /* pci.c */
 #ifdef CONFIG_SSB_PCIHOST
 extern int ssb_pci_switch_core(struct ssb_bus *bus,
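
With the ssb_* printk wrappers removed above, the "ssb: " prefix is now supplied by defining pr_fmt() before the first include, which is why ssb_private.h moves to the top of each source file in this series. A minimal sketch of the mechanism (illustrative file, not part of the patch):

/* pr_fmt() must be visible before <linux/printk.h> is (indirectly) pulled
 * in, so the define has to come before any other header.
 */
#define pr_fmt(fmt) "ssb: " fmt

#include <linux/printk.h>

static void example_log(void)
{
	/* Expands to printk(KERN_ERR "ssb: Failed to switch to core %u\n", 1) */
	pr_err("Failed to switch to core %u\n", 1);
}
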
index 705aebd74e560cd8d125cee28537e9734dabe518..465a6f5142cc56500ddbc84bf9c6f37fce058bee 100644 (file)
@@ -421,7 +421,7 @@ const char *vp_bus_name(struct virtio_device *vdev)
  * - OR over all affinities for shared MSI
  * - ignore the affinity request if we're using INTX
  */
-int vp_set_vq_affinity(struct virtqueue *vq, int cpu)
+int vp_set_vq_affinity(struct virtqueue *vq, const struct cpumask *cpu_mask)
 {
        struct virtio_device *vdev = vq->vdev;
        struct virtio_pci_device *vp_dev = to_vp_device(vdev);
@@ -435,11 +435,10 @@ int vp_set_vq_affinity(struct virtqueue *vq, int cpu)
        if (vp_dev->msix_enabled) {
                mask = vp_dev->msix_affinity_masks[info->msix_vector];
                irq = pci_irq_vector(vp_dev->pci_dev, info->msix_vector);
-               if (cpu == -1)
+               if (!cpu_mask)
                        irq_set_affinity_hint(irq, NULL);
                else {
-                       cpumask_clear(mask);
-                       cpumask_set_cpu(cpu, mask);
+                       cpumask_copy(mask, cpu_mask);
                        irq_set_affinity_hint(irq, mask);
                }
        }
index 135ee3cf7175881a8259a192ba102978196687c7..02271002c2f379077da1f3448096bd22057e0577 100644 (file)
@@ -141,7 +141,7 @@ const char *vp_bus_name(struct virtio_device *vdev);
  * - OR over all affinities for shared MSI
  * - ignore the affinity request if we're using INTX
  */
-int vp_set_vq_affinity(struct virtqueue *vq, int cpu);
+int vp_set_vq_affinity(struct virtqueue *vq, const struct cpumask *cpu_mask);
 
 const struct cpumask *vp_get_vq_affinity(struct virtio_device *vdev, int index);
 
index 57f20a0a7794908b47fdb151e530d53e3a598b54..147bdec42215d54f6f17686492bbdcc6162f1c8b 100644 (file)
@@ -159,6 +159,13 @@ static inline unsigned int cpumask_next_and(int n,
        return n+1;
 }
 
+static inline unsigned int cpumask_next_wrap(int n, const struct cpumask *mask,
+                                            int start, bool wrap)
+{
+       /* cpu0 unless stop condition, wrap and at cpu0, then nr_cpumask_bits */
+       return (wrap && n == 0);
+}
+
 /* cpu must be a valid cpu, ie 0, so there's no other choice. */
 static inline unsigned int cpumask_any_but(const struct cpumask *mask,
                                           unsigned int cpu)
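
The !SMP stub above lets cpumask_next_wrap() be used unconditionally: on UP it keeps returning cpu0 until it is called with wrap already set, at which point it returns 1, i.e. past the single valid CPU. On SMP builds the same helper is what for_each_cpu_wrap() is built on; a hedged usage sketch (the function below is illustrative):

#include <linux/cpumask.h>

/* Illustrative: visit every online CPU exactly once, starting at 'start'
 * and wrapping around, the way for_each_cpu_wrap() iterates via
 * cpumask_next_wrap().
 */
static unsigned int example_count_online_from(int start)
{
	unsigned int cpu, n = 0;

	for_each_cpu_wrap(cpu, cpu_online_mask, start)
		n++;

	return n;
}
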
index 282e2e95ad5b13b3c70fafa444b356e10e764a5e..ca5ab98053c8d48312d9f479a861c6c426b33e54 100644 (file)
@@ -3412,6 +3412,13 @@ static inline int netif_set_xps_queue(struct net_device *dev,
 {
        return 0;
 }
+
+static inline int __netif_set_xps_queue(struct net_device *dev,
+                                       const unsigned long *mask,
+                                       u16 index, bool is_rxqs_map)
+{
+       return 0;
+}
 #endif
 
 /**
index a7311bc03d3a948affbef615c1540d6d3d3efd10..ecf7dab81e9e810cf920b3e6647dd5e1bcc2f715 100644 (file)
@@ -4,18 +4,6 @@
 
 #include <uapi/linux/netfilter/nfnetlink_osf.h>
 
-/* Initial window size option state machine: multiple of mss, mtu or
- * plain numeric value. Can also be made as plain numeric value which
- * is not a multiple of specified value.
- */
-enum nf_osf_window_size_options {
-       OSF_WSS_PLAIN   = 0,
-       OSF_WSS_MSS,
-       OSF_WSS_MTU,
-       OSF_WSS_MODULO,
-       OSF_WSS_MAX,
-};
-
 enum osf_fmatch_states {
        /* Packet does not match the fingerprint */
        FMATCH_WRONG = 0,
index 2978fa4add4202d39a34e85c1237cac990ff60df..a1310482c4ede5625ce92033a8a2790d87acc1b4 100644 (file)
 #include <linux/qed/qed_if.h>
 #include <linux/qed/qed_iov_if.h>
 
+/* 64 max queues * (1 rx + 4 tx-cos + 1 xdp) */
+#define QED_MIN_L2_CONS (2 + NUM_PHYS_TCS_4PORT_K2)
+#define QED_MAX_L2_CONS (64 * (QED_MIN_L2_CONS))
+
 struct qed_queue_start_common_params {
        /* Should always be relative to entity sending this. */
        u8 vport_id;
@@ -49,6 +53,8 @@ struct qed_queue_start_common_params {
 
        struct qed_sb_info *p_sb;
        u8 sb_idx;
+
+       u8 tc;
 };
 
 struct qed_rxq_start_ret_params {
index e93b157f526ce15f808fda65e12b5936e4ee363e..17a13e4785fc3c15594c8aaee5132c44449a954a 100644 (file)
@@ -2363,7 +2363,7 @@ static inline void skb_probe_transport_header(struct sk_buff *skb,
        if (skb_transport_header_was_set(skb))
                return;
 
-       if (skb_flow_dissect_flow_keys_basic(skb, &keys, 0, 0, 0, 0, 0))
+       if (skb_flow_dissect_flow_keys_basic(skb, &keys, NULL, 0, 0, 0, 0))
                skb_set_transport_header(skb, keys.control.thoff);
        else
                skb_set_transport_header(skb, offset_hint);
index 3b43655cabe69991be5bf5a8353f88f6442ece4f..0d5a2691e7e9d6089c05524d91ff61124b010b4d 100644 (file)
@@ -499,11 +499,9 @@ struct ssb_bus {
 
        /* Internal-only stuff follows. Do not touch. */
        struct list_head list;
-#ifdef CONFIG_SSB_DEBUG
        /* Is the bus already powered up? */
        bool powered_up;
        int power_warn_count;
-#endif /* DEBUG */
 };
 
 enum ssb_quirks {
index 5559a2d31c46af9c936f5103f752c35509d0c44b..32baf8e26735d5fd9c693105b681c557cab049e2 100644 (file)
@@ -79,7 +79,8 @@ struct virtio_config_ops {
        u64 (*get_features)(struct virtio_device *vdev);
        int (*finalize_features)(struct virtio_device *vdev);
        const char *(*bus_name)(struct virtio_device *vdev);
-       int (*set_vq_affinity)(struct virtqueue *vq, int cpu);
+       int (*set_vq_affinity)(struct virtqueue *vq,
+                              const struct cpumask *cpu_mask);
        const struct cpumask *(*get_vq_affinity)(struct virtio_device *vdev,
                        int index);
 };
@@ -236,11 +237,11 @@ const char *virtio_bus_name(struct virtio_device *vdev)
  *
  */
 static inline
-int virtqueue_set_affinity(struct virtqueue *vq, int cpu)
+int virtqueue_set_affinity(struct virtqueue *vq, const struct cpumask *cpu_mask)
 {
        struct virtio_device *vdev = vq->vdev;
        if (vdev->config->set_vq_affinity)
-               return vdev->config->set_vq_affinity(vq, cpu);
+               return vdev->config->set_vq_affinity(vq, cpu_mask);
        return 0;
 }
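
set_vq_affinity()/virtqueue_set_affinity() now take a cpumask rather than a single CPU index, so a driver can spread one queue's interrupt across several CPUs, and NULL simply clears the hint. A hedged caller sketch (the function name is illustrative):

#include <linux/cpumask.h>
#include <linux/virtio.h>

/* Illustrative: pin a virtqueue to one CPU, or clear the affinity hint. */
static void example_pin_vq(struct virtqueue *vq, int cpu)
{
	if (cpu < 0)
		virtqueue_set_affinity(vq, NULL);
	else
		virtqueue_set_affinity(vq, cpumask_of(cpu));
}
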
 
index 8c9bc02d05e1c9997d17f6b0ecaaabc716e4abe9..1ad5b19e83a95d0a4ba17181f1f55134022791f1 100644 (file)
@@ -101,6 +101,7 @@ struct tc_action_ops {
        void    (*stats_update)(struct tc_action *, u64, u32, u64);
        size_t  (*get_fill_size)(const struct tc_action *act);
        struct net_device *(*get_dev)(const struct tc_action *a);
+       void    (*put_dev)(struct net_device *dev);
        int     (*delete)(struct net *net, u32 index);
 };
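
The new ->put_dev() callback pairs with ->get_dev(): an action that hands out a reference to its device is now also asked to drop it, instead of the caller guessing how the reference was taken. A hedged sketch of the calling convention (the function below is illustrative, not from the patch):

#include <net/act_api.h>

/* Illustrative: borrow the action's net_device and return it through the
 * matching hook when the action provides one.
 */
static void example_use_action_dev(const struct tc_action *a)
{
	struct net_device *dev;

	if (!a->ops->get_dev)
		return;

	dev = a->ops->get_dev(a);
	if (!dev)
		return;

	/* ... set up hardware offload against dev ... */

	if (a->ops->put_dev)
		a->ops->put_dev(dev);
}
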
 
index 7f008097552ea86a7a1b180be3b480db92302d39..cdd9f1fe7cfa903ea79f54ac0b17b1d946d81d92 100644 (file)
@@ -269,6 +269,7 @@ enum {
        HCI_VENDOR_DIAG,
        HCI_FORCE_BREDR_SMP,
        HCI_FORCE_STATIC_ADDR,
+       HCI_LL_RPA_RESOLUTION,
 
        __HCI_NUM_FLAGS,
 };
@@ -1524,6 +1525,8 @@ struct hci_rp_le_read_resolv_list_size {
        __u8    size;
 } __packed;
 
+#define HCI_OP_LE_SET_ADDR_RESOLV_ENABLE 0x202d
+
 #define HCI_OP_LE_READ_MAX_DATA_LEN    0x202f
 struct hci_rp_le_read_max_data_len {
        __u8    status;
@@ -2176,6 +2179,8 @@ struct hci_evt_le_ext_adv_set_term {
        __u8    num_evts;
 } __packed;
 
+#define HCI_EV_VENDOR                  0xff
+
 /* Internal events generated by Bluetooth stack */
 #define HCI_EV_STACK_INTERNAL  0xfd
 struct hci_ev_stack_internal {
index 0304ba2ae3536c1f4117daea639d8e0955b051be..883bb9085f1589b84196fd289bd1c70df9ab957f 100644 (file)
@@ -59,13 +59,13 @@ int gnet_stats_finish_copy(struct gnet_dump *d);
 int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
                      struct gnet_stats_basic_cpu __percpu *cpu_bstats,
                      struct net_rate_estimator __rcu **rate_est,
-                     spinlock_t *stats_lock,
+                     spinlock_t *lock,
                      seqcount_t *running, struct nlattr *opt);
 void gen_kill_estimator(struct net_rate_estimator __rcu **ptr);
 int gen_replace_estimator(struct gnet_stats_basic_packed *bstats,
                          struct gnet_stats_basic_cpu __percpu *cpu_bstats,
                          struct net_rate_estimator __rcu **ptr,
-                         spinlock_t *stats_lock,
+                         spinlock_t *lock,
                          seqcount_t *running, struct nlattr *opt);
 bool gen_estimator_active(struct net_rate_estimator __rcu **ptr);
 bool gen_estimator_read(struct net_rate_estimator __rcu **ptr,
index 0a6c9e0f2b5a35b8dd04cb222147664cec67218c..fa43b82607d9cb70d1e9ce283850ff6e850660e7 100644 (file)
@@ -167,7 +167,8 @@ enum inet_csk_ack_state_t {
        ICSK_ACK_SCHED  = 1,
        ICSK_ACK_TIMER  = 2,
        ICSK_ACK_PUSHED = 4,
-       ICSK_ACK_PUSHED2 = 8
+       ICSK_ACK_PUSHED2 = 8,
+       ICSK_ACK_NOW = 16       /* Send the next ACK immediately (once) */
 };
 
 void inet_csk_init_xmit_timers(struct sock *sk,
index b86d1452818818d21e02fbbbf884989e7a21a8b1..1662cbc0b46b45296a367ecbdaf03c68854fdce7 100644 (file)
@@ -57,7 +57,9 @@ struct frag_v6_compare_key {
  * @lock: spinlock protecting this frag
  * @refcnt: reference count of the queue
  * @fragments: received fragments head
+ * @rb_fragments: received fragments rb-tree root
  * @fragments_tail: received fragments tail
+ * @last_run_head: the head of the last "run". see ip_fragment.c
  * @stamp: timestamp of the last received fragment
  * @len: total length of the original datagram
  * @meat: length of received fragments so far
@@ -78,6 +80,7 @@ struct inet_frag_queue {
        struct sk_buff          *fragments;  /* Used in IPv6. */
        struct rb_root          rb_fragments; /* Used in IPv4. */
        struct sk_buff          *fragments_tail;
+       struct sk_buff          *last_run_head;
        ktime_t                 stamp;
        int                     len;
        int                     meat;
@@ -113,6 +116,9 @@ void inet_frag_kill(struct inet_frag_queue *q);
 void inet_frag_destroy(struct inet_frag_queue *q);
 struct inet_frag_queue *inet_frag_find(struct netns_frags *nf, void *key);
 
+/* Free all skbs in the queue; return the sum of their truesizes. */
+unsigned int inet_frag_rbtree_purge(struct rb_root *root);
+
 static inline void inet_frag_put(struct inet_frag_queue *q)
 {
        if (refcount_dec_and_test(&q->refcnt))
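
inet_frag_rbtree_purge() is declared here so both the generic fragment code and ip_fragment.c can free a whole rb-tree of fragments and return the truesize to subtract from the memory accounting. A simplified sketch of what such a purge does (the real implementation additionally walks each run's private next_frag chain, which is omitted here):

#include <linux/rbtree.h>
#include <linux/skbuff.h>

/* Simplified sketch: erase and free every skb queued in the rb-tree and
 * report the accumulated truesize to the caller.
 */
static unsigned int example_rbtree_purge(struct rb_root *root)
{
	struct rb_node *p = rb_first(root);
	unsigned int sum = 0;

	while (p) {
		struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);

		p = rb_next(p);
		rb_erase(&skb->rbnode, root);
		sum += skb->truesize;
		kfree_skb(skb);
	}

	return sum;
}
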
index 6068c6da3eaca03703f3fe08f36b8649ae4e0cc4..8465263b297d70921678fa4c9bf3281c18aaa4ff 100644 (file)
@@ -77,7 +77,6 @@ struct nf_conntrack_l4proto {
                               struct nf_conntrack_tuple *t);
        const struct nla_policy *nla_policy;
 
-#if IS_ENABLED(CONFIG_NF_CT_NETLINK_TIMEOUT)
        struct {
                int (*nlattr_to_obj)(struct nlattr *tb[],
                                     struct net *net, void *data);
@@ -87,7 +86,6 @@ struct nf_conntrack_l4proto {
                u16 nlattr_max;
                const struct nla_policy *nla_policy;
        } ctnl_timeout;
-#endif
 #ifdef CONFIG_NF_CONNTRACK_PROCFS
        /* Print out the private part of the conntrack. */
        void (*print_conntrack)(struct seq_file *s, struct nf_conn *);
index 80ceb3d0291dfb0769a0814474d55cceda522ee7..d5f62cc6c2ae44473b6233182d7825ee8e5d1ef5 100644 (file)
 
 #define CTNL_TIMEOUT_NAME_MAX  32
 
+struct nf_ct_timeout {
+       __u16                   l3num;
+       const struct nf_conntrack_l4proto *l4proto;
+       char                    data[0];
+};
+
 struct ctnl_timeout {
        struct list_head        head;
        struct rcu_head         rcu_head;
        refcount_t              refcnt;
        char                    name[CTNL_TIMEOUT_NAME_MAX];
-       __u16                   l3num;
-       const struct nf_conntrack_l4proto *l4proto;
-       char                    data[0];
+       struct nf_ct_timeout    timeout;
 };
 
 struct nf_conn_timeout {
-       struct ctnl_timeout __rcu *timeout;
+       struct nf_ct_timeout __rcu *timeout;
 };
 
 static inline unsigned int *
 nf_ct_timeout_data(struct nf_conn_timeout *t)
 {
-       struct ctnl_timeout *timeout;
+       struct nf_ct_timeout *timeout;
 
        timeout = rcu_dereference(t->timeout);
        if (timeout == NULL)
@@ -49,7 +53,7 @@ struct nf_conn_timeout *nf_ct_timeout_find(const struct nf_conn *ct)
 
 static inline
 struct nf_conn_timeout *nf_ct_timeout_ext_add(struct nf_conn *ct,
-                                             struct ctnl_timeout *timeout,
+                                             struct nf_ct_timeout *timeout,
                                              gfp_t gfp)
 {
 #ifdef CONFIG_NF_CONNTRACK_TIMEOUT
@@ -83,6 +87,7 @@ static inline unsigned int *nf_ct_timeout_lookup(const struct nf_conn *ct)
 #ifdef CONFIG_NF_CONNTRACK_TIMEOUT
 int nf_conntrack_timeout_init(void);
 void nf_conntrack_timeout_fini(void);
+void nf_ct_untimeout(struct net *net, struct nf_ct_timeout *timeout);
 #else
 static inline int nf_conntrack_timeout_init(void)
 {
@@ -96,8 +101,8 @@ static inline void nf_conntrack_timeout_fini(void)
 #endif /* CONFIG_NF_CONNTRACK_TIMEOUT */
 
 #ifdef CONFIG_NF_CONNTRACK_TIMEOUT
-extern struct ctnl_timeout *(*nf_ct_timeout_find_get_hook)(struct net *net, const char *name);
-extern void (*nf_ct_timeout_put_hook)(struct ctnl_timeout *timeout);
+extern struct nf_ct_timeout *(*nf_ct_timeout_find_get_hook)(struct net *net, const char *name);
+extern void (*nf_ct_timeout_put_hook)(struct nf_ct_timeout *timeout);
 #endif
 
 #endif /* _NF_CONNTRACK_TIMEOUT_H */
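
The generic timeout data (l3num, l4proto, policy blob) moves into struct nf_ct_timeout, which struct ctnl_timeout now embeds, so users such as xt_CT and the new nft ct timeout object can operate on the generic part only. A hypothetical accessor (not in the patch) makes the containment explicit:

#include <linux/kernel.h>
#include <net/netfilter/nf_conntrack_timeout.h>

/* Hypothetical helper: recover the nfnetlink wrapper from the embedded
 * generic timeout object.
 */
static inline struct ctnl_timeout *
example_to_ctnl_timeout(struct nf_ct_timeout *t)
{
	return container_of(t, struct ctnl_timeout, timeout);
}
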
index 762ac9931b6251152b6ee0e5780df0f7b073f3e6..f0e396ab9bec538b2cb6a19a61ff5b038585b4fb 100644 (file)
@@ -32,6 +32,7 @@ struct netns_sysctl_ipv6 {
        int flowlabel_consistency;
        int auto_flowlabels;
        int icmpv6_time;
+       int icmpv6_echo_ignore_all;
        int anycast_src_echo_reply;
        int ip_nonlocal_bind;
        int fwmark_reflect;
index ab869e0d83267b730d7b0758dbf90f233962c714..28a7c8e446369e644370e5e4b4c9850465c1893f 100644 (file)
@@ -57,6 +57,7 @@
 #include <linux/atomic.h>              /* This gets us atomic counters.  */
 #include <linux/skbuff.h>      /* We need sk_buff_head. */
 #include <linux/workqueue.h>   /* We need tq_struct.    */
+#include <linux/flex_array.h>  /* We need flex_array.   */
 #include <linux/sctp.h>                /* We need sctp* header structs.  */
 #include <net/sctp/auth.h>     /* We need auth specific structs */
 #include <net/ip.h>            /* For inet_skb_parm */
@@ -398,37 +399,35 @@ void sctp_stream_update(struct sctp_stream *stream, struct sctp_stream *new);
 
 /* What is the current SSN number for this stream? */
 #define sctp_ssn_peek(stream, type, sid) \
-       ((stream)->type[sid].ssn)
+       (sctp_stream_##type((stream), (sid))->ssn)
 
 /* Return the next SSN number for this stream. */
 #define sctp_ssn_next(stream, type, sid) \
-       ((stream)->type[sid].ssn++)
+       (sctp_stream_##type((stream), (sid))->ssn++)
 
 /* Skip over this ssn and all below. */
 #define sctp_ssn_skip(stream, type, sid, ssn) \
-       ((stream)->type[sid].ssn = ssn + 1)
+       (sctp_stream_##type((stream), (sid))->ssn = ssn + 1)
 
 /* What is the current MID number for this stream? */
 #define sctp_mid_peek(stream, type, sid) \
-       ((stream)->type[sid].mid)
+       (sctp_stream_##type((stream), (sid))->mid)
 
 /* Return the next MID number for this stream.  */
 #define sctp_mid_next(stream, type, sid) \
-       ((stream)->type[sid].mid++)
+       (sctp_stream_##type((stream), (sid))->mid++)
 
 /* Skip over this mid and all below. */
 #define sctp_mid_skip(stream, type, sid, mid) \
-       ((stream)->type[sid].mid = mid + 1)
-
-#define sctp_stream_in(asoc, sid) (&(asoc)->stream.in[sid])
+       (sctp_stream_##type((stream), (sid))->mid = mid + 1)
 
 /* What is the current MID_uo number for this stream? */
 #define sctp_mid_uo_peek(stream, type, sid) \
-       ((stream)->type[sid].mid_uo)
+       (sctp_stream_##type((stream), (sid))->mid_uo)
 
 /* Return the next MID_uo number for this stream.  */
 #define sctp_mid_uo_next(stream, type, sid) \
-       ((stream)->type[sid].mid_uo++)
+       (sctp_stream_##type((stream), (sid))->mid_uo++)
 
 /*
  * Pointers to address related SCTP functions.
@@ -1440,8 +1439,8 @@ struct sctp_stream_in {
 };
 
 struct sctp_stream {
-       struct sctp_stream_out *out;
-       struct sctp_stream_in *in;
+       struct flex_array *out;
+       struct flex_array *in;
        __u16 outcnt;
        __u16 incnt;
        /* Current stream being sent, if any */
@@ -1463,6 +1462,23 @@ struct sctp_stream {
        struct sctp_stream_interleave *si;
 };
 
+static inline struct sctp_stream_out *sctp_stream_out(
+       const struct sctp_stream *stream,
+       __u16 sid)
+{
+       return flex_array_get(stream->out, sid);
+}
+
+static inline struct sctp_stream_in *sctp_stream_in(
+       const struct sctp_stream *stream,
+       __u16 sid)
+{
+       return flex_array_get(stream->in, sid);
+}
+
+#define SCTP_SO(s, i) sctp_stream_out((s), (i))
+#define SCTP_SI(s, i) sctp_stream_in((s), (i))
+
 #define SCTP_STREAM_CLOSED             0x00
 #define SCTP_STREAM_OPEN               0x01
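
Because out/in are now flex_arrays, per-stream state is reached through sctp_stream_out()/sctp_stream_in() (or the SCTP_SO()/SCTP_SI() shorthands) instead of plain array indexing, and the ssn/mid macros above expand through those accessors. A hedged sketch of a converted call site:

#include <net/sctp/structs.h>

/* Illustrative: stream->in[sid].ssn becomes an accessor-based lookup. */
static __u16 example_peek_ssn(struct sctp_stream *stream, __u16 sid)
{
	/* expands to sctp_stream_in(stream, sid)->ssn */
	return sctp_ssn_peek(stream, in, sid);
}
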
 
index d8b3b6578c010e46972af88b52d9bee0bc109140..d5c683e8bb22adf9654e07d791d2ad782d2dec02 100644 (file)
@@ -124,10 +124,6 @@ struct tls_sw_context_rx {
        struct sk_buff *recv_pkt;
        u8 control;
        bool decrypted;
-
-       char rx_aad_ciphertext[TLS_AAD_SPACE_SIZE];
-       char rx_aad_plaintext[TLS_AAD_SPACE_SIZE];
-
 };
 
 struct tls_record_info {
index 9763cddd0594709e263aada3baaca5510030294b..6271bab63bfb99558ca2895a16c5f3061a9f5b75 100644 (file)
@@ -22,6 +22,7 @@ TRACE_EVENT(fib_table_lookup,
                __field(        int,    err             )
                __field(        int,    oif             )
                __field(        int,    iif             )
+               __field(        u8,     proto           )
                __field(        __u8,   tos             )
                __field(        __u8,   scope           )
                __field(        __u8,   flags           )
@@ -31,7 +32,6 @@ TRACE_EVENT(fib_table_lookup,
                __array(        __u8,   saddr,  4       )
                __field(        u16,    sport           )
                __field(        u16,    dport           )
-               __field(        u8,     proto           )
                __dynamic_array(char,  name,   IFNAMSIZ )
        ),
 
index 357862d948de1d5adc4eb644af7294dd0c248fbd..e23290ffdc77d7d44395e22614264988a5381f69 100644 (file)
@@ -8,6 +8,7 @@
 #define NFT_SET_MAXNAMELEN     NFT_NAME_MAXLEN
 #define NFT_OBJ_MAXNAMELEN     NFT_NAME_MAXLEN
 #define NFT_USERDATA_MAXLEN    256
+#define NFT_OSF_MAXGENRELEN    16
 
 /**
  * enum nft_registers - nf_tables registers
@@ -957,6 +958,7 @@ enum nft_socket_keys {
  * @NFT_CT_DST_IP: conntrack layer 3 protocol destination (IPv4 address)
  * @NFT_CT_SRC_IP6: conntrack layer 3 protocol source (IPv6 address)
  * @NFT_CT_DST_IP6: conntrack layer 3 protocol destination (IPv6 address)
+ * @NFT_CT_TIMEOUT: connection tracking timeout policy assigned to conntrack
  */
 enum nft_ct_keys {
        NFT_CT_STATE,
@@ -982,6 +984,7 @@ enum nft_ct_keys {
        NFT_CT_DST_IP,
        NFT_CT_SRC_IP6,
        NFT_CT_DST_IP6,
+       NFT_CT_TIMEOUT,
        __NFT_CT_MAX
 };
 #define NFT_CT_MAX             (__NFT_CT_MAX - 1)
@@ -1410,6 +1413,15 @@ enum nft_ct_helper_attributes {
 };
 #define NFTA_CT_HELPER_MAX     (__NFTA_CT_HELPER_MAX - 1)
 
+enum nft_ct_timeout_timeout_attributes {
+       NFTA_CT_TIMEOUT_UNSPEC,
+       NFTA_CT_TIMEOUT_L3PROTO,
+       NFTA_CT_TIMEOUT_L4PROTO,
+       NFTA_CT_TIMEOUT_DATA,
+       __NFTA_CT_TIMEOUT_MAX,
+};
+#define NFTA_CT_TIMEOUT_MAX    (__NFTA_CT_TIMEOUT_MAX - 1)
+
 #define NFT_OBJECT_UNSPEC      0
 #define NFT_OBJECT_COUNTER     1
 #define NFT_OBJECT_QUOTA       2
@@ -1417,7 +1429,8 @@ enum nft_ct_helper_attributes {
 #define NFT_OBJECT_LIMIT       4
 #define NFT_OBJECT_CONNLIMIT   5
 #define NFT_OBJECT_TUNNEL      6
-#define __NFT_OBJECT_MAX       7
+#define NFT_OBJECT_CT_TIMEOUT  7
+#define __NFT_OBJECT_MAX       8
 #define NFT_OBJECT_MAX         (__NFT_OBJECT_MAX - 1)
 
 /**
index 3b93fbb9fc24cf6a1e0b0337433704f54fbcc4fa..76a3527df5dda83e8bb6967834932a527cad1739 100644 (file)
@@ -88,6 +88,18 @@ enum iana_options {
        OSFOPT_EMPTY = 255,
 };
 
+/* Initial window size option state machine: multiple of mss, mtu or
+ * plain numeric value. Can also be made as plain numeric value which
+ * is not a multiple of specified value.
+ */
+enum nf_osf_window_size_options {
+       OSF_WSS_PLAIN   = 0,
+       OSF_WSS_MSS,
+       OSF_WSS_MTU,
+       OSF_WSS_MODULO,
+       OSF_WSS_MAX,
+};
+
 enum nf_osf_attr_type {
        OSF_ATTR_UNSPEC,
        OSF_ATTR_FINGER,
index c56c59605c2b5eb1d4953d4e8e73bfe58ad73fc4..24102b5286ecbd23336a14fed0291fb68f105fc4 100644 (file)
@@ -46,6 +46,7 @@
 #define xt_osf_finger          nf_osf_finger
 #define xt_osf_nlmsg           nf_osf_nlmsg
 
+#define xt_osf_window_size_options     nf_osf_window_size_options
 #define xt_osf_attr_type       nf_osf_attr_type
 #define xt_osf_msg_types       nf_osf_msg_types
 
index 784c2e3e572e013a24aeabf96ef400f453ffb72e..88b5f99903201fa6ca2f0f62cdd2b0e65540952c 100644 (file)
@@ -68,7 +68,7 @@ struct ppp_option_data {
 struct pppol2tp_ioc_stats {
        __u16           tunnel_id;      /* redundant */
        __u16           session_id;     /* if zero, get tunnel stats */
-       __u32           using_ipsec:1;  /* valid only for session_id == 0 */
+       __u32           using_ipsec:1;
        __aligned_u64   tx_packets;
        __aligned_u64   tx_bytes;
        __aligned_u64   tx_errors;
index 6b58371b1f0d5a1405328c5e4c43c15bc9588f9a..d71013fffaf65b520420a4d04d2347f450c2f598 100644 (file)
@@ -575,7 +575,8 @@ enum {
 
 /* /proc/sys/net/ipv6/icmp */
 enum {
-       NET_IPV6_ICMP_RATELIMIT=1
+       NET_IPV6_ICMP_RATELIMIT = 1,
+       NET_IPV6_ICMP_ECHO_IGNORE_ALL = 2
 };
 
 /* /proc/sys/net/<protocol>/neigh/<dev> */
index 3b494941a44a42600e8cc10862c3ae121796672b..620bc5024d7ddac02c62d28b7937b4d7a58809a2 100644 (file)
@@ -69,7 +69,7 @@ struct bpf_cpu_map {
 };
 
 static int bq_flush_to_queue(struct bpf_cpu_map_entry *rcpu,
-                            struct xdp_bulk_queue *bq);
+                            struct xdp_bulk_queue *bq, bool in_napi_ctx);
 
 static u64 cpu_map_bitmap_size(const union bpf_attr *attr)
 {
@@ -375,7 +375,7 @@ static void __cpu_map_entry_free(struct rcu_head *rcu)
                struct xdp_bulk_queue *bq = per_cpu_ptr(rcpu->bulkq, cpu);
 
                /* No concurrent bq_enqueue can run at this point */
-               bq_flush_to_queue(rcpu, bq);
+               bq_flush_to_queue(rcpu, bq, false);
        }
        free_percpu(rcpu->bulkq);
        /* Cannot kthread_stop() here, last put free rcpu resources */
@@ -559,7 +559,7 @@ const struct bpf_map_ops cpu_map_ops = {
 };
 
 static int bq_flush_to_queue(struct bpf_cpu_map_entry *rcpu,
-                            struct xdp_bulk_queue *bq)
+                            struct xdp_bulk_queue *bq, bool in_napi_ctx)
 {
        unsigned int processed = 0, drops = 0;
        const int to_cpu = rcpu->cpu;
@@ -579,7 +579,10 @@ static int bq_flush_to_queue(struct bpf_cpu_map_entry *rcpu,
                err = __ptr_ring_produce(q, xdpf);
                if (err) {
                        drops++;
-                       xdp_return_frame_rx_napi(xdpf);
+                       if (likely(in_napi_ctx))
+                               xdp_return_frame_rx_napi(xdpf);
+                       else
+                               xdp_return_frame(xdpf);
                }
                processed++;
        }
@@ -599,7 +602,7 @@ static int bq_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf)
        struct xdp_bulk_queue *bq = this_cpu_ptr(rcpu->bulkq);
 
        if (unlikely(bq->count == CPU_MAP_BULK_SIZE))
-               bq_flush_to_queue(rcpu, bq);
+               bq_flush_to_queue(rcpu, bq, true);
 
        /* Notice, xdp_buff/page MUST be queued here, long enough for
         * driver to code invoking us to finished, due to driver
@@ -662,7 +665,7 @@ void __cpu_map_flush(struct bpf_map *map)
 
                /* Flush all frames in bulkq to real queue */
                bq = this_cpu_ptr(rcpu->bulkq);
-               bq_flush_to_queue(rcpu, bq);
+               bq_flush_to_queue(rcpu, bq, true);
 
                /* If already running, costs spin_lock_irqsave + smb_mb */
                wake_up_process(rcpu->kthread);
index a7c6620d8389ae0e2ac1b57e7739e34a774bf9a9..ac1df79f378827a737e816432c292064094cda74 100644 (file)
@@ -217,7 +217,8 @@ void __dev_map_insert_ctx(struct bpf_map *map, u32 bit)
 }
 
 static int bq_xmit_all(struct bpf_dtab_netdev *obj,
-                      struct xdp_bulk_queue *bq, u32 flags)
+                      struct xdp_bulk_queue *bq, u32 flags,
+                      bool in_napi_ctx)
 {
        struct net_device *dev = obj->dev;
        int sent = 0, drops = 0, err = 0;
@@ -254,7 +255,10 @@ static int bq_xmit_all(struct bpf_dtab_netdev *obj,
                struct xdp_frame *xdpf = bq->q[i];
 
                /* RX path under NAPI protection, can return frames faster */
-               xdp_return_frame_rx_napi(xdpf);
+               if (likely(in_napi_ctx))
+                       xdp_return_frame_rx_napi(xdpf);
+               else
+                       xdp_return_frame(xdpf);
                drops++;
        }
        goto out;
@@ -286,7 +290,7 @@ void __dev_map_flush(struct bpf_map *map)
                __clear_bit(bit, bitmap);
 
                bq = this_cpu_ptr(dev->bulkq);
-               bq_xmit_all(dev, bq, XDP_XMIT_FLUSH);
+               bq_xmit_all(dev, bq, XDP_XMIT_FLUSH, true);
        }
 }
 
@@ -316,7 +320,7 @@ static int bq_enqueue(struct bpf_dtab_netdev *obj, struct xdp_frame *xdpf,
        struct xdp_bulk_queue *bq = this_cpu_ptr(obj->bulkq);
 
        if (unlikely(bq->count == DEV_MAP_BULK_SIZE))
-               bq_xmit_all(obj, bq, 0);
+               bq_xmit_all(obj, bq, 0, true);
 
        /* Ingress dev_rx will be the same for all xdp_frame's in
         * bulk_queue, because bq stored per-CPU and must be flushed
@@ -385,7 +389,7 @@ static void dev_map_flush_old(struct bpf_dtab_netdev *dev)
                        __clear_bit(dev->bit, bitmap);
 
                        bq = per_cpu_ptr(dev->bulkq, cpu);
-                       bq_xmit_all(dev, bq, XDP_XMIT_FLUSH);
+                       bq_xmit_all(dev, bq, XDP_XMIT_FLUSH, false);
                }
        }
 }
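
Both the cpumap and devmap flush paths now tell their drop path whether they run from NAPI context: xdp_return_frame_rx_napi() may recycle pages through the NAPI-local cache and is only safe from the driver's poll loop, while teardown paths (the 'false' callers above) must use the plain xdp_return_frame(). A hypothetical wrapper (not in the patch) spelling out that rule:

#include <net/xdp.h>

/* Hypothetical helper: pick the frame-return variant based on context. */
static void example_return_frame(struct xdp_frame *xdpf, bool in_napi_ctx)
{
	if (in_napi_ctx)
		xdp_return_frame_rx_napi(xdpf);
	else
		xdp_return_frame(xdpf);
}
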
index e376ffffebce7481c9a7d7245c57df4f963f60fd..0c1a696b041bab662dc2336d493585da240ce3fa 100644 (file)
@@ -1045,12 +1045,12 @@ static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
        timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
 
        while (msg_data_left(msg)) {
-               struct sk_msg_buff *m;
+               struct sk_msg_buff *m = NULL;
                bool enospc = false;
                int copy;
 
                if (sk->sk_err) {
-                       err = sk->sk_err;
+                       err = -sk->sk_err;
                        goto out_err;
                }
 
@@ -1113,8 +1113,11 @@ static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
                set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
 wait_for_memory:
                err = sk_stream_wait_memory(sk, &timeo);
-               if (err)
+               if (err) {
+                       if (m && m != psock->cork)
+                               free_start_sg(sk, m);
                        goto out_err;
+               }
        }
 out_err:
        if (err < 0)
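
Two fixes in the hunk above: sk->sk_err stores a positive errno, so it has to be negated before being returned from sendmsg, and a message built in the current loop iteration must be freed if sk_stream_wait_memory() fails (unless it is the shared psock->cork). The sign convention mirrors the generic sock_error() helper; a hedged sketch:

#include <net/sock.h>

/* Illustrative: convert the socket's pending error into a sendmsg-style
 * negative return value, as the "err = -sk->sk_err" fix above does.
 */
static int example_pending_error(struct sock *sk)
{
	return sk->sk_err ? -sk->sk_err : 0;
}
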
index 19d42ea75ec225d385734e5212680df0ef9609c6..98fa559ebd808698ecc3774bf88c5bdea1988f27 100644 (file)
@@ -1,9 +1,6 @@
 config ARCH_HAS_UBSAN_SANITIZE_ALL
        bool
 
-config ARCH_WANTS_UBSAN_NO_NULL
-       def_bool n
-
 config UBSAN
        bool "Undefined behaviour sanity checker"
        help
@@ -39,14 +36,6 @@ config UBSAN_ALIGNMENT
          Enabling this option on architectures that support unaligned
          accesses may produce a lot of false positives.
 
-config UBSAN_NULL
-       bool "Enable checking of null pointers"
-       depends on UBSAN
-       default y if !ARCH_WANTS_UBSAN_NO_NULL
-       help
-         This option enables detection of memory accesses via a
-         null pointer.
-
 config TEST_UBSAN
        tristate "Module for testing for undefined behavior detection"
        depends on m && UBSAN
index dab1511294add14ba1290ded1dcd65408e274a4c..c5e87a3a82ba340254ff66b446c9b853fc211dde 100644 (file)
@@ -4395,6 +4395,9 @@ int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
                return -EINVAL;
 
        maddr = ioremap_prot(phys_addr, PAGE_ALIGN(len + offset), prot);
+       if (!maddr)
+               return -ENOMEM;
+
        if (write)
                memcpy_toio(maddr + offset, buf, len);
        else
index 754714c8d75288bc90c459bf4a659ac2e719ed7a..f12555f23a49a025563a6f11f174a0be71556a75 100644 (file)
@@ -1132,7 +1132,6 @@ static void hci_cc_le_set_ext_adv_enable(struct hci_dev *hdev,
                                         struct sk_buff *skb)
 {
        struct hci_cp_le_set_ext_adv_enable *cp;
-       struct hci_cp_ext_adv_set *adv_set;
        __u8 status = *((__u8 *) skb->data);
 
        BT_DBG("%s status 0x%2.2x", hdev->name, status);
@@ -1144,8 +1143,6 @@ static void hci_cc_le_set_ext_adv_enable(struct hci_dev *hdev,
        if (!cp)
                return;
 
-       adv_set = (void *) cp->data;
-
        hci_dev_lock(hdev);
 
        if (cp->enable) {
@@ -1483,6 +1480,30 @@ static void hci_cc_le_read_resolv_list_size(struct hci_dev *hdev,
        hdev->le_resolv_list_size = rp->size;
 }
 
+static void hci_cc_le_set_addr_resolution_enable(struct hci_dev *hdev,
+                                               struct sk_buff *skb)
+{
+       __u8 *sent, status = *((__u8 *) skb->data);
+
+       BT_DBG("%s status 0x%2.2x", hdev->name, status);
+
+       if (status)
+               return;
+
+       sent = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_ADDR_RESOLV_ENABLE);
+       if (!sent)
+               return;
+
+       hci_dev_lock(hdev);
+
+       if (*sent)
+               hci_dev_set_flag(hdev, HCI_LL_RPA_RESOLUTION);
+       else
+               hci_dev_clear_flag(hdev, HCI_LL_RPA_RESOLUTION);
+
+       hci_dev_unlock(hdev);
+}
+
 static void hci_cc_le_read_max_data_len(struct hci_dev *hdev,
                                        struct sk_buff *skb)
 {
@@ -3266,6 +3287,10 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb,
                hci_cc_le_read_resolv_list_size(hdev, skb);
                break;
 
+       case HCI_OP_LE_SET_ADDR_RESOLV_ENABLE:
+               hci_cc_le_set_addr_resolution_enable(hdev, skb);
+               break;
+
        case HCI_OP_LE_READ_MAX_DATA_LEN:
                hci_cc_le_read_max_data_len(hdev, skb);
                break;
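
The new completion handler above mirrors the controller's LE address-resolution state into the HCI_LL_RPA_RESOLUTION flag. The command itself carries a single enable byte; a hedged sketch of how it might be issued (illustrative, the sending side is not part of this hunk):

#include <net/bluetooth/hci_core.h>

/* Illustrative: send HCI_OP_LE_SET_ADDR_RESOLV_ENABLE with its single
 * enable/disable parameter byte; the handler above runs on completion.
 */
static int example_set_ll_rpa(struct hci_dev *hdev, bool enable)
{
	__u8 val = enable ? 0x01 : 0x00;

	return hci_send_cmd(hdev, HCI_OP_LE_SET_ADDR_RESOLV_ENABLE,
			    sizeof(val), &val);
}
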
index 231602f7cb664bc10a1e5e039e55e6240236ee20..3bdc8f3ca259ed2d82bb9861033814d65591a51c 100644 (file)
@@ -3356,7 +3356,6 @@ int mgmt_phy_configuration_changed(struct hci_dev *hdev, struct sock *skip)
 static void set_default_phy_complete(struct hci_dev *hdev, u8 status,
                                     u16 opcode, struct sk_buff *skb)
 {
-       struct mgmt_cp_set_phy_confguration *cp;
        struct mgmt_pending_cmd *cmd;
 
        BT_DBG("status 0x%02x", status);
@@ -3367,8 +3366,6 @@ static void set_default_phy_complete(struct hci_dev *hdev, u8 status,
        if (!cmd)
                goto unlock;
 
-       cp = cmd->param;
-
        if (status) {
                mgmt_cmd_status(cmd->sk, hdev->id,
                                MGMT_OP_SET_PHY_CONFIGURATION,
index f68122f0ab026c5aa7868643384b35f6f02d09d5..325fc5088370b5b0f06daaaf990c5cee86dfecce 100644 (file)
@@ -2176,6 +2176,7 @@ static void netif_reset_xps_queues(struct net_device *dev, u16 offset,
        if (!static_key_false(&xps_needed))
                return;
 
+       cpus_read_lock();
        mutex_lock(&xps_map_mutex);
 
        if (static_key_false(&xps_rxqs_needed)) {
@@ -2199,10 +2200,11 @@ static void netif_reset_xps_queues(struct net_device *dev, u16 offset,
 
 out_no_maps:
        if (static_key_enabled(&xps_rxqs_needed))
-               static_key_slow_dec(&xps_rxqs_needed);
+               static_key_slow_dec_cpuslocked(&xps_rxqs_needed);
 
-       static_key_slow_dec(&xps_needed);
+       static_key_slow_dec_cpuslocked(&xps_needed);
        mutex_unlock(&xps_map_mutex);
+       cpus_read_unlock();
 }
 
 static void netif_reset_xps_queues_gt(struct net_device *dev, u16 index)
@@ -2250,6 +2252,7 @@ static struct xps_map *expand_xps_map(struct xps_map *map, int attr_index,
        return new_map;
 }
 
+/* Must be called under cpus_read_lock */
 int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask,
                          u16 index, bool is_rxqs_map)
 {
@@ -2317,9 +2320,9 @@ int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask,
        if (!new_dev_maps)
                goto out_no_new_maps;
 
-       static_key_slow_inc(&xps_needed);
+       static_key_slow_inc_cpuslocked(&xps_needed);
        if (is_rxqs_map)
-               static_key_slow_inc(&xps_rxqs_needed);
+               static_key_slow_inc_cpuslocked(&xps_rxqs_needed);
 
        for (j = -1; j = netif_attrmask_next(j, possible_mask, nr_ids),
             j < nr_ids;) {
@@ -2448,11 +2451,18 @@ int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask,
        kfree(new_dev_maps);
        return -ENOMEM;
 }
+EXPORT_SYMBOL_GPL(__netif_set_xps_queue);
 
 int netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask,
                        u16 index)
 {
-       return __netif_set_xps_queue(dev, cpumask_bits(mask), index, false);
+       int ret;
+
+       cpus_read_lock();
+       ret =  __netif_set_xps_queue(dev, cpumask_bits(mask), index, false);
+       cpus_read_unlock();
+
+       return ret;
 }
 EXPORT_SYMBOL(netif_set_xps_queue);
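
__netif_set_xps_queue() now uses the *_cpuslocked static-key helpers and is exported, so every caller must hold the CPU hotplug read lock around the call; netif_set_xps_queue() above and the net-sysfs rx-queue path further down follow that pattern. A hedged sketch of the calling convention for a direct user of the new export:

#include <linux/cpu.h>
#include <linux/netdevice.h>

/* Illustrative: callers of the newly exported __netif_set_xps_queue()
 * take the CPU hotplug read lock themselves.
 */
static int example_set_rxqs_map(struct net_device *dev,
				const unsigned long *mask, u16 index)
{
	int ret;

	cpus_read_lock();
	ret = __netif_set_xps_queue(dev, mask, index, true);
	cpus_read_unlock();

	return ret;
}
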
 
index 98fd12721221e4aa26e4d11be9de6c0305fb6dd9..e4e442d70c2d032c9038bb03dbe2312ec4dcadf0 100644 (file)
@@ -112,7 +112,7 @@ static void est_timer(struct timer_list *t)
  * @bstats: basic statistics
  * @cpu_bstats: bstats per cpu
  * @rate_est: rate estimator statistics
- * @stats_lock: statistics lock
+ * @lock: lock for statistics and control path
  * @running: qdisc running seqcount
  * @opt: rate estimator configuration TLV
  *
@@ -128,7 +128,7 @@ static void est_timer(struct timer_list *t)
 int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
                      struct gnet_stats_basic_cpu __percpu *cpu_bstats,
                      struct net_rate_estimator __rcu **rate_est,
-                     spinlock_t *stats_lock,
+                     spinlock_t *lock,
                      seqcount_t *running,
                      struct nlattr *opt)
 {
@@ -154,19 +154,22 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
        seqcount_init(&est->seq);
        intvl_log = parm->interval + 2;
        est->bstats = bstats;
-       est->stats_lock = stats_lock;
+       est->stats_lock = lock;
        est->running  = running;
        est->ewma_log = parm->ewma_log;
        est->intvl_log = intvl_log;
        est->cpu_bstats = cpu_bstats;
 
-       if (stats_lock)
+       if (lock)
                local_bh_disable();
        est_fetch_counters(est, &b);
-       if (stats_lock)
+       if (lock)
                local_bh_enable();
        est->last_bytes = b.bytes;
        est->last_packets = b.packets;
+
+       if (lock)
+               spin_lock_bh(lock);
        old = rcu_dereference_protected(*rate_est, 1);
        if (old) {
                del_timer_sync(&old->timer);
@@ -179,6 +182,8 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
        mod_timer(&est->timer, est->next_jiffies);
 
        rcu_assign_pointer(*rate_est, est);
+       if (lock)
+               spin_unlock_bh(lock);
        if (old)
                kfree_rcu(old, rcu);
        return 0;
@@ -209,7 +214,7 @@ EXPORT_SYMBOL(gen_kill_estimator);
  * @bstats: basic statistics
  * @cpu_bstats: bstats per cpu
  * @rate_est: rate estimator statistics
- * @stats_lock: statistics lock
+ * @lock: lock for statistics and control path
  * @running: qdisc running seqcount (might be NULL)
  * @opt: rate estimator configuration TLV
  *
@@ -221,11 +226,11 @@ EXPORT_SYMBOL(gen_kill_estimator);
 int gen_replace_estimator(struct gnet_stats_basic_packed *bstats,
                          struct gnet_stats_basic_cpu __percpu *cpu_bstats,
                          struct net_rate_estimator __rcu **rate_est,
-                         spinlock_t *stats_lock,
+                         spinlock_t *lock,
                          seqcount_t *running, struct nlattr *opt)
 {
        return gen_new_estimator(bstats, cpu_bstats, rate_est,
-                                stats_lock, running, opt);
+                                lock, running, opt);
 }
 EXPORT_SYMBOL(gen_replace_estimator);
 
index 0a95bcf64cdc3168ca28ce1b6d68b85e3b693312..bd67c4d0fcfdf6522ce16143523169d506e09d82 100644 (file)
@@ -26,6 +26,7 @@
 #include <linux/pm_runtime.h>
 #include <linux/of.h>
 #include <linux/of_net.h>
+#include <linux/cpu.h>
 
 #include "net-sysfs.h"
 
@@ -1400,7 +1401,10 @@ static ssize_t xps_rxqs_store(struct netdev_queue *queue, const char *buf,
                return err;
        }
 
+       cpus_read_lock();
        err = __netif_set_xps_queue(dev, mask, index, true);
+       cpus_read_unlock();
+
        kfree(mask);
        return err ? : len;
 }
index 3107a2e24e6b2e76a7aaa9bf58c2a7b103bec617..1c002c0fb712a45e20b4e3daa72da4b6d19e935a 100644 (file)
@@ -404,7 +404,7 @@ void dn_rt_cache_flush(int delay)
 
        if (delay <= 0) {
                spin_unlock_bh(&dn_rt_flush_lock);
-               dn_run_flush(0);
+               dn_run_flush(NULL);
                return;
        }
 
@@ -1920,7 +1920,7 @@ void __init dn_route_init(void)
 void __exit dn_route_cleanup(void)
 {
        del_timer(&dn_route_timer);
-       dn_run_flush(0);
+       dn_run_flush(NULL);
 
        remove_proc_entry("decnet_cache", init_net.proc_net);
        dst_entries_destroy(&dn_dst_ops);
index 6d258a5669e7926186270bc97aaee7e0fc4a4174..bcb11f3a27c0c34115af05034a5a20f57842eb0a 100644 (file)
@@ -146,7 +146,7 @@ void inet_frag_destroy(struct inet_frag_queue *q)
                        fp = xp;
                } while (fp);
        } else {
-               sum_truesize = skb_rbtree_purge(&q->rb_fragments);
+               sum_truesize = inet_frag_rbtree_purge(&q->rb_fragments);
        }
        sum = sum_truesize + f->qsize;
 
index 7cb7ed761d8cfce186dc3b753ecc6ca4e45ef88f..88281fbce88ce8f1062b99594665766c2a5f5b74 100644 (file)
  */
 static const char ip_frag_cache_name[] = "ip4-frags";
 
+/* Use skb->cb to track consecutive/adjacent fragments coming at
+ * the end of the queue. Nodes in the rb-tree queue will
+ * contain "runs" of one or more adjacent fragments.
+ *
+ * Invariants:
+ * - next_frag is NULL at the tail of a "run";
+ * - the head of a "run" has the sum of all fragment lengths in frag_run_len.
+ */
+struct ipfrag_skb_cb {
+       struct inet_skb_parm    h;
+       struct sk_buff          *next_frag;
+       int                     frag_run_len;
+};
+
+#define FRAG_CB(skb)           ((struct ipfrag_skb_cb *)((skb)->cb))
+
+static void ip4_frag_init_run(struct sk_buff *skb)
+{
+       BUILD_BUG_ON(sizeof(struct ipfrag_skb_cb) > sizeof(skb->cb));
+
+       FRAG_CB(skb)->next_frag = NULL;
+       FRAG_CB(skb)->frag_run_len = skb->len;
+}
+
+/* Append skb to the last "run". */
+static void ip4_frag_append_to_last_run(struct inet_frag_queue *q,
+                                       struct sk_buff *skb)
+{
+       RB_CLEAR_NODE(&skb->rbnode);
+       FRAG_CB(skb)->next_frag = NULL;
+
+       FRAG_CB(q->last_run_head)->frag_run_len += skb->len;
+       FRAG_CB(q->fragments_tail)->next_frag = skb;
+       q->fragments_tail = skb;
+}
+
+/* Create a new "run" with the skb. */
+static void ip4_frag_create_run(struct inet_frag_queue *q, struct sk_buff *skb)
+{
+       if (q->last_run_head)
+               rb_link_node(&skb->rbnode, &q->last_run_head->rbnode,
+                            &q->last_run_head->rbnode.rb_right);
+       else
+               rb_link_node(&skb->rbnode, NULL, &q->rb_fragments.rb_node);
+       rb_insert_color(&skb->rbnode, &q->rb_fragments);
+
+       ip4_frag_init_run(skb);
+       q->fragments_tail = skb;
+       q->last_run_head = skb;
+}
+
 /* Describe an entry in the "incomplete datagrams" queue. */
 struct ipq {
        struct inet_frag_queue q;
@@ -75,8 +126,8 @@ static u8 ip4_frag_ecn(u8 tos)
 
 static struct inet_frags ip4_frags;
 
-static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,
-                        struct net_device *dev);
+static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
+                        struct sk_buff *prev_tail, struct net_device *dev);
 
 
 static void ip4_frag_init(struct inet_frag_queue *q, const void *a)
@@ -168,7 +219,12 @@ static void ip_expire(struct timer_list *t)
                head = skb_rb_first(&qp->q.rb_fragments);
                if (!head)
                        goto out;
-               rb_erase(&head->rbnode, &qp->q.rb_fragments);
+               if (FRAG_CB(head)->next_frag)
+                       rb_replace_node(&head->rbnode,
+                                       &FRAG_CB(head)->next_frag->rbnode,
+                                       &qp->q.rb_fragments);
+               else
+                       rb_erase(&head->rbnode, &qp->q.rb_fragments);
                memset(&head->rbnode, 0, sizeof(head->rbnode));
                barrier();
        }
@@ -269,7 +325,7 @@ static int ip_frag_reinit(struct ipq *qp)
                return -ETIMEDOUT;
        }
 
-       sum_truesize = skb_rbtree_purge(&qp->q.rb_fragments);
+       sum_truesize = inet_frag_rbtree_purge(&qp->q.rb_fragments);
        sub_frag_mem_limit(qp->q.net, sum_truesize);
 
        qp->q.flags = 0;
@@ -278,6 +334,7 @@ static int ip_frag_reinit(struct ipq *qp)
        qp->q.fragments = NULL;
        qp->q.rb_fragments = RB_ROOT;
        qp->q.fragments_tail = NULL;
+       qp->q.last_run_head = NULL;
        qp->iif = 0;
        qp->ecn = 0;
 
@@ -289,7 +346,7 @@ static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)
 {
        struct net *net = container_of(qp->q.net, struct net, ipv4.frags);
        struct rb_node **rbn, *parent;
-       struct sk_buff *skb1;
+       struct sk_buff *skb1, *prev_tail;
        struct net_device *dev;
        unsigned int fragsize;
        int flags, offset;
@@ -367,38 +424,41 @@ static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)
         */
 
        /* Find out where to put this fragment.  */
-       skb1 = qp->q.fragments_tail;
-       if (!skb1) {
-               /* This is the first fragment we've received. */
-               rb_link_node(&skb->rbnode, NULL, &qp->q.rb_fragments.rb_node);
-               qp->q.fragments_tail = skb;
-       } else if ((skb1->ip_defrag_offset + skb1->len) < end) {
-               /* This is the common/special case: skb goes to the end. */
+       prev_tail = qp->q.fragments_tail;
+       if (!prev_tail)
+               ip4_frag_create_run(&qp->q, skb);  /* First fragment. */
+       else if (prev_tail->ip_defrag_offset + prev_tail->len < end) {
+               /* This is the common case: skb goes to the end. */
                /* Detect and discard overlaps. */
-               if (offset < (skb1->ip_defrag_offset + skb1->len))
+               if (offset < prev_tail->ip_defrag_offset + prev_tail->len)
                        goto discard_qp;
-               /* Insert after skb1. */
-               rb_link_node(&skb->rbnode, &skb1->rbnode, &skb1->rbnode.rb_right);
-               qp->q.fragments_tail = skb;
+               if (offset == prev_tail->ip_defrag_offset + prev_tail->len)
+                       ip4_frag_append_to_last_run(&qp->q, skb);
+               else
+                       ip4_frag_create_run(&qp->q, skb);
        } else {
-               /* Binary search. Note that skb can become the first fragment, but
-                * not the last (covered above). */
+               /* Binary search. Note that skb can become the first fragment,
+                * but not the last (covered above).
+                */
                rbn = &qp->q.rb_fragments.rb_node;
                do {
                        parent = *rbn;
                        skb1 = rb_to_skb(parent);
                        if (end <= skb1->ip_defrag_offset)
                                rbn = &parent->rb_left;
-                       else if (offset >= skb1->ip_defrag_offset + skb1->len)
+                       else if (offset >= skb1->ip_defrag_offset +
+                                               FRAG_CB(skb1)->frag_run_len)
                                rbn = &parent->rb_right;
                        else /* Found an overlap with skb1. */
                                goto discard_qp;
                } while (*rbn);
                /* Here we have parent properly set, and rbn pointing to
-                * one of its NULL left/right children. Insert skb. */
+                * one of its NULL left/right children. Insert skb.
+                */
+               ip4_frag_init_run(skb);
                rb_link_node(&skb->rbnode, parent, rbn);
+               rb_insert_color(&skb->rbnode, &qp->q.rb_fragments);
        }
-       rb_insert_color(&skb->rbnode, &qp->q.rb_fragments);
 
        if (dev)
                qp->iif = dev->ifindex;
@@ -425,7 +485,7 @@ static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)
                unsigned long orefdst = skb->_skb_refdst;
 
                skb->_skb_refdst = 0UL;
-               err = ip_frag_reasm(qp, skb, dev);
+               err = ip_frag_reasm(qp, skb, prev_tail, dev);
                skb->_skb_refdst = orefdst;
                return err;
        }
@@ -444,7 +504,7 @@ static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)
 
 /* Build a new IP datagram from all its fragments. */
 static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
-                        struct net_device *dev)
+                        struct sk_buff *prev_tail, struct net_device *dev)
 {
        struct net *net = container_of(qp->q.net, struct net, ipv4.frags);
        struct iphdr *iph;
@@ -468,10 +528,16 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
                fp = skb_clone(skb, GFP_ATOMIC);
                if (!fp)
                        goto out_nomem;
-               rb_replace_node(&skb->rbnode, &fp->rbnode, &qp->q.rb_fragments);
+               FRAG_CB(fp)->next_frag = FRAG_CB(skb)->next_frag;
+               if (RB_EMPTY_NODE(&skb->rbnode))
+                       FRAG_CB(prev_tail)->next_frag = fp;
+               else
+                       rb_replace_node(&skb->rbnode, &fp->rbnode,
+                                       &qp->q.rb_fragments);
                if (qp->q.fragments_tail == skb)
                        qp->q.fragments_tail = fp;
                skb_morph(skb, head);
+               FRAG_CB(skb)->next_frag = FRAG_CB(head)->next_frag;
                rb_replace_node(&head->rbnode, &skb->rbnode,
                                &qp->q.rb_fragments);
                consume_skb(head);
@@ -507,7 +573,7 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
                for (i = 0; i < skb_shinfo(head)->nr_frags; i++)
                        plen += skb_frag_size(&skb_shinfo(head)->frags[i]);
                clone->len = clone->data_len = head->data_len - plen;
-               skb->truesize += clone->truesize;
+               head->truesize += clone->truesize;
                clone->csum = 0;
                clone->ip_summed = head->ip_summed;
                add_frag_mem_limit(qp->q.net, clone->truesize);
@@ -520,24 +586,36 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
        skb_push(head, head->data - skb_network_header(head));
 
        /* Traverse the tree in order, to build frag_list. */
+       fp = FRAG_CB(head)->next_frag;
        rbn = rb_next(&head->rbnode);
        rb_erase(&head->rbnode, &qp->q.rb_fragments);
-       while (rbn) {
-               struct rb_node *rbnext = rb_next(rbn);
-               fp = rb_to_skb(rbn);
-               rb_erase(rbn, &qp->q.rb_fragments);
-               rbn = rbnext;
-               *nextp = fp;
-               nextp = &fp->next;
-               fp->prev = NULL;
-               memset(&fp->rbnode, 0, sizeof(fp->rbnode));
-               head->data_len += fp->len;
-               head->len += fp->len;
-               if (head->ip_summed != fp->ip_summed)
-                       head->ip_summed = CHECKSUM_NONE;
-               else if (head->ip_summed == CHECKSUM_COMPLETE)
-                       head->csum = csum_add(head->csum, fp->csum);
-               head->truesize += fp->truesize;
+       while (rbn || fp) {
+               /* fp points to the next sk_buff in the current run;
+                * rbn points to the next run.
+                */
+               /* Go through the current run. */
+               while (fp) {
+                       *nextp = fp;
+                       nextp = &fp->next;
+                       fp->prev = NULL;
+                       memset(&fp->rbnode, 0, sizeof(fp->rbnode));
+                       head->data_len += fp->len;
+                       head->len += fp->len;
+                       if (head->ip_summed != fp->ip_summed)
+                               head->ip_summed = CHECKSUM_NONE;
+                       else if (head->ip_summed == CHECKSUM_COMPLETE)
+                               head->csum = csum_add(head->csum, fp->csum);
+                       head->truesize += fp->truesize;
+                       fp = FRAG_CB(fp)->next_frag;
+               }
+               /* Move to the next run. */
+               if (rbn) {
+                       struct rb_node *rbnext = rb_next(rbn);
+
+                       fp = rb_to_skb(rbn);
+                       rb_erase(rbn, &qp->q.rb_fragments);
+                       rbn = rbnext;
+               }
        }
        sub_frag_mem_limit(qp->q.net, head->truesize);
 
@@ -573,6 +651,7 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
        qp->q.fragments = NULL;
        qp->q.rb_fragments = RB_ROOT;
        qp->q.fragments_tail = NULL;
+       qp->q.last_run_head = NULL;
        return 0;
 
 out_nomem:
@@ -654,6 +733,28 @@ struct sk_buff *ip_check_defrag(struct net *net, struct sk_buff *skb, u32 user)
 }
 EXPORT_SYMBOL(ip_check_defrag);
 
+unsigned int inet_frag_rbtree_purge(struct rb_root *root)
+{
+       struct rb_node *p = rb_first(root);
+       unsigned int sum = 0;
+
+       while (p) {
+               struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);
+
+               p = rb_next(p);
+               rb_erase(&skb->rbnode, root);
+               while (skb) {
+                       struct sk_buff *next = FRAG_CB(skb)->next_frag;
+
+                       sum += skb->truesize;
+                       kfree_skb(skb);
+                       skb = next;
+               }
+       }
+       return sum;
+}
+EXPORT_SYMBOL(inet_frag_rbtree_purge);
+
 #ifdef CONFIG_SYSCTL
 static int dist_min;
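
The IPv4 defrag rework above keeps the receive queue as an rbtree keyed by fragment offset, but collapses consecutive in-order fragments into a single "run": only the run head is linked into the tree, later fragments hang off it through FRAG_CB(skb)->next_frag, and frag_run_len on the head records how many bytes the whole run covers, so overlap checks and tree lookups touch one node per run rather than one per fragment. Below is a minimal, self-contained model of that bookkeeping (hypothetical userspace types, no rbtree); it mirrors the shape of ip4_frag_init_run()/ip4_frag_append_to_last_run() but is an illustration, not kernel code.

/* Toy model of the "fragment run" bookkeeping in the hunks above. */
#include <stdio.h>

struct frag {
        int offset;              /* fragment offset */
        int len;                 /* fragment payload length */
        struct frag *next_frag;  /* next fragment in the same run */
        int frag_run_len;        /* total run length, valid on the head only */
};

static struct frag *last_run_head;  /* head of the newest run (the tree node) */
static struct frag *fragments_tail; /* last fragment queued overall */

static void init_run(struct frag *f)
{
        f->next_frag = NULL;
        f->frag_run_len = f->len;
        last_run_head = f;
        fragments_tail = f;
}

static void append_to_last_run(struct frag *f)
{
        f->next_frag = NULL;
        last_run_head->frag_run_len += f->len;
        fragments_tail->next_frag = f;
        fragments_tail = f;
}

int main(void)
{
        struct frag a = { .offset = 0,    .len = 1480 };
        struct frag b = { .offset = 1480, .len = 1480 };

        init_run(&a);
        /* b starts exactly where the previous run ends: extend the run */
        if (b.offset == last_run_head->offset + last_run_head->frag_run_len)
                append_to_last_run(&b);

        printf("run at offset %d covers %d bytes (%d fragments, 1 tree node)\n",
               last_run_head->offset, last_run_head->frag_run_len,
               last_run_head->next_frag ? 2 : 1);
        return 0;
}

For a sender that transmits fragments in order, the whole datagram therefore ends up as a single tree node, which is the point of the change.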
 
index 8b637f9f23a232a137f4a7f2d685a599cc063c1b..ca61e2a659e7453082affd64dcbfed53f24992a7 100644 (file)
@@ -136,7 +136,7 @@ static void dctcp_ce_state_0_to_1(struct sock *sk)
                 */
                if (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER)
                        __tcp_send_ack(sk, ca->prior_rcv_nxt);
-               tcp_enter_quickack_mode(sk, 1);
+               inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW;
        }
 
        ca->prior_rcv_nxt = tp->rcv_nxt;
@@ -157,7 +157,7 @@ static void dctcp_ce_state_1_to_0(struct sock *sk)
                 */
                if (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER)
                        __tcp_send_ack(sk, ca->prior_rcv_nxt);
-               tcp_enter_quickack_mode(sk, 1);
+               inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW;
        }
 
        ca->prior_rcv_nxt = tp->rcv_nxt;
index 715d541b52ddc4f4876dae2bf003863dfbdfe23e..4c2dd9f863f76eb6e55202b19e1acebf4fbac887 100644 (file)
@@ -245,16 +245,16 @@ static void tcp_ecn_queue_cwr(struct tcp_sock *tp)
                tp->ecn_flags |= TCP_ECN_QUEUE_CWR;
 }
 
-static void tcp_ecn_accept_cwr(struct tcp_sock *tp, const struct sk_buff *skb)
+static void tcp_ecn_accept_cwr(struct sock *sk, const struct sk_buff *skb)
 {
        if (tcp_hdr(skb)->cwr) {
-               tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
+               tcp_sk(sk)->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
 
                /* If the sender is telling us it has entered CWR, then its
                 * cwnd may be very low (even just 1 packet), so we should ACK
                 * immediately.
                 */
-               tcp_enter_quickack_mode((struct sock *)tp, 2);
+               inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW;
        }
 }
 
@@ -4703,7 +4703,7 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
        skb_dst_drop(skb);
        __skb_pull(skb, tcp_hdr(skb)->doff * 4);
 
-       tcp_ecn_accept_cwr(tp, skb);
+       tcp_ecn_accept_cwr(sk, skb);
 
        tp->rx_opt.dsack = 0;
 
@@ -4735,11 +4735,11 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
                if (!RB_EMPTY_ROOT(&tp->out_of_order_queue)) {
                        tcp_ofo_queue(sk);
 
-                       /* RFC2581. 4.2. SHOULD send immediate ACK, when
+                       /* RFC5681. 4.2. SHOULD send immediate ACK, when
                         * gap in queue is filled.
                         */
                        if (RB_EMPTY_ROOT(&tp->out_of_order_queue))
-                               inet_csk(sk)->icsk_ack.pingpong = 0;
+                               inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW;
                }
 
                if (tp->rx_opt.num_sacks)
@@ -5179,7 +5179,9 @@ static void __tcp_ack_snd_check(struct sock *sk, int ofo_possible)
            (tp->rcv_nxt - tp->copied_seq < sk->sk_rcvlowat ||
             __tcp_select_window(sk) >= tp->rcv_wnd)) ||
            /* We ACK each frame or... */
-           tcp_in_quickack_mode(sk)) {
+           tcp_in_quickack_mode(sk) ||
+           /* Protocol state mandates a one-time immediate ACK */
+           inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOW) {
 send_now:
                tcp_send_ack(sk);
                return;
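
The DCTCP and TCP-input hunks above replace calls that pushed the receiver into sticky quickack mode with a one-shot ICSK_ACK_NOW bit in icsk_ack.pending, which __tcp_ack_snd_check() now treats as "send exactly one immediate ACK". The standalone sketch below contrasts the two behaviours; every name in it is made up for illustration and only the control flow corresponds to the idea in the patch.

/* Toy contrast of a quickack budget vs. a one-shot ACK_NOW flag. */
#include <stdio.h>
#include <stdbool.h>

#define ACK_NOW 0x1              /* one-shot: force a single immediate ACK */

struct conn {
        unsigned int pending;    /* pending ACK flags */
        int quick;               /* quickack budget: ACK immediately while > 0 */
};

static bool should_ack_now(struct conn *c)
{
        if (c->quick > 0) {              /* sticky mode: one unit per ACK */
                c->quick--;
                return true;
        }
        if (c->pending & ACK_NOW) {      /* one-shot: cleared after one ACK */
                c->pending &= ~ACK_NOW;
                return true;
        }
        return false;                    /* fall back to delayed ACK */
}

int main(void)
{
        struct conn c = { 0, 0 };

        c.pending |= ACK_NOW;            /* e.g. CE state change or CWR seen */
        printf("segment 1: %s\n", should_ack_now(&c) ? "ack now" : "delay");
        printf("segment 2: %s\n", should_ack_now(&c) ? "ack now" : "delay");
        return 0;
}

Run, this prints "ack now" for the first segment and "delay" for the second: a CE transition or an incoming CWR triggers a single immediate ACK rather than switching the receiver into quickack mode for several segments.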
index 020f6e14a7afe130e6b016fa8b55d230e6a04559..673bba31eb1807eb04ee0d0c072333000bde4305 100644 (file)
@@ -832,6 +832,7 @@ static int __net_init inet6_net_init(struct net *net)
 
        net->ipv6.sysctl.bindv6only = 0;
        net->ipv6.sysctl.icmpv6_time = 1*HZ;
+       net->ipv6.sysctl.icmpv6_echo_ignore_all = 0;
        net->ipv6.sysctl.flowlabel_consistency = 1;
        net->ipv6.sysctl.auto_flowlabels = IP6_DEFAULT_AUTO_FLOW_LABELS;
        net->ipv6.sysctl.idgen_retries = 3;
index 7f6b1f81c20074b09f32edcbb056b86c1ac22c0a..c9c53ade55c3cee53ffed639aed8eb0f3e5a0f76 100644 (file)
@@ -794,6 +794,7 @@ void icmpv6_notify(struct sk_buff *skb, u8 type, u8 code, __be32 info)
 
 static int icmpv6_rcv(struct sk_buff *skb)
 {
+       struct net *net = dev_net(skb->dev);
        struct net_device *dev = skb->dev;
        struct inet6_dev *idev = __in6_dev_get(dev);
        const struct in6_addr *saddr, *daddr;
@@ -843,7 +844,8 @@ static int icmpv6_rcv(struct sk_buff *skb)
 
        switch (type) {
        case ICMPV6_ECHO_REQUEST:
-               icmpv6_echo_reply(skb);
+               if (!net->ipv6.sysctl.icmpv6_echo_ignore_all)
+                       icmpv6_echo_reply(skb);
                break;
 
        case ICMPV6_ECHO_REPLY:
@@ -1104,6 +1106,13 @@ static struct ctl_table ipv6_icmp_table_template[] = {
                .mode           = 0644,
                .proc_handler   = proc_dointvec_ms_jiffies,
        },
+       {
+               .procname       = "echo_ignore_all",
+               .data           = &init_net.ipv6.sysctl.icmpv6_echo_ignore_all,
+               .maxlen         = sizeof(int),
+               .mode           = 0644,
+               .proc_handler = proc_dointvec,
+       },
        { },
 };
 
@@ -1115,9 +1124,10 @@ struct ctl_table * __net_init ipv6_icmp_sysctl_init(struct net *net)
                        sizeof(ipv6_icmp_table_template),
                        GFP_KERNEL);
 
-       if (table)
+       if (table) {
                table[0].data = &net->ipv6.sysctl.icmpv6_time;
-
+               table[1].data = &net->ipv6.sysctl.icmpv6_echo_ignore_all;
+       }
        return table;
 }
 #endif
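
The ICMPv6 hunks above add a per-namespace icmpv6_echo_ignore_all flag, check it before answering ICMPV6_ECHO_REQUEST, and expose it through the ipv6_icmp sysctl table as "echo_ignore_all". Assuming the table keeps its usual mount point, the knob shows up as /proc/sys/net/ipv6/icmp/echo_ignore_all (net.ipv6.icmp.echo_ignore_all); under that assumption, a trivial snippet that enables it looks like this:

/* Enable the new knob by writing "1" to its (assumed) /proc path. Needs root. */
#include <stdio.h>

int main(void)
{
        const char *path = "/proc/sys/net/ipv6/icmp/echo_ignore_all";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return 1;
        }
        fputs("1\n", f);        /* 1: ignore all incoming ICMPv6 echo requests */
        fclose(f);
        return 0;
}

Writing 0 restores the default of replying to echo requests, matching the initialisation added in inet6_net_init().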
index fc7dd3a043607e9ba069a64a712b1ab853ef6018..18a3794b0f52e1dd7b8bf5179bcdcfdbd882f158 100644 (file)
@@ -1129,7 +1129,7 @@ static void ip6gre_tnl_link_config_route(struct ip6_tnl *t, int set_mtu,
                        return;
 
                if (rt->dst.dev) {
-                       dev->hard_header_len = rt->dst.dev->hard_header_len +
+                       dev->needed_headroom = rt->dst.dev->hard_header_len +
                                               t_hlen;
 
                        if (set_mtu) {
@@ -1155,7 +1155,7 @@ static int ip6gre_calc_hlen(struct ip6_tnl *tunnel)
        tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen;
 
        t_hlen = tunnel->hlen + sizeof(struct ipv6hdr);
-       tunnel->dev->hard_header_len = LL_MAX_HEADER + t_hlen;
+       tunnel->dev->needed_headroom = LL_MAX_HEADER + t_hlen;
        return t_hlen;
 }
 
@@ -1825,7 +1825,7 @@ static int ip6erspan_calc_hlen(struct ip6_tnl *tunnel)
                       erspan_hdr_len(tunnel->parms.erspan_ver);
 
        t_hlen = tunnel->hlen + sizeof(struct ipv6hdr);
-       tunnel->dev->hard_header_len = LL_MAX_HEADER + t_hlen;
+       tunnel->dev->needed_headroom = LL_MAX_HEADER + t_hlen;
        return t_hlen;
 }
 
index ac6a00bcec717237e8b6b5f6b46de0a0afffc57d..82cdf9020b53921c1ed86cd6122ebe6007ea263c 100644 (file)
@@ -203,44 +203,44 @@ struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth)
 }
 EXPORT_SYMBOL_GPL(l2tp_tunnel_get_nth);
 
-/* Lookup a session. A new reference is held on the returned session. */
-struct l2tp_session *l2tp_session_get(const struct net *net,
-                                     struct l2tp_tunnel *tunnel,
-                                     u32 session_id)
+struct l2tp_session *l2tp_tunnel_get_session(struct l2tp_tunnel *tunnel,
+                                            u32 session_id)
 {
        struct hlist_head *session_list;
        struct l2tp_session *session;
 
-       if (!tunnel) {
-               struct l2tp_net *pn = l2tp_pernet(net);
-
-               session_list = l2tp_session_id_hash_2(pn, session_id);
+       session_list = l2tp_session_id_hash(tunnel, session_id);
 
-               rcu_read_lock_bh();
-               hlist_for_each_entry_rcu(session, session_list, global_hlist) {
-                       if (session->session_id == session_id) {
-                               l2tp_session_inc_refcount(session);
-                               rcu_read_unlock_bh();
+       read_lock_bh(&tunnel->hlist_lock);
+       hlist_for_each_entry(session, session_list, hlist)
+               if (session->session_id == session_id) {
+                       l2tp_session_inc_refcount(session);
+                       read_unlock_bh(&tunnel->hlist_lock);
 
-                               return session;
-                       }
+                       return session;
                }
-               rcu_read_unlock_bh();
+       read_unlock_bh(&tunnel->hlist_lock);
 
-               return NULL;
-       }
+       return NULL;
+}
+EXPORT_SYMBOL_GPL(l2tp_tunnel_get_session);
 
-       session_list = l2tp_session_id_hash(tunnel, session_id);
-       read_lock_bh(&tunnel->hlist_lock);
-       hlist_for_each_entry(session, session_list, hlist) {
+struct l2tp_session *l2tp_session_get(const struct net *net, u32 session_id)
+{
+       struct hlist_head *session_list;
+       struct l2tp_session *session;
+
+       session_list = l2tp_session_id_hash_2(l2tp_pernet(net), session_id);
+
+       rcu_read_lock_bh();
+       hlist_for_each_entry_rcu(session, session_list, global_hlist)
                if (session->session_id == session_id) {
                        l2tp_session_inc_refcount(session);
-                       read_unlock_bh(&tunnel->hlist_lock);
+                       rcu_read_unlock_bh();
 
                        return session;
                }
-       }
-       read_unlock_bh(&tunnel->hlist_lock);
+       rcu_read_unlock_bh();
 
        return NULL;
 }
@@ -872,7 +872,7 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
        }
 
        /* Find the session context */
-       session = l2tp_session_get(tunnel->l2tp_net, tunnel, session_id);
+       session = l2tp_tunnel_get_session(tunnel, session_id);
        if (!session || !session->recv_skb) {
                if (session)
                        l2tp_session_dec_refcount(session);
@@ -1098,7 +1098,7 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len
 
        /* Get routing info from the tunnel socket */
        skb_dst_drop(skb);
-       skb_dst_set(skb, dst_clone(__sk_dst_check(sk, 0)));
+       skb_dst_set(skb, sk_dst_check(sk, 0));
 
        inet = inet_sk(sk);
        fl = &inet->cork.fl;
index 5804065dfbfb0f4a57ef8793e7f79514cf4f11cd..8480a0af973e257ee3903409ea20872db36cc1ed 100644 (file)
 #include <net/dst.h>
 #include <net/sock.h>
 
+#ifdef CONFIG_XFRM
+#include <net/xfrm.h>
+#endif
+
 /* Just some random numbers */
 #define L2TP_TUNNEL_MAGIC      0x42114DDA
 #define L2TP_SESSION_MAGIC     0x0C04EB7D
@@ -192,12 +196,12 @@ static inline void *l2tp_session_priv(struct l2tp_session *session)
 
 struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id);
 struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth);
+struct l2tp_session *l2tp_tunnel_get_session(struct l2tp_tunnel *tunnel,
+                                            u32 session_id);
 
 void l2tp_tunnel_free(struct l2tp_tunnel *tunnel);
 
-struct l2tp_session *l2tp_session_get(const struct net *net,
-                                     struct l2tp_tunnel *tunnel,
-                                     u32 session_id);
+struct l2tp_session *l2tp_session_get(const struct net *net, u32 session_id);
 struct l2tp_session *l2tp_session_get_nth(struct l2tp_tunnel *tunnel, int nth);
 struct l2tp_session *l2tp_session_get_by_ifname(const struct net *net,
                                                const char *ifname);
@@ -284,6 +288,21 @@ static inline u32 l2tp_tunnel_dst_mtu(const struct l2tp_tunnel *tunnel)
        return mtu;
 }
 
+#ifdef CONFIG_XFRM
+static inline bool l2tp_tunnel_uses_xfrm(const struct l2tp_tunnel *tunnel)
+{
+       struct sock *sk = tunnel->sock;
+
+       return sk && (rcu_access_pointer(sk->sk_policy[0]) ||
+                     rcu_access_pointer(sk->sk_policy[1]));
+}
+#else
+static inline bool l2tp_tunnel_uses_xfrm(const struct l2tp_tunnel *tunnel)
+{
+       return false;
+}
+#endif
+
 #define l2tp_printk(ptr, type, func, fmt, ...)                         \
 do {                                                                   \
        if (((ptr)->debug) & (type))                                    \
index 0bc39cc20a3fcf8f5e532c137145747f40a7644c..35f6f86d4dcce898f0deacbeba695cfe438cdd91 100644 (file)
@@ -144,7 +144,7 @@ static int l2tp_ip_recv(struct sk_buff *skb)
        }
 
        /* Ok, this is a data packet. Lookup the session. */
-       session = l2tp_session_get(net, NULL, session_id);
+       session = l2tp_session_get(net, session_id);
        if (!session)
                goto discard;
 
index 42f828cf62fbb38fea121c270a57dd9d55a95cde..237f1a4a0b0c8b7fdc76cd67e7c682dee7bdaf79 100644 (file)
@@ -157,7 +157,7 @@ static int l2tp_ip6_recv(struct sk_buff *skb)
        }
 
        /* Ok, this is a data packet. Lookup the session. */
-       session = l2tp_session_get(net, NULL, session_id);
+       session = l2tp_session_get(net, session_id);
        if (!session)
                goto discard;
 
index 2e1e92651545fd6cdc2b3ddbb4e75ea886e9452f..edbd5d1fbcde63fd9de6fef070b2787fcd5d993c 100644 (file)
@@ -66,7 +66,7 @@ static struct l2tp_session *l2tp_nl_session_get(struct genl_info *info)
                session_id = nla_get_u32(info->attrs[L2TP_ATTR_SESSION_ID]);
                tunnel = l2tp_tunnel_get(net, tunnel_id);
                if (tunnel) {
-                       session = l2tp_session_get(net, tunnel, session_id);
+                       session = l2tp_tunnel_get_session(tunnel, session_id);
                        l2tp_tunnel_dec_refcount(tunnel);
                }
        }
@@ -627,7 +627,7 @@ static int l2tp_nl_cmd_session_create(struct sk_buff *skb, struct genl_info *inf
                                                           &cfg);
 
        if (ret >= 0) {
-               session = l2tp_session_get(net, tunnel, session_id);
+               session = l2tp_tunnel_get_session(tunnel, session_id);
                if (session) {
                        ret = l2tp_session_notify(&l2tp_nl_family, info, session,
                                                  L2TP_CMD_SESSION_CREATE);
@@ -710,9 +710,6 @@ static int l2tp_nl_session_send(struct sk_buff *skb, u32 portid, u32 seq, int fl
        void *hdr;
        struct nlattr *nest;
        struct l2tp_tunnel *tunnel = session->tunnel;
-       struct sock *sk = NULL;
-
-       sk = tunnel->sock;
 
        hdr = genlmsg_put(skb, portid, seq, &l2tp_nl_family, flags, cmd);
        if (!hdr)
@@ -738,10 +735,8 @@ static int l2tp_nl_session_send(struct sk_buff *skb, u32 portid, u32 seq, int fl
            nla_put_u8(skb, L2TP_ATTR_RECV_SEQ, session->recv_seq) ||
            nla_put_u8(skb, L2TP_ATTR_SEND_SEQ, session->send_seq) ||
            nla_put_u8(skb, L2TP_ATTR_LNS_MODE, session->lns_mode) ||
-#ifdef CONFIG_XFRM
-           (((sk) && (sk->sk_policy[0] || sk->sk_policy[1])) &&
+           (l2tp_tunnel_uses_xfrm(tunnel) &&
             nla_put_u8(skb, L2TP_ATTR_USING_IPSEC, 1)) ||
-#endif
            (session->reorder_timeout &&
             nla_put_msecs(skb, L2TP_ATTR_RECV_TIMEOUT,
                           session->reorder_timeout, L2TP_ATTR_PAD)))
index 6e2c8e7595e0ff0b1d50a9508ed8166bef703f9f..62f2d3f1e43147128e595fb601223c3683ed3220 100644 (file)
@@ -95,7 +95,6 @@
 #include <net/netns/generic.h>
 #include <net/ip.h>
 #include <net/udp.h>
-#include <net/xfrm.h>
 #include <net/inet_common.h>
 
 #include <asm/byteorder.h>
@@ -758,7 +757,7 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
        if (tunnel->peer_tunnel_id == 0)
                tunnel->peer_tunnel_id = info.peer_tunnel_id;
 
-       session = l2tp_session_get(sock_net(sk), tunnel, info.session_id);
+       session = l2tp_tunnel_get_session(tunnel, info.session_id);
        if (session) {
                drop_refcnt = true;
 
@@ -1027,8 +1026,10 @@ static int pppol2tp_getname(struct socket *sock, struct sockaddr *uaddr,
  ****************************************************************************/
 
 static void pppol2tp_copy_stats(struct pppol2tp_ioc_stats *dest,
-                               struct l2tp_stats *stats)
+                               const struct l2tp_stats *stats)
 {
+       memset(dest, 0, sizeof(*dest));
+
        dest->tx_packets = atomic_long_read(&stats->tx_packets);
        dest->tx_bytes = atomic_long_read(&stats->tx_bytes);
        dest->tx_errors = atomic_long_read(&stats->tx_errors);
@@ -1039,188 +1040,107 @@ static void pppol2tp_copy_stats(struct pppol2tp_ioc_stats *dest,
        dest->rx_errors = atomic_long_read(&stats->rx_errors);
 }
 
-/* Session ioctl helper.
- */
-static int pppol2tp_session_ioctl(struct l2tp_session *session,
-                                 unsigned int cmd, unsigned long arg)
+static int pppol2tp_tunnel_copy_stats(struct pppol2tp_ioc_stats *stats,
+                                     struct l2tp_tunnel *tunnel)
 {
-       int err = 0;
-       struct sock *sk;
-       int val = (int) arg;
-       struct l2tp_tunnel *tunnel = session->tunnel;
-       struct pppol2tp_ioc_stats stats;
+       struct l2tp_session *session;
 
-       l2tp_dbg(session, L2TP_MSG_CONTROL,
-                "%s: pppol2tp_session_ioctl(cmd=%#x, arg=%#lx)\n",
-                session->name, cmd, arg);
+       if (!stats->session_id) {
+               pppol2tp_copy_stats(stats, &tunnel->stats);
+               return 0;
+       }
 
-       sk = pppol2tp_session_get_sock(session);
-       if (!sk)
+       /* If session_id is set, search the corresponding session in the
+        * context of this tunnel and record the session's statistics.
+        */
+       session = l2tp_tunnel_get_session(tunnel, stats->session_id);
+       if (!session)
                return -EBADR;
 
-       switch (cmd) {
-       case PPPIOCGMRU:
-       case PPPIOCGFLAGS:
-               err = -EFAULT;
-               if (put_user(0, (int __user *)arg))
-                       break;
-               err = 0;
-               break;
-
-       case PPPIOCSMRU:
-       case PPPIOCSFLAGS:
-               err = -EFAULT;
-               if (get_user(val, (int __user *)arg))
-                       break;
-               err = 0;
-               break;
-
-       case PPPIOCGL2TPSTATS:
-               err = -ENXIO;
-               if (!(sk->sk_state & PPPOX_CONNECTED))
-                       break;
-
-               memset(&stats, 0, sizeof(stats));
-               stats.tunnel_id = tunnel->tunnel_id;
-               stats.session_id = session->session_id;
-               pppol2tp_copy_stats(&stats, &session->stats);
-               if (copy_to_user((void __user *) arg, &stats,
-                                sizeof(stats)))
-                       break;
-               l2tp_info(session, L2TP_MSG_CONTROL, "%s: get L2TP stats\n",
-                         session->name);
-               err = 0;
-               break;
-
-       default:
-               err = -ENOSYS;
-               break;
+       if (session->pwtype != L2TP_PWTYPE_PPP) {
+               l2tp_session_dec_refcount(session);
+               return -EBADR;
        }
 
-       sock_put(sk);
+       pppol2tp_copy_stats(stats, &session->stats);
+       l2tp_session_dec_refcount(session);
 
-       return err;
+       return 0;
 }
 
-/* Tunnel ioctl helper.
- *
- * Note the special handling for PPPIOCGL2TPSTATS below. If the ioctl data
- * specifies a session_id, the session ioctl handler is called. This allows an
- * application to retrieve session stats via a tunnel socket.
- */
-static int pppol2tp_tunnel_ioctl(struct l2tp_tunnel *tunnel,
-                                unsigned int cmd, unsigned long arg)
+static int pppol2tp_ioctl(struct socket *sock, unsigned int cmd,
+                         unsigned long arg)
 {
-       int err = 0;
-       struct sock *sk;
        struct pppol2tp_ioc_stats stats;
-
-       l2tp_dbg(tunnel, L2TP_MSG_CONTROL,
-                "%s: pppol2tp_tunnel_ioctl(cmd=%#x, arg=%#lx)\n",
-                tunnel->name, cmd, arg);
-
-       sk = tunnel->sock;
-       sock_hold(sk);
+       struct l2tp_session *session;
+       int val;
 
        switch (cmd) {
-       case PPPIOCGL2TPSTATS:
-               err = -ENXIO;
-               if (!(sk->sk_state & PPPOX_CONNECTED))
-                       break;
-
-               if (copy_from_user(&stats, (void __user *) arg,
-                                  sizeof(stats))) {
-                       err = -EFAULT;
-                       break;
-               }
-               if (stats.session_id != 0) {
-                       /* resend to session ioctl handler */
-                       struct l2tp_session *session =
-                               l2tp_session_get(sock_net(sk), tunnel,
-                                                stats.session_id);
-
-                       if (!session) {
-                               err = -EBADR;
-                               break;
-                       }
-                       if (session->pwtype != L2TP_PWTYPE_PPP) {
-                               l2tp_session_dec_refcount(session);
-                               err = -EBADR;
-                               break;
-                       }
+       case PPPIOCGMRU:
+       case PPPIOCGFLAGS:
+               session = sock->sk->sk_user_data;
+               if (!session)
+                       return -ENOTCONN;
 
-                       err = pppol2tp_session_ioctl(session, cmd, arg);
-                       l2tp_session_dec_refcount(session);
-                       break;
-               }
-#ifdef CONFIG_XFRM
-               stats.using_ipsec = (sk->sk_policy[0] || sk->sk_policy[1]) ? 1 : 0;
-#endif
-               pppol2tp_copy_stats(&stats, &tunnel->stats);
-               if (copy_to_user((void __user *) arg, &stats, sizeof(stats))) {
-                       err = -EFAULT;
-                       break;
-               }
-               l2tp_info(tunnel, L2TP_MSG_CONTROL, "%s: get L2TP stats\n",
-                         tunnel->name);
-               err = 0;
-               break;
+               /* Not defined for tunnels */
+               if (!session->session_id && !session->peer_session_id)
+                       return -ENOSYS;
 
-       default:
-               err = -ENOSYS;
+               if (put_user(0, (int __user *)arg))
+                       return -EFAULT;
                break;
-       }
-
-       sock_put(sk);
-
-       return err;
-}
 
-/* Main ioctl() handler.
- * Dispatch to tunnel or session helpers depending on the socket.
- */
-static int pppol2tp_ioctl(struct socket *sock, unsigned int cmd,
-                         unsigned long arg)
-{
-       struct sock *sk = sock->sk;
-       struct l2tp_session *session;
-       struct l2tp_tunnel *tunnel;
-       int err;
+       case PPPIOCSMRU:
+       case PPPIOCSFLAGS:
+               session = sock->sk->sk_user_data;
+               if (!session)
+                       return -ENOTCONN;
 
-       if (!sk)
-               return 0;
+               /* Not defined for tunnels */
+               if (!session->session_id && !session->peer_session_id)
+                       return -ENOSYS;
 
-       err = -EBADF;
-       if (sock_flag(sk, SOCK_DEAD) != 0)
-               goto end;
+               if (get_user(val, (int __user *)arg))
+                       return -EFAULT;
+               break;
 
-       err = -ENOTCONN;
-       if ((sk->sk_user_data == NULL) ||
-           (!(sk->sk_state & (PPPOX_CONNECTED | PPPOX_BOUND))))
-               goto end;
+       case PPPIOCGL2TPSTATS:
+               session = sock->sk->sk_user_data;
+               if (!session)
+                       return -ENOTCONN;
+
+               /* Session 0 represents the parent tunnel */
+               if (!session->session_id && !session->peer_session_id) {
+                       u32 session_id;
+                       int err;
+
+                       if (copy_from_user(&stats, (void __user *)arg,
+                                          sizeof(stats)))
+                               return -EFAULT;
+
+                       session_id = stats.session_id;
+                       err = pppol2tp_tunnel_copy_stats(&stats,
+                                                        session->tunnel);
+                       if (err < 0)
+                               return err;
+
+                       stats.session_id = session_id;
+               } else {
+                       pppol2tp_copy_stats(&stats, &session->stats);
+                       stats.session_id = session->session_id;
+               }
+               stats.tunnel_id = session->tunnel->tunnel_id;
+               stats.using_ipsec = l2tp_tunnel_uses_xfrm(session->tunnel);
 
-       /* Get session context from the socket */
-       err = -EBADF;
-       session = pppol2tp_sock_to_session(sk);
-       if (session == NULL)
-               goto end;
+               if (copy_to_user((void __user *)arg, &stats, sizeof(stats)))
+                       return -EFAULT;
+               break;
 
-       /* Special case: if session's session_id is zero, treat ioctl as a
-        * tunnel ioctl
-        */
-       if ((session->session_id == 0) &&
-           (session->peer_session_id == 0)) {
-               tunnel = session->tunnel;
-               err = pppol2tp_tunnel_ioctl(tunnel, cmd, arg);
-               goto end_put_sess;
+       default:
+               return -ENOIOCTLCMD;
        }
 
-       err = pppol2tp_session_ioctl(session, cmd, arg);
-
-end_put_sess:
-       sock_put(sk);
-end:
-       return err;
+       return 0;
 }
 
 /*****************************************************************************
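
With the reworked ioctl handler above, PPPIOCGL2TPSTATS on a tunnel-level PPPoL2TP socket returns tunnel statistics when session_id is left at 0 and per-session statistics when a session_id is filled in, with using_ipsec now derived from l2tp_tunnel_uses_xfrm(). A userspace sketch of that call follows; it assumes the uapi definitions live in <linux/if_pppol2tp.h> and <linux/ppp-ioctl.h> and that tunnel_fd is an already connected PPPoL2TP socket (setting one up is out of scope here).

/* Query L2TP statistics through a tunnel-level PPPoL2TP socket. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/if_pppol2tp.h>
#include <linux/ppp-ioctl.h>

static int dump_l2tp_stats(int tunnel_fd, unsigned short session_id)
{
        struct pppol2tp_ioc_stats stats;

        memset(&stats, 0, sizeof(stats));
        stats.session_id = session_id;   /* 0 == stats for the tunnel itself */

        if (ioctl(tunnel_fd, PPPIOCGL2TPSTATS, &stats) < 0) {
                perror("PPPIOCGL2TPSTATS");
                return -1;
        }

        printf("tunnel %u session %u ipsec=%u tx=%llu rx=%llu\n",
               stats.tunnel_id, stats.session_id, stats.using_ipsec,
               (unsigned long long)stats.tx_packets,
               (unsigned long long)stats.rx_packets);
        return 0;
}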
index 46aee65f339bea3e39e6fcdd94d82d4646b991b4..91fbd183da2dcbfa5ea22bdb4c92938ef89b458c 100644 (file)
 #include <net/netfilter/nf_conntrack_extend.h>
 #include <net/netfilter/nf_conntrack_timeout.h>
 
-struct ctnl_timeout *
+struct nf_ct_timeout *
 (*nf_ct_timeout_find_get_hook)(struct net *net, const char *name) __read_mostly;
 EXPORT_SYMBOL_GPL(nf_ct_timeout_find_get_hook);
 
-void (*nf_ct_timeout_put_hook)(struct ctnl_timeout *timeout) __read_mostly;
+void (*nf_ct_timeout_put_hook)(struct nf_ct_timeout *timeout) __read_mostly;
 EXPORT_SYMBOL_GPL(nf_ct_timeout_put_hook);
 
+static int untimeout(struct nf_conn *ct, void *timeout)
+{
+       struct nf_conn_timeout *timeout_ext = nf_ct_timeout_find(ct);
+
+       if (timeout_ext && (!timeout || timeout_ext->timeout == timeout))
+               RCU_INIT_POINTER(timeout_ext->timeout, NULL);
+
+       /* We are not intended to delete this conntrack. */
+       return 0;
+}
+
+void nf_ct_untimeout(struct net *net, struct nf_ct_timeout *timeout)
+{
+       nf_ct_iterate_cleanup_net(net, untimeout, timeout, 0, 0);
+}
+EXPORT_SYMBOL_GPL(nf_ct_untimeout);
+
 static const struct nf_ct_ext_type timeout_extend = {
        .len    = sizeof(struct nf_conn_timeout),
        .align  = __alignof__(struct nf_conn_timeout),
index 4199e5300575dd4afea760079cf72fa13464425b..d46a236cdf31b612f808b33867cd5ee68e70af33 100644 (file)
@@ -113,13 +113,13 @@ static int cttimeout_new_timeout(struct net *net, struct sock *ctnl,
                        /* You cannot replace one timeout policy by another of
                         * different kind, sorry.
                         */
-                       if (matching->l3num != l3num ||
-                           matching->l4proto->l4proto != l4num)
+                       if (matching->timeout.l3num != l3num ||
+                           matching->timeout.l4proto->l4proto != l4num)
                                return -EINVAL;
 
-                       return ctnl_timeout_parse_policy(&matching->data,
-                                                        matching->l4proto, net,
-                                                        cda[CTA_TIMEOUT_DATA]);
+                       return ctnl_timeout_parse_policy(&matching->timeout.data,
+                                                        matching->timeout.l4proto,
+                                                        net, cda[CTA_TIMEOUT_DATA]);
                }
 
                return -EBUSY;
@@ -140,14 +140,14 @@ static int cttimeout_new_timeout(struct net *net, struct sock *ctnl,
                goto err_proto_put;
        }
 
-       ret = ctnl_timeout_parse_policy(&timeout->data, l4proto, net,
+       ret = ctnl_timeout_parse_policy(&timeout->timeout.data, l4proto, net,
                                        cda[CTA_TIMEOUT_DATA]);
        if (ret < 0)
                goto err;
 
        strcpy(timeout->name, nla_data(cda[CTA_TIMEOUT_NAME]));
-       timeout->l3num = l3num;
-       timeout->l4proto = l4proto;
+       timeout->timeout.l3num = l3num;
+       timeout->timeout.l4proto = l4proto;
        refcount_set(&timeout->refcnt, 1);
        list_add_tail_rcu(&timeout->head, &net->nfct_timeout_list);
 
@@ -166,7 +166,7 @@ ctnl_timeout_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
        struct nlmsghdr *nlh;
        struct nfgenmsg *nfmsg;
        unsigned int flags = portid ? NLM_F_MULTI : 0;
-       const struct nf_conntrack_l4proto *l4proto = timeout->l4proto;
+       const struct nf_conntrack_l4proto *l4proto = timeout->timeout.l4proto;
 
        event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_TIMEOUT, event);
        nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
@@ -179,8 +179,9 @@ ctnl_timeout_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
        nfmsg->res_id = 0;
 
        if (nla_put_string(skb, CTA_TIMEOUT_NAME, timeout->name) ||
-           nla_put_be16(skb, CTA_TIMEOUT_L3PROTO, htons(timeout->l3num)) ||
-           nla_put_u8(skb, CTA_TIMEOUT_L4PROTO, timeout->l4proto->l4proto) ||
+           nla_put_be16(skb, CTA_TIMEOUT_L3PROTO,
+                        htons(timeout->timeout.l3num)) ||
+           nla_put_u8(skb, CTA_TIMEOUT_L4PROTO, l4proto->l4proto) ||
            nla_put_be32(skb, CTA_TIMEOUT_USE,
                         htonl(refcount_read(&timeout->refcnt))))
                goto nla_put_failure;
@@ -194,7 +195,8 @@ ctnl_timeout_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
                if (!nest_parms)
                        goto nla_put_failure;
 
-               ret = l4proto->ctnl_timeout.obj_to_nlattr(skb, &timeout->data);
+               ret = l4proto->ctnl_timeout.obj_to_nlattr(skb,
+                                                       &timeout->timeout.data);
                if (ret < 0)
                        goto nla_put_failure;
 
@@ -297,22 +299,6 @@ static int cttimeout_get_timeout(struct net *net, struct sock *ctnl,
        return ret;
 }
 
-static int untimeout(struct nf_conn *ct, void *timeout)
-{
-       struct nf_conn_timeout *timeout_ext = nf_ct_timeout_find(ct);
-
-       if (timeout_ext && (!timeout || timeout_ext->timeout == timeout))
-               RCU_INIT_POINTER(timeout_ext->timeout, NULL);
-
-       /* We are not intended to delete this conntrack. */
-       return 0;
-}
-
-static void ctnl_untimeout(struct net *net, struct ctnl_timeout *timeout)
-{
-       nf_ct_iterate_cleanup_net(net, untimeout, timeout, 0, 0);
-}
-
 /* try to delete object, fail if it is still in use. */
 static int ctnl_timeout_try_del(struct net *net, struct ctnl_timeout *timeout)
 {
@@ -324,8 +310,8 @@ static int ctnl_timeout_try_del(struct net *net, struct ctnl_timeout *timeout)
        if (refcount_dec_if_one(&timeout->refcnt)) {
                /* We are protected by nfnl mutex. */
                list_del_rcu(&timeout->head);
-               nf_ct_l4proto_put(timeout->l4proto);
-               ctnl_untimeout(net, timeout);
+               nf_ct_l4proto_put(timeout->timeout.l4proto);
+               nf_ct_untimeout(net, &timeout->timeout);
                kfree_rcu(timeout, rcu_head);
        } else {
                ret = -EBUSY;
@@ -526,8 +512,11 @@ ctnl_timeout_find_get(struct net *net, const char *name)
        return matching;
 }
 
-static void ctnl_timeout_put(struct ctnl_timeout *timeout)
+static void ctnl_timeout_put(struct nf_ct_timeout *t)
 {
+       struct ctnl_timeout *timeout =
+               container_of(t, struct ctnl_timeout, timeout);
+
        if (refcount_dec_and_test(&timeout->refcnt))
                kfree_rcu(timeout, rcu_head);
 
@@ -573,11 +562,11 @@ static void __net_exit cttimeout_net_exit(struct net *net)
        struct ctnl_timeout *cur, *tmp;
 
        nf_ct_unconfirmed_destroy(net);
-       ctnl_untimeout(net, NULL);
+       nf_ct_untimeout(net, NULL);
 
        list_for_each_entry_safe(cur, tmp, &net->nfct_timeout_list, head) {
                list_del_rcu(&cur->head);
-               nf_ct_l4proto_put(cur->l4proto);
+               nf_ct_l4proto_put(cur->timeout.l4proto);
 
                if (refcount_dec_and_test(&cur->refcnt))
                        kfree_rcu(cur, rcu_head);
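
The cttimeout changes above follow from embedding the new generic struct nf_ct_timeout inside struct ctnl_timeout: the shared fields (l3num, l4proto, data) move into the embedded member, and ctnl_timeout_put() recovers the outer object from the inner one with container_of(). A self-contained illustration of that embedding pattern, using simplified stand-in types rather than the real conntrack structures, is:

/* Userspace illustration of the container_of() embedding used above. */
#include <stdio.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct generic_timeout {                 /* stands in for struct nf_ct_timeout */
        unsigned short l3num;
};

struct named_timeout {                   /* stands in for struct ctnl_timeout */
        int refcnt;
        char name[32];
        struct generic_timeout timeout;  /* embedded generic part */
};

static void put_timeout(struct generic_timeout *t)
{
        /* recover the outer object, as ctnl_timeout_put() now does */
        struct named_timeout *nt = container_of(t, struct named_timeout, timeout);

        nt->refcnt--;
        printf("%s: refcnt now %d\n", nt->name, nt->refcnt);
}

int main(void)
{
        struct named_timeout nt = { .refcnt = 1, .name = "tcp-policy" };

        nt.timeout.l3num = 2;            /* e.g. AF_INET */
        put_timeout(&nt.timeout);
        return 0;
}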
index f9dba62c450f11a93cbda8aa7998c8f505b2da76..00db27dfd2ff72a9f6bf67354168711e40575fe0 100644 (file)
@@ -271,7 +271,7 @@ const char *nf_osf_find(const struct sk_buff *skb,
 
        tcp = nf_osf_hdr_ctx_init(&ctx, skb, ip, opts);
        if (!tcp)
-               return false;
+               return NULL;
 
        list_for_each_entry_rcu(kf, &nf_osf_fingers[ctx.df], finger_entry) {
                f = &kf->finger;
index 3bc82ee5464d164d69c03f25e43f0833a5cae3f4..4855d4ce1c8f8529ddbca1e61b86b1e88276f30a 100644 (file)
@@ -22,6 +22,8 @@
 #include <net/netfilter/nf_conntrack_helper.h>
 #include <net/netfilter/nf_conntrack_ecache.h>
 #include <net/netfilter/nf_conntrack_labels.h>
+#include <net/netfilter/nf_conntrack_timeout.h>
+#include <net/netfilter/nf_conntrack_l4proto.h>
 
 struct nft_ct {
        enum nft_ct_keys        key:8;
@@ -765,6 +767,194 @@ static struct nft_expr_type nft_notrack_type __read_mostly = {
        .owner          = THIS_MODULE,
 };
 
+#ifdef CONFIG_NF_CONNTRACK_TIMEOUT
+static int
+nft_ct_timeout_parse_policy(void *timeouts,
+                           const struct nf_conntrack_l4proto *l4proto,
+                           struct net *net, const struct nlattr *attr)
+{
+       struct nlattr **tb;
+       int ret = 0;
+
+       if (!l4proto->ctnl_timeout.nlattr_to_obj)
+               return 0;
+
+       tb = kcalloc(l4proto->ctnl_timeout.nlattr_max + 1, sizeof(*tb),
+                    GFP_KERNEL);
+
+       if (!tb)
+               return -ENOMEM;
+
+       ret = nla_parse_nested(tb, l4proto->ctnl_timeout.nlattr_max,
+                              attr, l4proto->ctnl_timeout.nla_policy,
+                              NULL);
+       if (ret < 0)
+               goto err;
+
+       ret = l4proto->ctnl_timeout.nlattr_to_obj(tb, net, timeouts);
+
+err:
+       kfree(tb);
+       return ret;
+}
+
+struct nft_ct_timeout_obj {
+       struct nf_conn          *tmpl;
+       u8                      l4proto;
+};
+
+static void nft_ct_timeout_obj_eval(struct nft_object *obj,
+                                   struct nft_regs *regs,
+                                   const struct nft_pktinfo *pkt)
+{
+       const struct nft_ct_timeout_obj *priv = nft_obj_data(obj);
+       struct nf_conn *ct = (struct nf_conn *)skb_nfct(pkt->skb);
+       struct sk_buff *skb = pkt->skb;
+
+       if (ct ||
+           priv->l4proto != pkt->tprot)
+               return;
+
+       nf_ct_set(skb, priv->tmpl, IP_CT_NEW);
+}
+
+static int nft_ct_timeout_obj_init(const struct nft_ctx *ctx,
+                                  const struct nlattr * const tb[],
+                                  struct nft_object *obj)
+{
+       const struct nf_conntrack_zone *zone = &nf_ct_zone_dflt;
+       struct nft_ct_timeout_obj *priv = nft_obj_data(obj);
+       const struct nf_conntrack_l4proto *l4proto;
+       struct nf_conn_timeout *timeout_ext;
+       struct nf_ct_timeout *timeout;
+       int l3num = ctx->family;
+       struct nf_conn *tmpl;
+       __u8 l4num;
+       int ret;
+
+       if (!tb[NFTA_CT_TIMEOUT_L3PROTO] ||
+           !tb[NFTA_CT_TIMEOUT_L4PROTO] ||
+           !tb[NFTA_CT_TIMEOUT_DATA])
+               return -EINVAL;
+
+       l3num = ntohs(nla_get_be16(tb[NFTA_CT_TIMEOUT_L3PROTO]));
+       l4num = nla_get_u8(tb[NFTA_CT_TIMEOUT_L4PROTO]);
+       priv->l4proto = l4num;
+
+       l4proto = nf_ct_l4proto_find_get(l3num, l4num);
+
+       if (l4proto->l4proto != l4num) {
+               ret = -EOPNOTSUPP;
+               goto err_proto_put;
+       }
+
+       timeout = kzalloc(sizeof(struct nf_ct_timeout) +
+                         l4proto->ctnl_timeout.obj_size, GFP_KERNEL);
+       if (timeout == NULL) {
+               ret = -ENOMEM;
+               goto err_proto_put;
+       }
+
+       ret = nft_ct_timeout_parse_policy(&timeout->data, l4proto, ctx->net,
+                                         tb[NFTA_CT_TIMEOUT_DATA]);
+       if (ret < 0)
+               goto err_free_timeout;
+
+       timeout->l3num = l3num;
+       timeout->l4proto = l4proto;
+       tmpl = nf_ct_tmpl_alloc(ctx->net, zone, GFP_ATOMIC);
+       if (!tmpl) {
+               ret = -ENOMEM;
+               goto err_free_timeout;
+       }
+
+       timeout_ext = nf_ct_timeout_ext_add(tmpl, timeout, GFP_ATOMIC);
+       if (!timeout_ext) {
+               ret = -ENOMEM;
+               goto err_free_tmpl;
+       }
+
+       ret = nf_ct_netns_get(ctx->net, ctx->family);
+       if (ret < 0)
+               goto err_free_tmpl;
+
+       priv->tmpl = tmpl;
+
+       return 0;
+
+err_free_tmpl:
+       nf_ct_tmpl_free(tmpl);
+err_free_timeout:
+       kfree(timeout);
+err_proto_put:
+       nf_ct_l4proto_put(l4proto);
+       return ret;
+}
+
+static void nft_ct_timeout_obj_destroy(const struct nft_ctx *ctx,
+                                      struct nft_object *obj)
+{
+       struct nft_ct_timeout_obj *priv = nft_obj_data(obj);
+       struct nf_conn_timeout *t = nf_ct_timeout_find(priv->tmpl);
+       struct nf_ct_timeout *timeout;
+
+       timeout = rcu_dereference_raw(t->timeout);
+       nf_ct_untimeout(ctx->net, timeout);
+       nf_ct_l4proto_put(timeout->l4proto);
+       nf_ct_netns_put(ctx->net, ctx->family);
+       nf_ct_tmpl_free(priv->tmpl);
+}
+
+static int nft_ct_timeout_obj_dump(struct sk_buff *skb,
+                                  struct nft_object *obj, bool reset)
+{
+       const struct nft_ct_timeout_obj *priv = nft_obj_data(obj);
+       const struct nf_conn_timeout *t = nf_ct_timeout_find(priv->tmpl);
+       const struct nf_ct_timeout *timeout = rcu_dereference_raw(t->timeout);
+       struct nlattr *nest_params;
+       int ret;
+
+       if (nla_put_u8(skb, NFTA_CT_TIMEOUT_L4PROTO, timeout->l4proto->l4proto) ||
+           nla_put_be16(skb, NFTA_CT_TIMEOUT_L3PROTO, htons(timeout->l3num)))
+               return -1;
+
+       nest_params = nla_nest_start(skb, NFTA_CT_TIMEOUT_DATA | NLA_F_NESTED);
+       if (!nest_params)
+               return -1;
+
+       ret = timeout->l4proto->ctnl_timeout.obj_to_nlattr(skb, &timeout->data);
+       if (ret < 0)
+               return -1;
+       nla_nest_end(skb, nest_params);
+       return 0;
+}
+
+static const struct nla_policy nft_ct_timeout_policy[NFTA_CT_TIMEOUT_MAX + 1] = {
+       [NFTA_CT_TIMEOUT_L3PROTO] = {.type = NLA_U16 },
+       [NFTA_CT_TIMEOUT_L4PROTO] = {.type = NLA_U8 },
+       [NFTA_CT_TIMEOUT_DATA]    = {.type = NLA_NESTED },
+};
+
+static struct nft_object_type nft_ct_timeout_obj_type;
+
+static const struct nft_object_ops nft_ct_timeout_obj_ops = {
+       .type           = &nft_ct_timeout_obj_type,
+       .size           = sizeof(struct nft_ct_timeout_obj),
+       .eval           = nft_ct_timeout_obj_eval,
+       .init           = nft_ct_timeout_obj_init,
+       .destroy        = nft_ct_timeout_obj_destroy,
+       .dump           = nft_ct_timeout_obj_dump,
+};
+
+static struct nft_object_type nft_ct_timeout_obj_type __read_mostly = {
+       .type           = NFT_OBJECT_CT_TIMEOUT,
+       .ops            = &nft_ct_timeout_obj_ops,
+       .maxattr        = NFTA_CT_TIMEOUT_MAX,
+       .policy         = nft_ct_timeout_policy,
+       .owner          = THIS_MODULE,
+};
+#endif /* CONFIG_NF_CONNTRACK_TIMEOUT */
+
 static int nft_ct_helper_obj_init(const struct nft_ctx *ctx,
                                  const struct nlattr * const tb[],
                                  struct nft_object *obj)
@@ -773,6 +963,7 @@ static int nft_ct_helper_obj_init(const struct nft_ctx *ctx,
        struct nf_conntrack_helper *help4, *help6;
        char name[NF_CT_HELPER_NAME_LEN];
        int family = ctx->family;
+       int err;
 
        if (!tb[NFTA_CT_HELPER_NAME] || !tb[NFTA_CT_HELPER_L4PROTO])
                return -EINVAL;
@@ -823,7 +1014,18 @@ static int nft_ct_helper_obj_init(const struct nft_ctx *ctx,
        priv->helper4 = help4;
        priv->helper6 = help6;
 
+       err = nf_ct_netns_get(ctx->net, ctx->family);
+       if (err < 0)
+               goto err_put_helper;
+
        return 0;
+
+err_put_helper:
+       if (priv->helper4)
+               nf_conntrack_helper_put(priv->helper4);
+       if (priv->helper6)
+               nf_conntrack_helper_put(priv->helper6);
+       return err;
 }
 
 static void nft_ct_helper_obj_destroy(const struct nft_ctx *ctx,
@@ -835,6 +1037,8 @@ static void nft_ct_helper_obj_destroy(const struct nft_ctx *ctx,
                nf_conntrack_helper_put(priv->helper4);
        if (priv->helper6)
                nf_conntrack_helper_put(priv->helper6);
+
+       nf_ct_netns_put(ctx->net, ctx->family);
 }
 
 static void nft_ct_helper_obj_eval(struct nft_object *obj,
@@ -949,9 +1153,17 @@ static int __init nft_ct_module_init(void)
        err = nft_register_obj(&nft_ct_helper_obj_type);
        if (err < 0)
                goto err2;
-
+#ifdef CONFIG_NF_CONNTRACK_TIMEOUT
+       err = nft_register_obj(&nft_ct_timeout_obj_type);
+       if (err < 0)
+               goto err3;
+#endif
        return 0;
 
+#ifdef CONFIG_NF_CONNTRACK_TIMEOUT
+err3:
+       nft_unregister_obj(&nft_ct_helper_obj_type);
+#endif
 err2:
        nft_unregister_expr(&nft_notrack_type);
 err1:
@@ -961,6 +1173,9 @@ static int __init nft_ct_module_init(void)
 
 static void __exit nft_ct_module_exit(void)
 {
+#ifdef CONFIG_NF_CONNTRACK_TIMEOUT
+       nft_unregister_obj(&nft_ct_timeout_obj_type);
+#endif
        nft_unregister_obj(&nft_ct_helper_obj_type);
        nft_unregister_expr(&nft_notrack_type);
        nft_unregister_expr(&nft_ct_type);
@@ -974,3 +1189,4 @@ MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
 MODULE_ALIAS_NFT_EXPR("ct");
 MODULE_ALIAS_NFT_EXPR("notrack");
 MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_CT_HELPER);
+MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_CT_TIMEOUT);
index 9b2f3de7be4fd0b921627cb31b7ed2e8fab0d6cb..5af74b37f4236ec0402ac41360f04a5000ae3126 100644 (file)
@@ -4,8 +4,6 @@
 #include <net/netfilter/nf_tables.h>
 #include <linux/netfilter/nfnetlink_osf.h>
 
-#define OSF_GENRE_SIZE         32
-
 struct nft_osf {
        enum nft_registers      dreg:8;
 };
@@ -37,9 +35,9 @@ static void nft_osf_eval(const struct nft_expr *expr, struct nft_regs *regs,
 
        os_name = nf_osf_find(skb, nf_osf_fingers);
        if (!os_name)
-               strncpy((char *)dest, "unknown", IFNAMSIZ);
+               strncpy((char *)dest, "unknown", NFT_OSF_MAXGENRELEN);
        else
-               strncpy((char *)dest, os_name, IFNAMSIZ);
+               strncpy((char *)dest, os_name, NFT_OSF_MAXGENRELEN);
 }
 
 static int nft_osf_init(const struct nft_ctx *ctx,
@@ -51,7 +49,7 @@ static int nft_osf_init(const struct nft_ctx *ctx,
 
        priv->dreg = nft_parse_register(tb[NFTA_OSF_DREG]);
        err = nft_validate_register_store(ctx, priv->dreg, NULL,
-                                         NFTA_DATA_VALUE, OSF_GENRE_SIZE);
+                                         NFTA_DATA_VALUE, NFT_OSF_MAXGENRELEN);
        if (err < 0)
                return err;
 
index 7ba454e9e3fa3a89492e9fad1d41bc21a95eb5c7..89457efd2e008261b549dff8fe73d0ec8abaad02 100644 (file)
@@ -104,7 +104,7 @@ xt_ct_set_helper(struct nf_conn *ct, const char *helper_name,
 }
 
 #ifdef CONFIG_NF_CONNTRACK_TIMEOUT
-static void __xt_ct_tg_timeout_put(struct ctnl_timeout *timeout)
+static void __xt_ct_tg_timeout_put(struct nf_ct_timeout *timeout)
 {
        typeof(nf_ct_timeout_put_hook) timeout_put;
 
@@ -121,7 +121,7 @@ xt_ct_set_timeout(struct nf_conn *ct, const struct xt_tgchk_param *par,
 #ifdef CONFIG_NF_CONNTRACK_TIMEOUT
        typeof(nf_ct_timeout_find_get_hook) timeout_find_get;
        const struct nf_conntrack_l4proto *l4proto;
-       struct ctnl_timeout *timeout;
+       struct nf_ct_timeout *timeout;
        struct nf_conn_timeout *timeout_ext;
        const char *errmsg = NULL;
        int ret = 0;
index 75c92a87e7b2481141161c8945f5e7eef8e0abf8..5610061e7f2e00b935ce44dd9cf82d10eb77a7bf 100644 (file)
@@ -4137,52 +4137,36 @@ static const struct vm_operations_struct packet_mmap_ops = {
        .close  =       packet_mm_close,
 };
 
-static void free_pg_vec(struct pgv *pg_vec, unsigned int order,
-                       unsigned int len)
+static void free_pg_vec(struct pgv *pg_vec, unsigned int len)
 {
        int i;
 
        for (i = 0; i < len; i++) {
                if (likely(pg_vec[i].buffer)) {
-                       if (is_vmalloc_addr(pg_vec[i].buffer))
-                               vfree(pg_vec[i].buffer);
-                       else
-                               free_pages((unsigned long)pg_vec[i].buffer,
-                                          order);
+                       kvfree(pg_vec[i].buffer);
                        pg_vec[i].buffer = NULL;
                }
        }
        kfree(pg_vec);
 }
 
-static char *alloc_one_pg_vec_page(unsigned long order)
+static char *alloc_one_pg_vec_page(unsigned long size)
 {
        char *buffer;
-       gfp_t gfp_flags = GFP_KERNEL | __GFP_COMP |
-                         __GFP_ZERO | __GFP_NOWARN | __GFP_NORETRY;
 
-       buffer = (char *) __get_free_pages(gfp_flags, order);
+       buffer = kvzalloc(size, GFP_KERNEL);
        if (buffer)
                return buffer;
 
-       /* __get_free_pages failed, fall back to vmalloc */
-       buffer = vzalloc(array_size((1 << order), PAGE_SIZE));
-       if (buffer)
-               return buffer;
+       buffer = kvzalloc(size, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
 
-       /* vmalloc failed, lets dig into swap here */
-       gfp_flags &= ~__GFP_NORETRY;
-       buffer = (char *) __get_free_pages(gfp_flags, order);
-       if (buffer)
-               return buffer;
-
-       /* complete and utter failure */
-       return NULL;
+       return buffer;
 }
 
-static struct pgv *alloc_pg_vec(struct tpacket_req *req, int order)
+static struct pgv *alloc_pg_vec(struct tpacket_req *req)
 {
        unsigned int block_nr = req->tp_block_nr;
+       unsigned long size = req->tp_block_size;
        struct pgv *pg_vec;
        int i;
 
@@ -4191,7 +4175,7 @@ static struct pgv *alloc_pg_vec(struct tpacket_req *req, int order)
                goto out;
 
        for (i = 0; i < block_nr; i++) {
-               pg_vec[i].buffer = alloc_one_pg_vec_page(order);
+               pg_vec[i].buffer = alloc_one_pg_vec_page(size);
                if (unlikely(!pg_vec[i].buffer))
                        goto out_free_pgvec;
        }
@@ -4200,7 +4184,7 @@ static struct pgv *alloc_pg_vec(struct tpacket_req *req, int order)
        return pg_vec;
 
 out_free_pgvec:
-       free_pg_vec(pg_vec, order, block_nr);
+       free_pg_vec(pg_vec, block_nr);
        pg_vec = NULL;
        goto out;
 }
@@ -4210,9 +4194,9 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
 {
        struct pgv *pg_vec = NULL;
        struct packet_sock *po = pkt_sk(sk);
-       int was_running, order = 0;
        struct packet_ring_buffer *rb;
        struct sk_buff_head *rb_queue;
+       int was_running;
        __be16 num;
        int err = -EINVAL;
        /* Added to avoid minimal code churn */
@@ -4274,8 +4258,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
                        goto out;
 
                err = -ENOMEM;
-               order = get_order(req->tp_block_size);
-               pg_vec = alloc_pg_vec(req, order);
+               pg_vec = alloc_pg_vec(req);
                if (unlikely(!pg_vec))
                        goto out;
                switch (po->tp_version) {
@@ -4329,7 +4312,6 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
                rb->frame_size = req->tp_frame_size;
                spin_unlock_bh(&rb_queue->lock);
 
-               swap(rb->pg_vec_order, order);
                swap(rb->pg_vec_len, req->tp_block_nr);
 
                rb->pg_vec_pages = req->tp_block_size/PAGE_SIZE;
@@ -4355,7 +4337,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
        }
 
        if (pg_vec)
-               free_pg_vec(pg_vec, order, req->tp_block_nr);
+               free_pg_vec(pg_vec, req->tp_block_nr);
 out:
        return err;
 }
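The af_packet hunks above drop the page-order bookkeeping: each ring block becomes a single kvzalloc() allocation (kmalloc with a transparent vmalloc fallback), and kvfree() tears down either kind. A minimal kernel-style sketch of the same two-step allocation, with illustrative names that are not part of this commit:

    /* First try with default flags, then once more allowing the allocator
     * to try harder (__GFP_RETRY_MAYFAIL) before failing outright --
     * mirrors alloc_one_pg_vec_page() above.
     */
    #include <linux/mm.h>

    static void *ring_block_alloc(unsigned long size)
    {
            void *buf;

            buf = kvzalloc(size, GFP_KERNEL);
            if (buf)
                    return buf;

            return kvzalloc(size, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
    }

    static void ring_block_free(void *buf)
    {
            kvfree(buf);    /* handles kmalloc'ed and vmalloc'ed memory alike */
    }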
index 3bb7c5fb3bff2fd5d91c3d973d006d0cdde29a0b..8f50036f62f05c76632dea82491d6a60dba39f0f 100644 (file)
@@ -64,7 +64,6 @@ struct packet_ring_buffer {
        unsigned int            frame_size;
        unsigned int            frame_max;
 
-       unsigned int            pg_vec_order;
        unsigned int            pg_vec_pages;
        unsigned int            pg_vec_len;
 
index 4a7af7aff37d247f84bfdd250bbc592ae1bb67b2..d75bd15151e6e30429cf842dfe3a74a17192d33f 100644 (file)
@@ -15,7 +15,6 @@
 #include "ar-internal.h"
 
 static struct ctl_table_header *rxrpc_sysctl_reg_table;
-static const unsigned int zero = 0;
 static const unsigned int one = 1;
 static const unsigned int four = 4;
 static const unsigned int thirtytwo = 32;
index 6203eb075c9aab06fc9ff30d762b774d2d9d2e71..9b30e62805c7436b030d15958eaec607fdedb8b6 100644 (file)
@@ -34,8 +34,8 @@ struct tcf_bpf_cfg {
 static unsigned int bpf_net_id;
 static struct tc_action_ops act_bpf_ops;
 
-static int tcf_bpf(struct sk_buff *skb, const struct tc_action *act,
-                  struct tcf_result *res)
+static int tcf_bpf_act(struct sk_buff *skb, const struct tc_action *act,
+                      struct tcf_result *res)
 {
        bool at_ingress = skb_at_tc_ingress(skb);
        struct tcf_bpf *prog = to_bpf(act);
@@ -143,11 +143,12 @@ static int tcf_bpf_dump(struct sk_buff *skb, struct tc_action *act,
                .index   = prog->tcf_index,
                .refcnt  = refcount_read(&prog->tcf_refcnt) - ref,
                .bindcnt = atomic_read(&prog->tcf_bindcnt) - bind,
-               .action  = prog->tcf_action,
        };
        struct tcf_t tm;
        int ret;
 
+       spin_lock(&prog->tcf_lock);
+       opt.action = prog->tcf_action;
        if (nla_put(skb, TCA_ACT_BPF_PARMS, sizeof(opt), &opt))
                goto nla_put_failure;
 
@@ -163,9 +164,11 @@ static int tcf_bpf_dump(struct sk_buff *skb, struct tc_action *act,
                          TCA_ACT_BPF_PAD))
                goto nla_put_failure;
 
+       spin_unlock(&prog->tcf_lock);
        return skb->len;
 
 nla_put_failure:
+       spin_unlock(&prog->tcf_lock);
        nlmsg_trim(skb, tp);
        return -1;
 }
@@ -264,7 +267,7 @@ static void tcf_bpf_prog_fill_cfg(const struct tcf_bpf *prog,
 {
        cfg->is_ebpf = tcf_bpf_is_ebpf(prog);
        /* updates to prog->filter are prevented, since it's called either
-        * with rtnl lock or during final cleanup in rcu callback
+        * with tcf lock or during final cleanup in rcu callback
         */
        cfg->filter = rcu_dereference_protected(prog->filter, 1);
 
@@ -336,8 +339,8 @@ static int tcf_bpf_init(struct net *net, struct nlattr *nla,
                goto out;
 
        prog = to_bpf(*act);
-       ASSERT_RTNL();
 
+       spin_lock(&prog->tcf_lock);
        if (res != ACT_P_CREATED)
                tcf_bpf_prog_fill_cfg(prog, &old);
 
@@ -349,6 +352,7 @@ static int tcf_bpf_init(struct net *net, struct nlattr *nla,
 
        prog->tcf_action = parm->action;
        rcu_assign_pointer(prog->filter, cfg.filter);
+       spin_unlock(&prog->tcf_lock);
 
        if (res == ACT_P_CREATED) {
                tcf_idr_insert(tn, *act);
@@ -402,7 +406,7 @@ static struct tc_action_ops act_bpf_ops __read_mostly = {
        .kind           =       "bpf",
        .type           =       TCA_ACT_BPF,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_bpf,
+       .act            =       tcf_bpf_act,
        .dump           =       tcf_bpf_dump,
        .cleanup        =       tcf_bpf_cleanup,
        .init           =       tcf_bpf_init,
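Most of the act_* conversions in this pull follow the dump pattern seen in tcf_bpf_dump() above: mutable fields are copied out and put to the netlink skb while the per-action tcf_lock is held, and the lock is released on both the success and the nla_put_failure paths. A hedged, generic sketch -- struct tcf_foo, struct tc_foo, to_foo() and TCA_FOO_PARMS are placeholders, not real identifiers:

    static int tcf_foo_dump(struct sk_buff *skb, struct tc_action *a,
                            int bind, int ref)
    {
            unsigned char *b = skb_tail_pointer(skb);
            struct tcf_foo *f = to_foo(a);
            struct tc_foo opt = {
                    .index   = f->tcf_index,
                    .refcnt  = refcount_read(&f->tcf_refcnt) - ref,
                    .bindcnt = atomic_read(&f->tcf_bindcnt) - bind,
            };

            spin_lock(&f->tcf_lock);
            opt.action = f->tcf_action;     /* read under the action lock */
            if (nla_put(skb, TCA_FOO_PARMS, sizeof(opt), &opt))
                    goto nla_put_failure;
            spin_unlock(&f->tcf_lock);

            return skb->len;

    nla_put_failure:
            spin_unlock(&f->tcf_lock);      /* unlock on the error path too */
            nlmsg_trim(skb, b);
            return -1;
    }

Some actions take the plain spinlock (act_bpf, act_gact), others the _bh variant (act_ife, act_ipt, act_pedit); the shape is the same.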
index 2f9bc833d0467029f8e854e6a036583836d24567..54c0bf54f2acf71cd63ff17ef0cb74898bc0d478 100644 (file)
@@ -31,8 +31,8 @@
 static unsigned int connmark_net_id;
 static struct tc_action_ops act_connmark_ops;
 
-static int tcf_connmark(struct sk_buff *skb, const struct tc_action *a,
-                       struct tcf_result *res)
+static int tcf_connmark_act(struct sk_buff *skb, const struct tc_action *a,
+                           struct tcf_result *res)
 {
        const struct nf_conntrack_tuple_hash *thash;
        struct nf_conntrack_tuple tuple;
@@ -209,7 +209,7 @@ static struct tc_action_ops act_connmark_ops = {
        .kind           =       "connmark",
        .type           =       TCA_ACT_CONNMARK,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_connmark,
+       .act            =       tcf_connmark_act,
        .dump           =       tcf_connmark_dump,
        .init           =       tcf_connmark_init,
        .walk           =       tcf_connmark_walker,
index 648a3a35b720112eec3b46bf9aada9a8e772e796..5596fae4e4785f2e2c564cdcead58fa139b1482d 100644 (file)
@@ -50,7 +50,7 @@ static int tcf_csum_init(struct net *net, struct nlattr *nla,
                         struct netlink_ext_ack *extack)
 {
        struct tc_action_net *tn = net_generic(net, csum_net_id);
-       struct tcf_csum_params *params_old, *params_new;
+       struct tcf_csum_params *params_new;
        struct nlattr *tb[TCA_CSUM_MAX + 1];
        struct tc_csum *parm;
        struct tcf_csum *p;
@@ -88,20 +88,22 @@ static int tcf_csum_init(struct net *net, struct nlattr *nla,
        }
 
        p = to_tcf_csum(*a);
-       ASSERT_RTNL();
 
        params_new = kzalloc(sizeof(*params_new), GFP_KERNEL);
        if (unlikely(!params_new)) {
                tcf_idr_release(*a, bind);
                return -ENOMEM;
        }
-       params_old = rtnl_dereference(p->params);
+       params_new->update_flags = parm->update_flags;
 
+       spin_lock(&p->tcf_lock);
        p->tcf_action = parm->action;
-       params_new->update_flags = parm->update_flags;
-       rcu_assign_pointer(p->params, params_new);
-       if (params_old)
-               kfree_rcu(params_old, rcu);
+       rcu_swap_protected(p->params, params_new,
+                          lockdep_is_held(&p->tcf_lock));
+       spin_unlock(&p->tcf_lock);
+
+       if (params_new)
+               kfree_rcu(params_new, rcu);
 
        if (ret == ACT_P_CREATED)
                tcf_idr_insert(tn, *a);
@@ -553,8 +555,8 @@ static int tcf_csum_ipv6(struct sk_buff *skb, u32 update_flags)
        return 0;
 }
 
-static int tcf_csum(struct sk_buff *skb, const struct tc_action *a,
-                   struct tcf_result *res)
+static int tcf_csum_act(struct sk_buff *skb, const struct tc_action *a,
+                       struct tcf_result *res)
 {
        struct tcf_csum *p = to_tcf_csum(a);
        struct tcf_csum_params *params;
@@ -599,11 +601,13 @@ static int tcf_csum_dump(struct sk_buff *skb, struct tc_action *a, int bind,
                .index   = p->tcf_index,
                .refcnt  = refcount_read(&p->tcf_refcnt) - ref,
                .bindcnt = atomic_read(&p->tcf_bindcnt) - bind,
-               .action  = p->tcf_action,
        };
        struct tcf_t t;
 
-       params = rtnl_dereference(p->params);
+       spin_lock(&p->tcf_lock);
+       params = rcu_dereference_protected(p->params,
+                                          lockdep_is_held(&p->tcf_lock));
+       opt.action = p->tcf_action;
        opt.update_flags = params->update_flags;
 
        if (nla_put(skb, TCA_CSUM_PARMS, sizeof(opt), &opt))
@@ -612,10 +616,12 @@ static int tcf_csum_dump(struct sk_buff *skb, struct tc_action *a, int bind,
        tcf_tm_dump(&t, &p->tcf_tm);
        if (nla_put_64bit(skb, TCA_CSUM_TM, sizeof(t), &t, TCA_CSUM_PAD))
                goto nla_put_failure;
+       spin_unlock(&p->tcf_lock);
 
        return skb->len;
 
 nla_put_failure:
+       spin_unlock(&p->tcf_lock);
        nlmsg_trim(skb, b);
        return -1;
 }
@@ -664,7 +670,7 @@ static struct tc_action_ops act_csum_ops = {
        .kind           = "csum",
        .type           = TCA_ACT_CSUM,
        .owner          = THIS_MODULE,
-       .act            = tcf_csum,
+       .act            = tcf_csum_act,
        .dump           = tcf_csum_dump,
        .init           = tcf_csum_init,
        .cleanup        = tcf_csum_cleanup,
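The csum init path above shows the update idiom used by several actions in this series: build the new parameter block with GFP_KERNEL outside the lock, swap the __rcu pointer under tcf_lock with rcu_swap_protected(), then kfree_rcu() whatever came back out (NULL on first create). A condensed sketch, under the assumption that 'params' is an __rcu pointer to a struct embedding a struct rcu_head named 'rcu':

    params_new = kzalloc(sizeof(*params_new), GFP_KERNEL); /* outside the lock */
    if (!params_new)
            return -ENOMEM;
    params_new->update_flags = parm->update_flags;

    spin_lock(&p->tcf_lock);
    p->tcf_action = parm->action;
    /* After the swap, params_new holds the previous block (or NULL). */
    rcu_swap_protected(p->params, params_new,
                       lockdep_is_held(&p->tcf_lock));
    spin_unlock(&p->tcf_lock);

    if (params_new)
            kfree_rcu(params_new, rcu);     /* readers may still use the old block */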
index 661b72b9147d52d320f094b91b7392488c1f25c2..52a3e474d82232db0d4b05f327aeb8c1e084b03d 100644 (file)
@@ -113,7 +113,7 @@ static int tcf_gact_init(struct net *net, struct nlattr *nla,
 
        gact = to_gact(*a);
 
-       ASSERT_RTNL();
+       spin_lock(&gact->tcf_lock);
        gact->tcf_action = parm->action;
 #ifdef CONFIG_GACT_PROB
        if (p_parm) {
@@ -126,13 +126,15 @@ static int tcf_gact_init(struct net *net, struct nlattr *nla,
                gact->tcfg_ptype   = p_parm->ptype;
        }
 #endif
+       spin_unlock(&gact->tcf_lock);
+
        if (ret == ACT_P_CREATED)
                tcf_idr_insert(tn, *a);
        return ret;
 }
 
-static int tcf_gact(struct sk_buff *skb, const struct tc_action *a,
-                   struct tcf_result *res)
+static int tcf_gact_act(struct sk_buff *skb, const struct tc_action *a,
+                       struct tcf_result *res)
 {
        struct tcf_gact *gact = to_gact(a);
        int action = READ_ONCE(gact->tcf_action);
@@ -178,10 +180,11 @@ static int tcf_gact_dump(struct sk_buff *skb, struct tc_action *a,
                .index   = gact->tcf_index,
                .refcnt  = refcount_read(&gact->tcf_refcnt) - ref,
                .bindcnt = atomic_read(&gact->tcf_bindcnt) - bind,
-               .action  = gact->tcf_action,
        };
        struct tcf_t t;
 
+       spin_lock(&gact->tcf_lock);
+       opt.action = gact->tcf_action;
        if (nla_put(skb, TCA_GACT_PARMS, sizeof(opt), &opt))
                goto nla_put_failure;
 #ifdef CONFIG_GACT_PROB
@@ -199,9 +202,12 @@ static int tcf_gact_dump(struct sk_buff *skb, struct tc_action *a,
        tcf_tm_dump(&t, &gact->tcf_tm);
        if (nla_put_64bit(skb, TCA_GACT_TM, sizeof(t), &t, TCA_GACT_PAD))
                goto nla_put_failure;
+       spin_unlock(&gact->tcf_lock);
+
        return skb->len;
 
 nla_put_failure:
+       spin_unlock(&gact->tcf_lock);
        nlmsg_trim(skb, b);
        return -1;
 }
@@ -248,7 +254,7 @@ static struct tc_action_ops act_gact_ops = {
        .kind           =       "gact",
        .type           =       TCA_ACT_GACT,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_gact,
+       .act            =       tcf_gact_act,
        .stats_update   =       tcf_gact_stats_update,
        .dump           =       tcf_gact_dump,
        .init           =       tcf_gact_init,
index df4060e32d43e85708678ebb283319a5344db481..5d200495e4671ffd25fae992bccf821e88d2fc55 100644 (file)
@@ -268,7 +268,8 @@ static const char *ife_meta_id2name(u32 metaid)
  * under ife->tcf_lock for existing action
 */
 static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
-                               void *val, int len, bool exists)
+                               void *val, int len, bool exists,
+                               bool rtnl_held)
 {
        struct tcf_meta_ops *ops = find_ife_oplist(metaid);
        int ret = 0;
@@ -278,9 +279,11 @@ static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
 #ifdef CONFIG_MODULES
                if (exists)
                        spin_unlock_bh(&ife->tcf_lock);
-               rtnl_unlock();
+               if (rtnl_held)
+                       rtnl_unlock();
                request_module("ife-meta-%s", ife_meta_id2name(metaid));
-               rtnl_lock();
+               if (rtnl_held)
+                       rtnl_lock();
                if (exists)
                        spin_lock_bh(&ife->tcf_lock);
                ops = find_ife_oplist(metaid);
@@ -421,7 +424,7 @@ static void tcf_ife_cleanup(struct tc_action *a)
 
 /* under ife->tcf_lock for existing action */
 static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
-                            bool exists)
+                            bool exists, bool rtnl_held)
 {
        int len = 0;
        int rc = 0;
@@ -433,7 +436,8 @@ static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
                        val = nla_data(tb[i]);
                        len = nla_len(tb[i]);
 
-                       rc = load_metaops_and_vet(ife, i, val, len, exists);
+                       rc = load_metaops_and_vet(ife, i, val, len, exists,
+                                                 rtnl_held);
                        if (rc != 0)
                                return rc;
 
@@ -454,7 +458,7 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
        struct tc_action_net *tn = net_generic(net, ife_net_id);
        struct nlattr *tb[TCA_IFE_MAX + 1];
        struct nlattr *tb2[IFE_META_MAX + 1];
-       struct tcf_ife_params *p, *p_old;
+       struct tcf_ife_params *p;
        struct tcf_ife_info *ife;
        u16 ife_type = ETH_P_IFE;
        struct tc_ife *parm;
@@ -558,7 +562,7 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
                        return err;
                }
 
-               err = populate_metalist(ife, tb2, exists);
+               err = populate_metalist(ife, tb2, exists, rtnl_held);
                if (err)
                        goto metadata_parse_err;
 
@@ -581,13 +585,13 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
        }
 
        ife->tcf_action = parm->action;
+       /* protected by tcf_lock when modifying existing action */
+       rcu_swap_protected(ife->params, p, 1);
+
        if (exists)
                spin_unlock_bh(&ife->tcf_lock);
-
-       p_old = rtnl_dereference(ife->params);
-       rcu_assign_pointer(ife->params, p);
-       if (p_old)
-               kfree_rcu(p_old, rcu);
+       if (p)
+               kfree_rcu(p, rcu);
 
        if (ret == ACT_P_CREATED)
                tcf_idr_insert(tn, *a);
@@ -600,16 +604,20 @@ static int tcf_ife_dump(struct sk_buff *skb, struct tc_action *a, int bind,
 {
        unsigned char *b = skb_tail_pointer(skb);
        struct tcf_ife_info *ife = to_ife(a);
-       struct tcf_ife_params *p = rtnl_dereference(ife->params);
+       struct tcf_ife_params *p;
        struct tc_ife opt = {
                .index = ife->tcf_index,
                .refcnt = refcount_read(&ife->tcf_refcnt) - ref,
                .bindcnt = atomic_read(&ife->tcf_bindcnt) - bind,
-               .action = ife->tcf_action,
-               .flags = p->flags,
        };
        struct tcf_t t;
 
+       spin_lock_bh(&ife->tcf_lock);
+       opt.action = ife->tcf_action;
+       p = rcu_dereference_protected(ife->params,
+                                     lockdep_is_held(&ife->tcf_lock));
+       opt.flags = p->flags;
+
        if (nla_put(skb, TCA_IFE_PARMS, sizeof(opt), &opt))
                goto nla_put_failure;
 
@@ -635,9 +643,11 @@ static int tcf_ife_dump(struct sk_buff *skb, struct tc_action *a, int bind,
                pr_info("Failed to dump metalist\n");
        }
 
+       spin_unlock_bh(&ife->tcf_lock);
        return skb->len;
 
 nla_put_failure:
+       spin_unlock_bh(&ife->tcf_lock);
        nlmsg_trim(skb, b);
        return -1;
 }
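load_metaops_and_vet() gains an rtnl_held flag because the action init path can now run without rtnl held; the rtnl mutex is only dropped and re-taken around the sleeping request_module() call when the caller actually held it. The resulting flow, condensed with comments (same identifiers as the hunk above):

    if (exists)
            spin_unlock_bh(&ife->tcf_lock); /* can't sleep under the spinlock */
    if (rtnl_held)
            rtnl_unlock();                  /* only if the caller took rtnl */

    request_module("ife-meta-%s", ife_meta_id2name(metaid));   /* may sleep */

    if (rtnl_held)
            rtnl_lock();
    if (exists)
            spin_lock_bh(&ife->tcf_lock);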
index 0dc787a57798292be40ba1f66c16d2affd31a046..51f235bbeb5bccb4267fd70eeec8399494c9f246 100644 (file)
@@ -222,8 +222,8 @@ static int tcf_xt_init(struct net *net, struct nlattr *nla,
                              bind);
 }
 
-static int tcf_ipt(struct sk_buff *skb, const struct tc_action *a,
-                  struct tcf_result *res)
+static int tcf_ipt_act(struct sk_buff *skb, const struct tc_action *a,
+                      struct tcf_result *res)
 {
        int ret = 0, result = 0;
        struct tcf_ipt *ipt = to_ipt(a);
@@ -288,6 +288,7 @@ static int tcf_ipt_dump(struct sk_buff *skb, struct tc_action *a, int bind,
         * for foolproof you need to not assume this
         */
 
+       spin_lock_bh(&ipt->tcf_lock);
        t = kmemdup(ipt->tcfi_t, ipt->tcfi_t->u.user.target_size, GFP_ATOMIC);
        if (unlikely(!t))
                goto nla_put_failure;
@@ -307,10 +308,12 @@ static int tcf_ipt_dump(struct sk_buff *skb, struct tc_action *a, int bind,
        if (nla_put_64bit(skb, TCA_IPT_TM, sizeof(tm), &tm, TCA_IPT_PAD))
                goto nla_put_failure;
 
+       spin_unlock_bh(&ipt->tcf_lock);
        kfree(t);
        return skb->len;
 
 nla_put_failure:
+       spin_unlock_bh(&ipt->tcf_lock);
        nlmsg_trim(skb, b);
        kfree(t);
        return -1;
@@ -345,7 +348,7 @@ static struct tc_action_ops act_ipt_ops = {
        .kind           =       "ipt",
        .type           =       TCA_ACT_IPT,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_ipt,
+       .act            =       tcf_ipt_act,
        .dump           =       tcf_ipt_dump,
        .cleanup        =       tcf_ipt_release,
        .init           =       tcf_ipt_init,
@@ -403,7 +406,7 @@ static struct tc_action_ops act_xt_ops = {
        .kind           =       "xt",
        .type           =       TCA_ACT_XT,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_ipt,
+       .act            =       tcf_ipt_act,
        .dump           =       tcf_ipt_dump,
        .cleanup        =       tcf_ipt_release,
        .init           =       tcf_xt_init,
index b26d060da08ea350019c21e72c1494744a82f01f..8ec216001077905298fb44fc4ec2ae6754af203a 100644 (file)
@@ -30,6 +30,7 @@
 #include <net/tc_act/tc_mirred.h>
 
 static LIST_HEAD(mirred_list);
+static DEFINE_SPINLOCK(mirred_list_lock);
 
 static bool tcf_mirred_is_act_redirect(int action)
 {
@@ -62,13 +63,23 @@ static bool tcf_mirred_can_reinsert(int action)
        return false;
 }
 
+static struct net_device *tcf_mirred_dev_dereference(struct tcf_mirred *m)
+{
+       return rcu_dereference_protected(m->tcfm_dev,
+                                        lockdep_is_held(&m->tcf_lock));
+}
+
 static void tcf_mirred_release(struct tc_action *a)
 {
        struct tcf_mirred *m = to_mirred(a);
        struct net_device *dev;
 
+       spin_lock(&mirred_list_lock);
        list_del(&m->tcfm_list);
-       dev = rtnl_dereference(m->tcfm_dev);
+       spin_unlock(&mirred_list_lock);
+
+       /* last reference to action, no need to lock */
+       dev = rcu_dereference_protected(m->tcfm_dev, 1);
        if (dev)
                dev_put(dev);
 }
@@ -128,22 +139,9 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
                NL_SET_ERR_MSG_MOD(extack, "Unknown mirred option");
                return -EINVAL;
        }
-       if (parm->ifindex) {
-               dev = __dev_get_by_index(net, parm->ifindex);
-               if (dev == NULL) {
-                       if (exists)
-                               tcf_idr_release(*a, bind);
-                       else
-                               tcf_idr_cleanup(tn, parm->index);
-                       return -ENODEV;
-               }
-               mac_header_xmit = dev_is_mac_header_xmit(dev);
-       } else {
-               dev = NULL;
-       }
 
        if (!exists) {
-               if (!dev) {
+               if (!parm->ifindex) {
                        tcf_idr_cleanup(tn, parm->index);
                        NL_SET_ERR_MSG_MOD(extack, "Specified device does not exist");
                        return -EINVAL;
@@ -161,27 +159,39 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
        }
        m = to_mirred(*a);
 
-       ASSERT_RTNL();
+       spin_lock(&m->tcf_lock);
        m->tcf_action = parm->action;
        m->tcfm_eaction = parm->eaction;
-       if (dev != NULL) {
-               if (ret != ACT_P_CREATED)
-                       dev_put(rcu_dereference_protected(m->tcfm_dev, 1));
-               dev_hold(dev);
-               rcu_assign_pointer(m->tcfm_dev, dev);
+
+       if (parm->ifindex) {
+               dev = dev_get_by_index(net, parm->ifindex);
+               if (!dev) {
+                       spin_unlock(&m->tcf_lock);
+                       tcf_idr_release(*a, bind);
+                       return -ENODEV;
+               }
+               mac_header_xmit = dev_is_mac_header_xmit(dev);
+               rcu_swap_protected(m->tcfm_dev, dev,
+                                  lockdep_is_held(&m->tcf_lock));
+               if (dev)
+                       dev_put(dev);
                m->tcfm_mac_header_xmit = mac_header_xmit;
        }
+       spin_unlock(&m->tcf_lock);
 
        if (ret == ACT_P_CREATED) {
+               spin_lock(&mirred_list_lock);
                list_add(&m->tcfm_list, &mirred_list);
+               spin_unlock(&mirred_list_lock);
+
                tcf_idr_insert(tn, *a);
        }
 
        return ret;
 }
 
-static int tcf_mirred(struct sk_buff *skb, const struct tc_action *a,
-                     struct tcf_result *res)
+static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+                         struct tcf_result *res)
 {
        struct tcf_mirred *m = to_mirred(a);
        struct sk_buff *skb2 = skb;
@@ -287,26 +297,33 @@ static int tcf_mirred_dump(struct sk_buff *skb, struct tc_action *a, int bind,
 {
        unsigned char *b = skb_tail_pointer(skb);
        struct tcf_mirred *m = to_mirred(a);
-       struct net_device *dev = rtnl_dereference(m->tcfm_dev);
        struct tc_mirred opt = {
                .index   = m->tcf_index,
-               .action  = m->tcf_action,
                .refcnt  = refcount_read(&m->tcf_refcnt) - ref,
                .bindcnt = atomic_read(&m->tcf_bindcnt) - bind,
-               .eaction = m->tcfm_eaction,
-               .ifindex = dev ? dev->ifindex : 0,
        };
+       struct net_device *dev;
        struct tcf_t t;
 
+       spin_lock(&m->tcf_lock);
+       opt.action = m->tcf_action;
+       opt.eaction = m->tcfm_eaction;
+       dev = tcf_mirred_dev_dereference(m);
+       if (dev)
+               opt.ifindex = dev->ifindex;
+
        if (nla_put(skb, TCA_MIRRED_PARMS, sizeof(opt), &opt))
                goto nla_put_failure;
 
        tcf_tm_dump(&t, &m->tcf_tm);
        if (nla_put_64bit(skb, TCA_MIRRED_TM, sizeof(t), &t, TCA_MIRRED_PAD))
                goto nla_put_failure;
+       spin_unlock(&m->tcf_lock);
+
        return skb->len;
 
 nla_put_failure:
+       spin_unlock(&m->tcf_lock);
        nlmsg_trim(skb, b);
        return -1;
 }
@@ -337,15 +354,19 @@ static int mirred_device_event(struct notifier_block *unused,
 
        ASSERT_RTNL();
        if (event == NETDEV_UNREGISTER) {
+               spin_lock(&mirred_list_lock);
                list_for_each_entry(m, &mirred_list, tcfm_list) {
-                       if (rcu_access_pointer(m->tcfm_dev) == dev) {
+                       spin_lock(&m->tcf_lock);
+                       if (tcf_mirred_dev_dereference(m) == dev) {
                                dev_put(dev);
                                /* Note : no rcu grace period necessary, as
                                 * net_device are already rcu protected.
                                 */
                                RCU_INIT_POINTER(m->tcfm_dev, NULL);
                        }
+                       spin_unlock(&m->tcf_lock);
                }
+               spin_unlock(&mirred_list_lock);
        }
 
        return NOTIFY_DONE;
@@ -358,8 +379,20 @@ static struct notifier_block mirred_device_notifier = {
 static struct net_device *tcf_mirred_get_dev(const struct tc_action *a)
 {
        struct tcf_mirred *m = to_mirred(a);
+       struct net_device *dev;
+
+       rcu_read_lock();
+       dev = rcu_dereference(m->tcfm_dev);
+       if (dev)
+               dev_hold(dev);
+       rcu_read_unlock();
 
-       return rtnl_dereference(m->tcfm_dev);
+       return dev;
+}
+
+static void tcf_mirred_put_dev(struct net_device *dev)
+{
+       dev_put(dev);
 }
 
 static int tcf_mirred_delete(struct net *net, u32 index)
@@ -373,7 +406,7 @@ static struct tc_action_ops act_mirred_ops = {
        .kind           =       "mirred",
        .type           =       TCA_ACT_MIRRED,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_mirred,
+       .act            =       tcf_mirred_act,
        .stats_update   =       tcf_stats_update,
        .dump           =       tcf_mirred_dump,
        .cleanup        =       tcf_mirred_release,
@@ -382,6 +415,7 @@ static struct tc_action_ops act_mirred_ops = {
        .lookup         =       tcf_mirred_search,
        .size           =       sizeof(struct tcf_mirred),
        .get_dev        =       tcf_mirred_get_dev,
+       .put_dev        =       tcf_mirred_put_dev,
        .delete         =       tcf_mirred_delete,
 };
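tcf_mirred_get_dev() now hands out its own reference: the target device is looked up under rcu_read_lock(), pinned with dev_hold() before the read section ends, and the new ->put_dev() op (plus the cls_api hunk further down) drops that reference when the caller is done. A generic sketch of the pattern; the foo_* names are placeholders:

    static struct net_device *foo_get_dev(const struct tc_action *a)
    {
            struct tcf_foo *f = to_foo(a);
            struct net_device *dev;

            rcu_read_lock();
            dev = rcu_dereference(f->dev);  /* may be cleared concurrently */
            if (dev)
                    dev_hold(dev);          /* keep it valid past rcu_read_unlock() */
            rcu_read_unlock();

            return dev;                     /* caller balances with dev_put() */
    }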
 
index 4dd9188a72fddd9ebb7ccf87950d1068b61837af..822e903bfc25f008d9754d1340a5b4352f4fe59e 100644 (file)
@@ -93,8 +93,8 @@ static int tcf_nat_init(struct net *net, struct nlattr *nla, struct nlattr *est,
        return ret;
 }
 
-static int tcf_nat(struct sk_buff *skb, const struct tc_action *a,
-                  struct tcf_result *res)
+static int tcf_nat_act(struct sk_buff *skb, const struct tc_action *a,
+                      struct tcf_result *res)
 {
        struct tcf_nat *p = to_tcf_nat(a);
        struct iphdr *iph;
@@ -311,7 +311,7 @@ static struct tc_action_ops act_nat_ops = {
        .kind           =       "nat",
        .type           =       TCA_ACT_NAT,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_nat,
+       .act            =       tcf_nat_act,
        .dump           =       tcf_nat_dump,
        .init           =       tcf_nat_init,
        .walk           =       tcf_nat_walker,
index 43ba999b2d2322c7abb1063ceebe6af10d80c6b7..8a7a7cb94e8308e8b4d240f957b64080b269f485 100644 (file)
@@ -187,44 +187,38 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
                        tcf_idr_cleanup(tn, parm->index);
                        goto out_free;
                }
-               p = to_pedit(*a);
-               keys = kmalloc(ksize, GFP_KERNEL);
-               if (!keys) {
-                       tcf_idr_release(*a, bind);
-                       ret = -ENOMEM;
-                       goto out_free;
-               }
                ret = ACT_P_CREATED;
        } else if (err > 0) {
                if (bind)
                        goto out_free;
                if (!ovr) {
-                       tcf_idr_release(*a, bind);
                        ret = -EEXIST;
-                       goto out_free;
-               }
-               p = to_pedit(*a);
-               if (p->tcfp_nkeys && p->tcfp_nkeys != parm->nkeys) {
-                       keys = kmalloc(ksize, GFP_KERNEL);
-                       if (!keys) {
-                               ret = -ENOMEM;
-                               goto out_free;
-                       }
+                       goto out_release;
                }
        } else {
                return err;
        }
 
+       p = to_pedit(*a);
        spin_lock_bh(&p->tcf_lock);
-       p->tcfp_flags = parm->flags;
-       p->tcf_action = parm->action;
-       if (keys) {
+
+       if (ret == ACT_P_CREATED ||
+           (p->tcfp_nkeys && p->tcfp_nkeys != parm->nkeys)) {
+               keys = kmalloc(ksize, GFP_ATOMIC);
+               if (!keys) {
+                       spin_unlock_bh(&p->tcf_lock);
+                       ret = -ENOMEM;
+                       goto out_release;
+               }
                kfree(p->tcfp_keys);
                p->tcfp_keys = keys;
                p->tcfp_nkeys = parm->nkeys;
        }
        memcpy(p->tcfp_keys, parm->keys, ksize);
 
+       p->tcfp_flags = parm->flags;
+       p->tcf_action = parm->action;
+
        kfree(p->tcfp_keys_ex);
        p->tcfp_keys_ex = keys_ex;
 
@@ -232,6 +226,9 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
        if (ret == ACT_P_CREATED)
                tcf_idr_insert(tn, *a);
        return ret;
+
+out_release:
+       tcf_idr_release(*a, bind);
 out_free:
        kfree(keys_ex);
        return ret;
@@ -291,8 +288,8 @@ static int pedit_skb_hdr_offset(struct sk_buff *skb,
        return ret;
 }
 
-static int tcf_pedit(struct sk_buff *skb, const struct tc_action *a,
-                    struct tcf_result *res)
+static int tcf_pedit_act(struct sk_buff *skb, const struct tc_action *a,
+                        struct tcf_result *res)
 {
        struct tcf_pedit *p = to_pedit(a);
        int i;
@@ -410,6 +407,7 @@ static int tcf_pedit_dump(struct sk_buff *skb, struct tc_action *a,
        if (unlikely(!opt))
                return -ENOBUFS;
 
+       spin_lock_bh(&p->tcf_lock);
        memcpy(opt->keys, p->tcfp_keys,
               p->tcfp_nkeys * sizeof(struct tc_pedit_key));
        opt->index = p->tcf_index;
@@ -432,11 +430,13 @@ static int tcf_pedit_dump(struct sk_buff *skb, struct tc_action *a,
        tcf_tm_dump(&t, &p->tcf_tm);
        if (nla_put_64bit(skb, TCA_PEDIT_TM, sizeof(t), &t, TCA_PEDIT_PAD))
                goto nla_put_failure;
+       spin_unlock_bh(&p->tcf_lock);
 
        kfree(opt);
        return skb->len;
 
 nla_put_failure:
+       spin_unlock_bh(&p->tcf_lock);
        nlmsg_trim(skb, b);
        kfree(opt);
        return -1;
@@ -471,7 +471,7 @@ static struct tc_action_ops act_pedit_ops = {
        .kind           =       "pedit",
        .type           =       TCA_ACT_PEDIT,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_pedit,
+       .act            =       tcf_pedit_act,
        .dump           =       tcf_pedit_dump,
        .cleanup        =       tcf_pedit_cleanup,
        .init           =       tcf_pedit_init,
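Because the pedit key array is now (re)allocated while p->tcf_lock is held, the allocation switches from GFP_KERNEL to GFP_ATOMIC -- sleeping is not allowed under a spinlock. A simplified fragment of that constraint (the real hunk also keeps the existing array when the key count is unchanged):

    spin_lock_bh(&p->tcf_lock);
    keys = kmalloc(ksize, GFP_ATOMIC);      /* must not sleep under the lock */
    if (!keys) {
            spin_unlock_bh(&p->tcf_lock);
            return -ENOMEM;
    }
    kfree(p->tcfp_keys);
    p->tcfp_keys = keys;
    memcpy(p->tcfp_keys, parm->keys, ksize);
    spin_unlock_bh(&p->tcf_lock);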
index 1f3192ea8df7a5f4de28c297cac3337cfccfe601..06f0742db593166370cfb69f1ef8a687e29007be 100644 (file)
@@ -56,7 +56,7 @@ struct tc_police_compat {
 static unsigned int police_net_id;
 static struct tc_action_ops act_police_ops;
 
-static int tcf_act_police_walker(struct net *net, struct sk_buff *skb,
+static int tcf_police_walker(struct net *net, struct sk_buff *skb,
                                 struct netlink_callback *cb, int type,
                                 const struct tc_action_ops *ops,
                                 struct netlink_ext_ack *extack)
@@ -73,7 +73,7 @@ static const struct nla_policy police_policy[TCA_POLICE_MAX + 1] = {
        [TCA_POLICE_RESULT]     = { .type = NLA_U32 },
 };
 
-static int tcf_act_police_init(struct net *net, struct nlattr *nla,
+static int tcf_police_init(struct net *net, struct nlattr *nla,
                               struct nlattr *est, struct tc_action **a,
                               int ovr, int bind, bool rtnl_held,
                               struct netlink_ext_ack *extack)
@@ -203,7 +203,7 @@ static int tcf_act_police_init(struct net *net, struct nlattr *nla,
        return err;
 }
 
-static int tcf_act_police(struct sk_buff *skb, const struct tc_action *a,
+static int tcf_police_act(struct sk_buff *skb, const struct tc_action *a,
                          struct tcf_result *res)
 {
        struct tcf_police *police = to_police(a);
@@ -267,21 +267,22 @@ static int tcf_act_police(struct sk_buff *skb, const struct tc_action *a,
        return police->tcf_action;
 }
 
-static int tcf_act_police_dump(struct sk_buff *skb, struct tc_action *a,
+static int tcf_police_dump(struct sk_buff *skb, struct tc_action *a,
                               int bind, int ref)
 {
        unsigned char *b = skb_tail_pointer(skb);
        struct tcf_police *police = to_police(a);
        struct tc_police opt = {
                .index = police->tcf_index,
-               .action = police->tcf_action,
-               .mtu = police->tcfp_mtu,
-               .burst = PSCHED_NS2TICKS(police->tcfp_burst),
                .refcnt = refcount_read(&police->tcf_refcnt) - ref,
                .bindcnt = atomic_read(&police->tcf_bindcnt) - bind,
        };
        struct tcf_t t;
 
+       spin_lock_bh(&police->tcf_lock);
+       opt.action = police->tcf_action;
+       opt.mtu = police->tcfp_mtu;
+       opt.burst = PSCHED_NS2TICKS(police->tcfp_burst);
        if (police->rate_present)
                psched_ratecfg_getrate(&opt.rate, &police->rate);
        if (police->peak_present)
@@ -301,10 +302,12 @@ static int tcf_act_police_dump(struct sk_buff *skb, struct tc_action *a,
        t.expires = jiffies_to_clock_t(police->tcf_tm.expires);
        if (nla_put_64bit(skb, TCA_POLICE_TM, sizeof(t), &t, TCA_POLICE_PAD))
                goto nla_put_failure;
+       spin_unlock_bh(&police->tcf_lock);
 
        return skb->len;
 
 nla_put_failure:
+       spin_unlock_bh(&police->tcf_lock);
        nlmsg_trim(skb, b);
        return -1;
 }
@@ -332,10 +335,10 @@ static struct tc_action_ops act_police_ops = {
        .kind           =       "police",
        .type           =       TCA_ID_POLICE,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_act_police,
-       .dump           =       tcf_act_police_dump,
-       .init           =       tcf_act_police_init,
-       .walk           =       tcf_act_police_walker,
+       .act            =       tcf_police_act,
+       .dump           =       tcf_police_dump,
+       .init           =       tcf_police_init,
+       .walk           =       tcf_police_walker,
        .lookup         =       tcf_police_search,
        .delete         =       tcf_police_delete,
        .size           =       sizeof(struct tcf_police),
index 2608ccc83e5e7796d565438d6d0b739654695297..81071afe1b43e3a7f482d145849d13ef4b98ef0e 100644 (file)
@@ -80,11 +80,13 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
        }
        s = to_sample(*a);
 
+       spin_lock(&s->tcf_lock);
        s->tcf_action = parm->action;
        s->rate = nla_get_u32(tb[TCA_SAMPLE_RATE]);
        s->psample_group_num = nla_get_u32(tb[TCA_SAMPLE_PSAMPLE_GROUP]);
        psample_group = psample_group_get(net, s->psample_group_num);
        if (!psample_group) {
+               spin_unlock(&s->tcf_lock);
                tcf_idr_release(*a, bind);
                return -ENOMEM;
        }
@@ -94,6 +96,7 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
                s->truncate = true;
                s->trunc_size = nla_get_u32(tb[TCA_SAMPLE_TRUNC_SIZE]);
        }
+       spin_unlock(&s->tcf_lock);
 
        if (ret == ACT_P_CREATED)
                tcf_idr_insert(tn, *a);
@@ -105,7 +108,8 @@ static void tcf_sample_cleanup(struct tc_action *a)
        struct tcf_sample *s = to_sample(a);
        struct psample_group *psample_group;
 
-       psample_group = rtnl_dereference(s->psample_group);
+       /* last reference to action, no need to lock */
+       psample_group = rcu_dereference_protected(s->psample_group, 1);
        RCU_INIT_POINTER(s->psample_group, NULL);
        if (psample_group)
                psample_group_put(psample_group);
@@ -174,12 +178,13 @@ static int tcf_sample_dump(struct sk_buff *skb, struct tc_action *a,
        struct tcf_sample *s = to_sample(a);
        struct tc_sample opt = {
                .index      = s->tcf_index,
-               .action     = s->tcf_action,
                .refcnt     = refcount_read(&s->tcf_refcnt) - ref,
                .bindcnt    = atomic_read(&s->tcf_bindcnt) - bind,
        };
        struct tcf_t t;
 
+       spin_lock(&s->tcf_lock);
+       opt.action = s->tcf_action;
        if (nla_put(skb, TCA_SAMPLE_PARMS, sizeof(opt), &opt))
                goto nla_put_failure;
 
@@ -196,9 +201,12 @@ static int tcf_sample_dump(struct sk_buff *skb, struct tc_action *a,
 
        if (nla_put_u32(skb, TCA_SAMPLE_PSAMPLE_GROUP, s->psample_group_num))
                goto nla_put_failure;
+       spin_unlock(&s->tcf_lock);
+
        return skb->len;
 
 nla_put_failure:
+       spin_unlock(&s->tcf_lock);
        nlmsg_trim(skb, b);
        return -1;
 }
index aa51152e00668e76cccc08e3592769f1f7874915..e616523ba3c15f9126146240d440d96849ac4ded 100644 (file)
@@ -28,8 +28,8 @@ static unsigned int simp_net_id;
 static struct tc_action_ops act_simp_ops;
 
 #define SIMP_MAX_DATA  32
-static int tcf_simp(struct sk_buff *skb, const struct tc_action *a,
-                   struct tcf_result *res)
+static int tcf_simp_act(struct sk_buff *skb, const struct tc_action *a,
+                       struct tcf_result *res)
 {
        struct tcf_defact *d = to_defact(a);
 
@@ -156,10 +156,11 @@ static int tcf_simp_dump(struct sk_buff *skb, struct tc_action *a,
                .index   = d->tcf_index,
                .refcnt  = refcount_read(&d->tcf_refcnt) - ref,
                .bindcnt = atomic_read(&d->tcf_bindcnt) - bind,
-               .action  = d->tcf_action,
        };
        struct tcf_t t;
 
+       spin_lock_bh(&d->tcf_lock);
+       opt.action = d->tcf_action;
        if (nla_put(skb, TCA_DEF_PARMS, sizeof(opt), &opt) ||
            nla_put_string(skb, TCA_DEF_DATA, d->tcfd_defdata))
                goto nla_put_failure;
@@ -167,9 +168,12 @@ static int tcf_simp_dump(struct sk_buff *skb, struct tc_action *a,
        tcf_tm_dump(&t, &d->tcf_tm);
        if (nla_put_64bit(skb, TCA_DEF_TM, sizeof(t), &t, TCA_DEF_PAD))
                goto nla_put_failure;
+       spin_unlock_bh(&d->tcf_lock);
+
        return skb->len;
 
 nla_put_failure:
+       spin_unlock_bh(&d->tcf_lock);
        nlmsg_trim(skb, b);
        return -1;
 }
@@ -203,7 +207,7 @@ static struct tc_action_ops act_simp_ops = {
        .kind           =       "simple",
        .type           =       TCA_ACT_SIMP,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_simp,
+       .act            =       tcf_simp_act,
        .dump           =       tcf_simp_dump,
        .cleanup        =       tcf_simp_release,
        .init           =       tcf_simp_init,
index a6db47ebec112ad79413acf102f6593b5863f50c..926d7bc4a89d9db85078677f1bdb9d6c57a767fc 100644 (file)
@@ -33,8 +33,8 @@
 static unsigned int skbedit_net_id;
 static struct tc_action_ops act_skbedit_ops;
 
-static int tcf_skbedit(struct sk_buff *skb, const struct tc_action *a,
-                      struct tcf_result *res)
+static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
+                          struct tcf_result *res)
 {
        struct tcf_skbedit *d = to_skbedit(a);
        struct tcf_skbedit_params *params;
@@ -310,7 +310,7 @@ static struct tc_action_ops act_skbedit_ops = {
        .kind           =       "skbedit",
        .type           =       TCA_ACT_SKBEDIT,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_skbedit,
+       .act            =       tcf_skbedit_act,
        .dump           =       tcf_skbedit_dump,
        .init           =       tcf_skbedit_init,
        .cleanup        =       tcf_skbedit_cleanup,
index c437c6d51a71763bf77e8350e41ccf6f5be000e3..d6a1af0c41712cf6e61bcdaefc06959e616efa21 100644 (file)
@@ -24,7 +24,7 @@ static unsigned int skbmod_net_id;
 static struct tc_action_ops act_skbmod_ops;
 
 #define MAX_EDIT_LEN ETH_HLEN
-static int tcf_skbmod_run(struct sk_buff *skb, const struct tc_action *a,
+static int tcf_skbmod_act(struct sk_buff *skb, const struct tc_action *a,
                          struct tcf_result *res)
 {
        struct tcf_skbmod *d = to_skbmod(a);
@@ -156,7 +156,6 @@ static int tcf_skbmod_init(struct net *net, struct nlattr *nla,
 
        d = to_skbmod(*a);
 
-       ASSERT_RTNL();
        p = kzalloc(sizeof(struct tcf_skbmod_params), GFP_KERNEL);
        if (unlikely(!p)) {
                tcf_idr_release(*a, bind);
@@ -166,10 +165,10 @@ static int tcf_skbmod_init(struct net *net, struct nlattr *nla,
        p->flags = lflags;
        d->tcf_action = parm->action;
 
-       p_old = rtnl_dereference(d->skbmod_p);
-
        if (ovr)
                spin_lock_bh(&d->tcf_lock);
+       /* Protected by tcf_lock if overwriting existing action. */
+       p_old = rcu_dereference_protected(d->skbmod_p, 1);
 
        if (lflags & SKBMOD_F_DMAC)
                ether_addr_copy(p->eth_dst, daddr);
@@ -205,15 +204,18 @@ static int tcf_skbmod_dump(struct sk_buff *skb, struct tc_action *a,
 {
        struct tcf_skbmod *d = to_skbmod(a);
        unsigned char *b = skb_tail_pointer(skb);
-       struct tcf_skbmod_params  *p = rtnl_dereference(d->skbmod_p);
+       struct tcf_skbmod_params  *p;
        struct tc_skbmod opt = {
                .index   = d->tcf_index,
                .refcnt  = refcount_read(&d->tcf_refcnt) - ref,
                .bindcnt = atomic_read(&d->tcf_bindcnt) - bind,
-               .action  = d->tcf_action,
        };
        struct tcf_t t;
 
+       spin_lock_bh(&d->tcf_lock);
+       opt.action = d->tcf_action;
+       p = rcu_dereference_protected(d->skbmod_p,
+                                     lockdep_is_held(&d->tcf_lock));
        opt.flags  = p->flags;
        if (nla_put(skb, TCA_SKBMOD_PARMS, sizeof(opt), &opt))
                goto nla_put_failure;
@@ -231,8 +233,10 @@ static int tcf_skbmod_dump(struct sk_buff *skb, struct tc_action *a,
        if (nla_put_64bit(skb, TCA_SKBMOD_TM, sizeof(t), &t, TCA_SKBMOD_PAD))
                goto nla_put_failure;
 
+       spin_unlock_bh(&d->tcf_lock);
        return skb->len;
 nla_put_failure:
+       spin_unlock_bh(&d->tcf_lock);
        nlmsg_trim(skb, b);
        return -1;
 }
@@ -266,7 +270,7 @@ static struct tc_action_ops act_skbmod_ops = {
        .kind           =       "skbmod",
        .type           =       TCA_ACT_SKBMOD,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_skbmod_run,
+       .act            =       tcf_skbmod_act,
        .dump           =       tcf_skbmod_dump,
        .init           =       tcf_skbmod_init,
        .cleanup        =       tcf_skbmod_cleanup,
index d42d9e112789f3ca4d3037d395a8e30a1a31b989..ba2ae9f75ef59722773c3c7895067da689a0d3c4 100644 (file)
@@ -204,7 +204,6 @@ static int tunnel_key_init(struct net *net, struct nlattr *nla,
 {
        struct tc_action_net *tn = net_generic(net, tunnel_key_net_id);
        struct nlattr *tb[TCA_TUNNEL_KEY_MAX + 1];
-       struct tcf_tunnel_key_params *params_old;
        struct tcf_tunnel_key_params *params_new;
        struct metadata_dst *metadata = NULL;
        struct tc_tunnel_key *parm;
@@ -346,24 +345,22 @@ static int tunnel_key_init(struct net *net, struct nlattr *nla,
 
        t = to_tunnel_key(*a);
 
-       ASSERT_RTNL();
        params_new = kzalloc(sizeof(*params_new), GFP_KERNEL);
        if (unlikely(!params_new)) {
                tcf_idr_release(*a, bind);
                NL_SET_ERR_MSG(extack, "Cannot allocate tunnel key parameters");
                return -ENOMEM;
        }
-
-       params_old = rtnl_dereference(t->params);
-
-       t->tcf_action = parm->action;
        params_new->tcft_action = parm->t_action;
        params_new->tcft_enc_metadata = metadata;
 
-       rcu_assign_pointer(t->params, params_new);
-
-       if (params_old)
-               kfree_rcu(params_old, rcu);
+       spin_lock(&t->tcf_lock);
+       t->tcf_action = parm->action;
+       rcu_swap_protected(t->params, params_new,
+                          lockdep_is_held(&t->tcf_lock));
+       spin_unlock(&t->tcf_lock);
+       if (params_new)
+               kfree_rcu(params_new, rcu);
 
        if (ret == ACT_P_CREATED)
                tcf_idr_insert(tn, *a);
@@ -485,12 +482,13 @@ static int tunnel_key_dump(struct sk_buff *skb, struct tc_action *a,
                .index    = t->tcf_index,
                .refcnt   = refcount_read(&t->tcf_refcnt) - ref,
                .bindcnt  = atomic_read(&t->tcf_bindcnt) - bind,
-               .action   = t->tcf_action,
        };
        struct tcf_t tm;
 
-       params = rtnl_dereference(t->params);
-
+       spin_lock(&t->tcf_lock);
+       params = rcu_dereference_protected(t->params,
+                                          lockdep_is_held(&t->tcf_lock));
+       opt.action   = t->tcf_action;
        opt.t_action = params->tcft_action;
 
        if (nla_put(skb, TCA_TUNNEL_KEY_PARMS, sizeof(opt), &opt))
@@ -522,10 +520,12 @@ static int tunnel_key_dump(struct sk_buff *skb, struct tc_action *a,
        if (nla_put_64bit(skb, TCA_TUNNEL_KEY_TM, sizeof(tm),
                          &tm, TCA_TUNNEL_KEY_PAD))
                goto nla_put_failure;
+       spin_unlock(&t->tcf_lock);
 
        return skb->len;
 
 nla_put_failure:
+       spin_unlock(&t->tcf_lock);
        nlmsg_trim(skb, b);
        return -1;
 }
index 15a0ee214c9cfc171300dbe2266569f7a4387d3c..d1f5028384c9ce43d2706345d7bedf61860c1c6c 100644 (file)
@@ -22,8 +22,8 @@
 static unsigned int vlan_net_id;
 static struct tc_action_ops act_vlan_ops;
 
-static int tcf_vlan(struct sk_buff *skb, const struct tc_action *a,
-                   struct tcf_result *res)
+static int tcf_vlan_act(struct sk_buff *skb, const struct tc_action *a,
+                       struct tcf_result *res)
 {
        struct tcf_vlan *v = to_vlan(a);
        struct tcf_vlan_params *p;
@@ -109,7 +109,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
 {
        struct tc_action_net *tn = net_generic(net, vlan_net_id);
        struct nlattr *tb[TCA_VLAN_MAX + 1];
-       struct tcf_vlan_params *p, *p_old;
+       struct tcf_vlan_params *p;
        struct tc_vlan *parm;
        struct tcf_vlan *v;
        int action;
@@ -202,26 +202,24 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
 
        v = to_vlan(*a);
 
-       ASSERT_RTNL();
        p = kzalloc(sizeof(*p), GFP_KERNEL);
        if (!p) {
                tcf_idr_release(*a, bind);
                return -ENOMEM;
        }
 
-       v->tcf_action = parm->action;
-
-       p_old = rtnl_dereference(v->vlan_p);
-
        p->tcfv_action = action;
        p->tcfv_push_vid = push_vid;
        p->tcfv_push_prio = push_prio;
        p->tcfv_push_proto = push_proto;
 
-       rcu_assign_pointer(v->vlan_p, p);
+       spin_lock(&v->tcf_lock);
+       v->tcf_action = parm->action;
+       rcu_swap_protected(v->vlan_p, p, lockdep_is_held(&v->tcf_lock));
+       spin_unlock(&v->tcf_lock);
 
-       if (p_old)
-               kfree_rcu(p_old, rcu);
+       if (p)
+               kfree_rcu(p, rcu);
 
        if (ret == ACT_P_CREATED)
                tcf_idr_insert(tn, *a);
@@ -243,16 +241,18 @@ static int tcf_vlan_dump(struct sk_buff *skb, struct tc_action *a,
 {
        unsigned char *b = skb_tail_pointer(skb);
        struct tcf_vlan *v = to_vlan(a);
-       struct tcf_vlan_params *p = rtnl_dereference(v->vlan_p);
+       struct tcf_vlan_params *p;
        struct tc_vlan opt = {
                .index    = v->tcf_index,
                .refcnt   = refcount_read(&v->tcf_refcnt) - ref,
                .bindcnt  = atomic_read(&v->tcf_bindcnt) - bind,
-               .action   = v->tcf_action,
-               .v_action = p->tcfv_action,
        };
        struct tcf_t t;
 
+       spin_lock(&v->tcf_lock);
+       opt.action = v->tcf_action;
+       p = rcu_dereference_protected(v->vlan_p, lockdep_is_held(&v->tcf_lock));
+       opt.v_action = p->tcfv_action;
        if (nla_put(skb, TCA_VLAN_PARMS, sizeof(opt), &opt))
                goto nla_put_failure;
 
@@ -268,9 +268,12 @@ static int tcf_vlan_dump(struct sk_buff *skb, struct tc_action *a,
        tcf_tm_dump(&t, &v->tcf_tm);
        if (nla_put_64bit(skb, TCA_VLAN_TM, sizeof(t), &t, TCA_VLAN_PAD))
                goto nla_put_failure;
+       spin_unlock(&v->tcf_lock);
+
        return skb->len;
 
 nla_put_failure:
+       spin_unlock(&v->tcf_lock);
        nlmsg_trim(skb, b);
        return -1;
 }
@@ -304,7 +307,7 @@ static struct tc_action_ops act_vlan_ops = {
        .kind           =       "vlan",
        .type           =       TCA_ACT_VLAN,
        .owner          =       THIS_MODULE,
-       .act            =       tcf_vlan,
+       .act            =       tcf_vlan_act,
        .dump           =       tcf_vlan_dump,
        .init           =       tcf_vlan_init,
        .cleanup        =       tcf_vlan_cleanup,
index 194c2e0b27372878448979bfdeee3fadb932cd52..31bd1439cf6059fbcb3e99525e4a964154ef7907 100644 (file)
@@ -780,6 +780,8 @@ void tcf_block_put_ext(struct tcf_block *block, struct Qdisc *q,
                block->refcnt--;
                if (list_empty(&block->chain_list))
                        kfree(block);
+       } else {
+               block->refcnt--;
        }
 }
 EXPORT_SYMBOL(tcf_block_put_ext);
@@ -2174,6 +2176,7 @@ static int tc_exts_setup_cb_egdev_call(struct tcf_exts *exts,
                if (!dev)
                        continue;
                ret = tc_setup_cb_egdev_call(dev, type, type_data, err_stop);
+               a->ops->put_dev(dev);
                if (ret < 0)
                        return ret;
                ok_count += ret;
index bfb9f812e2ef9fa605b08dc1f534781573c3abf8..ce8087846f05947d2990f6b6deebcadc7c255ac1 100644 (file)
@@ -325,7 +325,8 @@ int sctp_chunk_abandoned(struct sctp_chunk *chunk)
        if (SCTP_PR_TTL_ENABLED(chunk->sinfo.sinfo_flags) &&
            time_after(jiffies, chunk->msg->expires_at)) {
                struct sctp_stream_out *streamout =
-                       &chunk->asoc->stream.out[chunk->sinfo.sinfo_stream];
+                       SCTP_SO(&chunk->asoc->stream,
+                               chunk->sinfo.sinfo_stream);
 
                if (chunk->sent_count) {
                        chunk->asoc->abandoned_sent[SCTP_PR_INDEX(TTL)]++;
@@ -339,7 +340,8 @@ int sctp_chunk_abandoned(struct sctp_chunk *chunk)
        } else if (SCTP_PR_RTX_ENABLED(chunk->sinfo.sinfo_flags) &&
                   chunk->sent_count > chunk->sinfo.sinfo_timetolive) {
                struct sctp_stream_out *streamout =
-                       &chunk->asoc->stream.out[chunk->sinfo.sinfo_stream];
+                       SCTP_SO(&chunk->asoc->stream,
+                               chunk->sinfo.sinfo_stream);
 
                chunk->asoc->abandoned_sent[SCTP_PR_INDEX(RTX)]++;
                streamout->ext->abandoned_sent[SCTP_PR_INDEX(RTX)]++;
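The stream.out[]/stream.in[] arrays are converted to flex_arrays later in this diff (net/sctp/stream.c), so direct indexing in the SCTP hunks is replaced by the SCTP_SO()/SCTP_SI() accessors. Their actual definitions live in include/net/sctp/structs.h and are not shown here; conceptually they should amount to a typed flex_array_get(), roughly as below (a hedged approximation, not the commit's code):

    /* Approximation only -- not the real macros from this series. */
    #define SCTP_SO(s, i) \
            ((struct sctp_stream_out *)flex_array_get((s)->out, i))
    #define SCTP_SI(s, i) \
            ((struct sctp_stream_in *)flex_array_get((s)->in, i))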
index d68aa33485a94e87858fed9b655f00a1b9748998..d74d00b299421a940766f157bf8b48038b827cd1 100644 (file)
@@ -80,7 +80,7 @@ static inline void sctp_outq_head_data(struct sctp_outq *q,
        q->out_qlen += ch->skb->len;
 
        stream = sctp_chunk_stream_no(ch);
-       oute = q->asoc->stream.out[stream].ext;
+       oute = SCTP_SO(&q->asoc->stream, stream)->ext;
        list_add(&ch->stream_list, &oute->outq);
 }
 
@@ -101,7 +101,7 @@ static inline void sctp_outq_tail_data(struct sctp_outq *q,
        q->out_qlen += ch->skb->len;
 
        stream = sctp_chunk_stream_no(ch);
-       oute = q->asoc->stream.out[stream].ext;
+       oute = SCTP_SO(&q->asoc->stream, stream)->ext;
        list_add_tail(&ch->stream_list, &oute->outq);
 }
 
@@ -372,7 +372,7 @@ static int sctp_prsctp_prune_sent(struct sctp_association *asoc,
                sctp_insert_list(&asoc->outqueue.abandoned,
                                 &chk->transmitted_list);
 
-               streamout = &asoc->stream.out[chk->sinfo.sinfo_stream];
+               streamout = SCTP_SO(&asoc->stream, chk->sinfo.sinfo_stream);
                asoc->sent_cnt_removable--;
                asoc->abandoned_sent[SCTP_PR_INDEX(PRIO)]++;
                streamout->ext->abandoned_sent[SCTP_PR_INDEX(PRIO)]++;
@@ -416,7 +416,7 @@ static int sctp_prsctp_prune_unsent(struct sctp_association *asoc,
                asoc->abandoned_unsent[SCTP_PR_INDEX(PRIO)]++;
                if (chk->sinfo.sinfo_stream < asoc->stream.outcnt) {
                        struct sctp_stream_out *streamout =
-                               &asoc->stream.out[chk->sinfo.sinfo_stream];
+                               SCTP_SO(&asoc->stream, chk->sinfo.sinfo_stream);
 
                        streamout->ext->abandoned_unsent[SCTP_PR_INDEX(PRIO)]++;
                }
@@ -1082,6 +1082,7 @@ static void sctp_outq_flush_data(struct sctp_flush_ctx *ctx,
        /* Finally, transmit new packets.  */
        while ((chunk = sctp_outq_dequeue_data(ctx->q)) != NULL) {
                __u32 sid = ntohs(chunk->subh.data_hdr->stream);
+               __u8 stream_state = SCTP_SO(&ctx->asoc->stream, sid)->state;
 
                /* Has this chunk expired? */
                if (sctp_chunk_abandoned(chunk)) {
@@ -1091,7 +1092,7 @@ static void sctp_outq_flush_data(struct sctp_flush_ctx *ctx,
                        continue;
                }
 
-               if (ctx->asoc->stream.out[sid].state == SCTP_STREAM_CLOSED) {
+               if (stream_state == SCTP_STREAM_CLOSED) {
                        sctp_outq_head_data(ctx->q, chunk);
                        break;
                }
index 502c0d7cb105e27306df08155527d147e4fdac5f..e96b15a66abaa481b64db1ebd34f75d59d724c3d 100644 (file)
@@ -1911,7 +1911,7 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
                goto err;
        }
 
-       if (unlikely(!asoc->stream.out[sinfo->sinfo_stream].ext)) {
+       if (unlikely(!SCTP_SO(&asoc->stream, sinfo->sinfo_stream)->ext)) {
                err = sctp_stream_init_ext(&asoc->stream, sinfo->sinfo_stream);
                if (err)
                        goto err;
@@ -7154,7 +7154,7 @@ static int sctp_getsockopt_pr_streamstatus(struct sock *sk, int len,
        if (!asoc || params.sprstat_sid >= asoc->stream.outcnt)
                goto out;
 
-       streamoute = asoc->stream.out[params.sprstat_sid].ext;
+       streamoute = SCTP_SO(&asoc->stream, params.sprstat_sid)->ext;
        if (!streamoute) {
                /* Not allocated yet, means all stats are 0 */
                params.sprstat_abandoned_unsent = 0;
index f1f1d1b232ba3e75ed28bf60f5eb111c56fa4852..ffb940d3b57c1ca90dec8861b33da6d40f01824f 100644 (file)
 #include <net/sctp/sm.h>
 #include <net/sctp/stream_sched.h>
 
+static struct flex_array *fa_alloc(size_t elem_size, size_t elem_count,
+                                  gfp_t gfp)
+{
+       struct flex_array *result;
+       int err;
+
+       result = flex_array_alloc(elem_size, elem_count, gfp);
+       if (result) {
+               err = flex_array_prealloc(result, 0, elem_count, gfp);
+               if (err) {
+                       flex_array_free(result);
+                       result = NULL;
+               }
+       }
+
+       return result;
+}
+
+static void fa_free(struct flex_array *fa)
+{
+       if (fa)
+               flex_array_free(fa);
+}
+
+static void fa_copy(struct flex_array *fa, struct flex_array *from,
+                   size_t index, size_t count)
+{
+       void *elem;
+
+       while (count--) {
+               elem = flex_array_get(from, index);
+               flex_array_put(fa, index, elem, 0);
+               index++;
+       }
+}
+
+static void fa_zero(struct flex_array *fa, size_t index, size_t count)
+{
+       void *elem;
+
+       while (count--) {
+               elem = flex_array_get(fa, index);
+               memset(elem, 0, fa->element_size);
+               index++;
+       }
+}
+
 /* Migrates chunks from stream queues to new stream queues if needed,
  * but not across associations. Also, removes those chunks to streams
  * higher than the new max.
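fa_alloc() above pairs flex_array_alloc() with flex_array_prealloc() so that every element's backing part page exists up front; that keeps flex_array_get() from returning NULL and lets fa_copy() pass a zero gfp mask to flex_array_put(), while avoiding the single large physically contiguous allocation that kmalloc_array() needed. A small usage fragment for the helpers (illustrative, not from the commit):

    /* Allocate a fully preallocated array of stream-out slots, touch one
     * slot in place, then free it.  Error handling trimmed to the minimum.
     */
    struct flex_array *fa;
    struct sctp_stream_out *so;

    fa = fa_alloc(sizeof(struct sctp_stream_out), 64, GFP_KERNEL);
    if (!fa)
            return -ENOMEM;

    so = flex_array_get(fa, 12);    /* never NULL: slot 12 was preallocated */
    so->state = SCTP_STREAM_OPEN;

    fa_free(fa);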
@@ -78,34 +125,33 @@ static void sctp_stream_outq_migrate(struct sctp_stream *stream,
                 * sctp_stream_update will swap ->out pointers.
                 */
                for (i = 0; i < outcnt; i++) {
-                       kfree(new->out[i].ext);
-                       new->out[i].ext = stream->out[i].ext;
-                       stream->out[i].ext = NULL;
+                       kfree(SCTP_SO(new, i)->ext);
+                       SCTP_SO(new, i)->ext = SCTP_SO(stream, i)->ext;
+                       SCTP_SO(stream, i)->ext = NULL;
                }
        }
 
        for (i = outcnt; i < stream->outcnt; i++)
-               kfree(stream->out[i].ext);
+               kfree(SCTP_SO(stream, i)->ext);
 }
 
 static int sctp_stream_alloc_out(struct sctp_stream *stream, __u16 outcnt,
                                 gfp_t gfp)
 {
-       struct sctp_stream_out *out;
+       struct flex_array *out;
+       size_t elem_size = sizeof(struct sctp_stream_out);
 
-       out = kmalloc_array(outcnt, sizeof(*out), gfp);
+       out = fa_alloc(elem_size, outcnt, gfp);
        if (!out)
                return -ENOMEM;
 
        if (stream->out) {
-               memcpy(out, stream->out, min(outcnt, stream->outcnt) *
-                                        sizeof(*out));
-               kfree(stream->out);
+               fa_copy(out, stream->out, 0, min(outcnt, stream->outcnt));
+               fa_free(stream->out);
        }
 
        if (outcnt > stream->outcnt)
-               memset(out + stream->outcnt, 0,
-                      (outcnt - stream->outcnt) * sizeof(*out));
+               fa_zero(out, stream->outcnt, (outcnt - stream->outcnt));
 
        stream->out = out;
 
@@ -115,22 +161,20 @@ static int sctp_stream_alloc_out(struct sctp_stream *stream, __u16 outcnt,
 static int sctp_stream_alloc_in(struct sctp_stream *stream, __u16 incnt,
                                gfp_t gfp)
 {
-       struct sctp_stream_in *in;
-
-       in = kmalloc_array(incnt, sizeof(*stream->in), gfp);
+       struct flex_array *in;
+       size_t elem_size = sizeof(struct sctp_stream_in);
 
+       in = fa_alloc(elem_size, incnt, gfp);
        if (!in)
                return -ENOMEM;
 
        if (stream->in) {
-               memcpy(in, stream->in, min(incnt, stream->incnt) *
-                                      sizeof(*in));
-               kfree(stream->in);
+               fa_copy(in, stream->in, 0, min(incnt, stream->incnt));
+               fa_free(stream->in);
        }
 
        if (incnt > stream->incnt)
-               memset(in + stream->incnt, 0,
-                      (incnt - stream->incnt) * sizeof(*in));
+               fa_zero(in, stream->incnt, (incnt - stream->incnt));
 
        stream->in = in;
 
@@ -162,7 +206,7 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt,
 
        stream->outcnt = outcnt;
        for (i = 0; i < stream->outcnt; i++)
-               stream->out[i].state = SCTP_STREAM_OPEN;
+               SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
 
        sched->init(stream);
 
@@ -174,7 +218,7 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt,
        ret = sctp_stream_alloc_in(stream, incnt, gfp);
        if (ret) {
                sched->free(stream);
-               kfree(stream->out);
+               fa_free(stream->out);
                stream->out = NULL;
                stream->outcnt = 0;
                goto out;
@@ -193,7 +237,7 @@ int sctp_stream_init_ext(struct sctp_stream *stream, __u16 sid)
        soute = kzalloc(sizeof(*soute), GFP_KERNEL);
        if (!soute)
                return -ENOMEM;
-       stream->out[sid].ext = soute;
+       SCTP_SO(stream, sid)->ext = soute;
 
        return sctp_sched_init_sid(stream, sid, GFP_KERNEL);
 }
@@ -205,9 +249,9 @@ void sctp_stream_free(struct sctp_stream *stream)
 
        sched->free(stream);
        for (i = 0; i < stream->outcnt; i++)
-               kfree(stream->out[i].ext);
-       kfree(stream->out);
-       kfree(stream->in);
+               kfree(SCTP_SO(stream, i)->ext);
+       fa_free(stream->out);
+       fa_free(stream->in);
 }
 
 void sctp_stream_clear(struct sctp_stream *stream)
@@ -215,12 +259,12 @@ void sctp_stream_clear(struct sctp_stream *stream)
        int i;
 
        for (i = 0; i < stream->outcnt; i++) {
-               stream->out[i].mid = 0;
-               stream->out[i].mid_uo = 0;
+               SCTP_SO(stream, i)->mid = 0;
+               SCTP_SO(stream, i)->mid_uo = 0;
        }
 
        for (i = 0; i < stream->incnt; i++)
-               stream->in[i].mid = 0;
+               SCTP_SI(stream, i)->mid = 0;
 }
 
 void sctp_stream_update(struct sctp_stream *stream, struct sctp_stream *new)
@@ -273,8 +317,8 @@ static bool sctp_stream_outq_is_empty(struct sctp_stream *stream,
        for (i = 0; i < str_nums; i++) {
                __u16 sid = ntohs(str_list[i]);
 
-               if (stream->out[sid].ext &&
-                   !list_empty(&stream->out[sid].ext->outq))
+               if (SCTP_SO(stream, sid)->ext &&
+                   !list_empty(&SCTP_SO(stream, sid)->ext->outq))
                        return false;
        }
 
@@ -361,11 +405,11 @@ int sctp_send_reset_streams(struct sctp_association *asoc,
        if (out) {
                if (str_nums)
                        for (i = 0; i < str_nums; i++)
-                               stream->out[str_list[i]].state =
+                               SCTP_SO(stream, str_list[i])->state =
                                                       SCTP_STREAM_CLOSED;
                else
                        for (i = 0; i < stream->outcnt; i++)
-                               stream->out[i].state = SCTP_STREAM_CLOSED;
+                               SCTP_SO(stream, i)->state = SCTP_STREAM_CLOSED;
        }
 
        asoc->strreset_chunk = chunk;
@@ -380,11 +424,11 @@ int sctp_send_reset_streams(struct sctp_association *asoc,
 
                if (str_nums)
                        for (i = 0; i < str_nums; i++)
-                               stream->out[str_list[i]].state =
+                               SCTP_SO(stream, str_list[i])->state =
                                                       SCTP_STREAM_OPEN;
                else
                        for (i = 0; i < stream->outcnt; i++)
-                               stream->out[i].state = SCTP_STREAM_OPEN;
+                               SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
 
                goto out;
        }
@@ -418,7 +462,7 @@ int sctp_send_reset_assoc(struct sctp_association *asoc)
 
        /* Block further xmit of data until this request is completed */
        for (i = 0; i < stream->outcnt; i++)
-               stream->out[i].state = SCTP_STREAM_CLOSED;
+               SCTP_SO(stream, i)->state = SCTP_STREAM_CLOSED;
 
        asoc->strreset_chunk = chunk;
        sctp_chunk_hold(asoc->strreset_chunk);
@@ -429,7 +473,7 @@ int sctp_send_reset_assoc(struct sctp_association *asoc)
                asoc->strreset_chunk = NULL;
 
                for (i = 0; i < stream->outcnt; i++)
-                       stream->out[i].state = SCTP_STREAM_OPEN;
+                       SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
 
                return retval;
        }
@@ -609,10 +653,10 @@ struct sctp_chunk *sctp_process_strreset_outreq(
                }
 
                for (i = 0; i < nums; i++)
-                       stream->in[ntohs(str_p[i])].mid = 0;
+                       SCTP_SI(stream, ntohs(str_p[i]))->mid = 0;
        } else {
                for (i = 0; i < stream->incnt; i++)
-                       stream->in[i].mid = 0;
+                       SCTP_SI(stream, i)->mid = 0;
        }
 
        result = SCTP_STRRESET_PERFORMED;
@@ -683,11 +727,11 @@ struct sctp_chunk *sctp_process_strreset_inreq(
 
        if (nums)
                for (i = 0; i < nums; i++)
-                       stream->out[ntohs(str_p[i])].state =
+                       SCTP_SO(stream, ntohs(str_p[i]))->state =
                                               SCTP_STREAM_CLOSED;
        else
                for (i = 0; i < stream->outcnt; i++)
-                       stream->out[i].state = SCTP_STREAM_CLOSED;
+                       SCTP_SO(stream, i)->state = SCTP_STREAM_CLOSED;
 
        asoc->strreset_chunk = chunk;
        asoc->strreset_outstanding = 1;
@@ -786,11 +830,11 @@ struct sctp_chunk *sctp_process_strreset_tsnreq(
         *      incoming and outgoing streams.
         */
        for (i = 0; i < stream->outcnt; i++) {
-               stream->out[i].mid = 0;
-               stream->out[i].mid_uo = 0;
+               SCTP_SO(stream, i)->mid = 0;
+               SCTP_SO(stream, i)->mid_uo = 0;
        }
        for (i = 0; i < stream->incnt; i++)
-               stream->in[i].mid = 0;
+               SCTP_SI(stream, i)->mid = 0;
 
        result = SCTP_STRRESET_PERFORMED;
 
@@ -979,15 +1023,18 @@ struct sctp_chunk *sctp_process_strreset_resp(
                       sizeof(__u16);
 
                if (result == SCTP_STRRESET_PERFORMED) {
+                       struct sctp_stream_out *sout;
                        if (nums) {
                                for (i = 0; i < nums; i++) {
-                                       stream->out[ntohs(str_p[i])].mid = 0;
-                                       stream->out[ntohs(str_p[i])].mid_uo = 0;
+                                       sout = SCTP_SO(stream, ntohs(str_p[i]));
+                                       sout->mid = 0;
+                                       sout->mid_uo = 0;
                                }
                        } else {
                                for (i = 0; i < stream->outcnt; i++) {
-                                       stream->out[i].mid = 0;
-                                       stream->out[i].mid_uo = 0;
+                                       sout = SCTP_SO(stream, i);
+                                       sout->mid = 0;
+                                       sout->mid_uo = 0;
                                }
                        }
 
@@ -995,7 +1042,7 @@ struct sctp_chunk *sctp_process_strreset_resp(
                }
 
                for (i = 0; i < stream->outcnt; i++)
-                       stream->out[i].state = SCTP_STREAM_OPEN;
+                       SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
 
                *evp = sctp_ulpevent_make_stream_reset_event(asoc, flags,
                        nums, str_p, GFP_ATOMIC);
@@ -1050,15 +1097,15 @@ struct sctp_chunk *sctp_process_strreset_resp(
                        asoc->adv_peer_ack_point = asoc->ctsn_ack_point;
 
                        for (i = 0; i < stream->outcnt; i++) {
-                               stream->out[i].mid = 0;
-                               stream->out[i].mid_uo = 0;
+                               SCTP_SO(stream, i)->mid = 0;
+                               SCTP_SO(stream, i)->mid_uo = 0;
                        }
                        for (i = 0; i < stream->incnt; i++)
-                               stream->in[i].mid = 0;
+                               SCTP_SI(stream, i)->mid = 0;
                }
 
                for (i = 0; i < stream->outcnt; i++)
-                       stream->out[i].state = SCTP_STREAM_OPEN;
+                       SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
 
                *evp = sctp_ulpevent_make_assoc_reset_event(asoc, flags,
                        stsn, rtsn, GFP_ATOMIC);
@@ -1072,7 +1119,7 @@ struct sctp_chunk *sctp_process_strreset_resp(
 
                if (result == SCTP_STRRESET_PERFORMED)
                        for (i = number; i < stream->outcnt; i++)
-                               stream->out[i].state = SCTP_STREAM_OPEN;
+                               SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
                else
                        stream->outcnt = number;
 
index d3764c18129971c22ce37d60285ec06ef5fe60a1..0a78cdf864633eb68af1520fe5e09484dd8ef76f 100644 (file)
@@ -197,7 +197,7 @@ static struct sctp_ulpevent *sctp_intl_retrieve_partial(
        __u32 next_fsn = 0;
        int is_last = 0;
 
-       sin = sctp_stream_in(ulpq->asoc, event->stream);
+       sin = sctp_stream_in(&ulpq->asoc->stream, event->stream);
 
        skb_queue_walk(&ulpq->reasm, pos) {
                struct sctp_ulpevent *cevent = sctp_skb2event(pos);
@@ -278,7 +278,7 @@ static struct sctp_ulpevent *sctp_intl_retrieve_reassembled(
        __u32 pd_len = 0;
        __u32 mid = 0;
 
-       sin = sctp_stream_in(ulpq->asoc, event->stream);
+       sin = sctp_stream_in(&ulpq->asoc->stream, event->stream);
 
        skb_queue_walk(&ulpq->reasm, pos) {
                struct sctp_ulpevent *cevent = sctp_skb2event(pos);
@@ -368,7 +368,7 @@ static struct sctp_ulpevent *sctp_intl_reasm(struct sctp_ulpq *ulpq,
 
        sctp_intl_store_reasm(ulpq, event);
 
-       sin = sctp_stream_in(ulpq->asoc, event->stream);
+       sin = sctp_stream_in(&ulpq->asoc->stream, event->stream);
        if (sin->pd_mode && event->mid == sin->mid &&
            event->fsn == sin->fsn)
                retval = sctp_intl_retrieve_partial(ulpq, event);
@@ -575,7 +575,7 @@ static struct sctp_ulpevent *sctp_intl_retrieve_partial_uo(
        __u32 next_fsn = 0;
        int is_last = 0;
 
-       sin = sctp_stream_in(ulpq->asoc, event->stream);
+       sin = sctp_stream_in(&ulpq->asoc->stream, event->stream);
 
        skb_queue_walk(&ulpq->reasm_uo, pos) {
                struct sctp_ulpevent *cevent = sctp_skb2event(pos);
@@ -659,7 +659,7 @@ static struct sctp_ulpevent *sctp_intl_retrieve_reassembled_uo(
        __u32 pd_len = 0;
        __u32 mid = 0;
 
-       sin = sctp_stream_in(ulpq->asoc, event->stream);
+       sin = sctp_stream_in(&ulpq->asoc->stream, event->stream);
 
        skb_queue_walk(&ulpq->reasm_uo, pos) {
                struct sctp_ulpevent *cevent = sctp_skb2event(pos);
@@ -750,7 +750,7 @@ static struct sctp_ulpevent *sctp_intl_reasm_uo(struct sctp_ulpq *ulpq,
 
        sctp_intl_store_reasm_uo(ulpq, event);
 
-       sin = sctp_stream_in(ulpq->asoc, event->stream);
+       sin = sctp_stream_in(&ulpq->asoc->stream, event->stream);
        if (sin->pd_mode_uo && event->mid == sin->mid_uo &&
            event->fsn == sin->fsn_uo)
                retval = sctp_intl_retrieve_partial_uo(ulpq, event);
@@ -774,7 +774,7 @@ static struct sctp_ulpevent *sctp_intl_retrieve_first_uo(struct sctp_ulpq *ulpq)
        skb_queue_walk(&ulpq->reasm_uo, pos) {
                struct sctp_ulpevent *cevent = sctp_skb2event(pos);
 
-               csin = sctp_stream_in(ulpq->asoc, cevent->stream);
+               csin = sctp_stream_in(&ulpq->asoc->stream, cevent->stream);
                if (csin->pd_mode_uo)
                        continue;
 
@@ -875,7 +875,7 @@ static struct sctp_ulpevent *sctp_intl_retrieve_first(struct sctp_ulpq *ulpq)
        skb_queue_walk(&ulpq->reasm, pos) {
                struct sctp_ulpevent *cevent = sctp_skb2event(pos);
 
-               csin = sctp_stream_in(ulpq->asoc, cevent->stream);
+               csin = sctp_stream_in(&ulpq->asoc->stream, cevent->stream);
                if (csin->pd_mode)
                        continue;
 
@@ -1053,7 +1053,7 @@ static void sctp_intl_abort_pd(struct sctp_ulpq *ulpq, gfp_t gfp)
        __u16 sid;
 
        for (sid = 0; sid < stream->incnt; sid++) {
-               struct sctp_stream_in *sin = &stream->in[sid];
+               struct sctp_stream_in *sin = SCTP_SI(stream, sid);
                __u32 mid;
 
                if (sin->pd_mode_uo) {
@@ -1247,7 +1247,7 @@ static void sctp_handle_fwdtsn(struct sctp_ulpq *ulpq, struct sctp_chunk *chunk)
 static void sctp_intl_skip(struct sctp_ulpq *ulpq, __u16 sid, __u32 mid,
                           __u8 flags)
 {
-       struct sctp_stream_in *sin = sctp_stream_in(ulpq->asoc, sid);
+       struct sctp_stream_in *sin = sctp_stream_in(&ulpq->asoc->stream, sid);
        struct sctp_stream *stream  = &ulpq->asoc->stream;
 
        if (flags & SCTP_FTSN_U_BIT) {
index f5fcd425232a614eff22da5520151f408d106180..a6c04a94b08f20c4bbd05357ad080edf7b4fafc0 100644 (file)
@@ -161,7 +161,7 @@ int sctp_sched_set_sched(struct sctp_association *asoc,
 
                /* Give the next scheduler a clean slate. */
                for (i = 0; i < asoc->stream.outcnt; i++) {
-                       void *p = asoc->stream.out[i].ext;
+                       void *p = SCTP_SO(&asoc->stream, i)->ext;
 
                        if (!p)
                                continue;
@@ -175,7 +175,7 @@ int sctp_sched_set_sched(struct sctp_association *asoc,
        asoc->outqueue.sched = n;
        n->init(&asoc->stream);
        for (i = 0; i < asoc->stream.outcnt; i++) {
-               if (!asoc->stream.out[i].ext)
+               if (!SCTP_SO(&asoc->stream, i)->ext)
                        continue;
 
                ret = n->init_sid(&asoc->stream, i, GFP_KERNEL);
@@ -217,7 +217,7 @@ int sctp_sched_set_value(struct sctp_association *asoc, __u16 sid,
        if (sid >= asoc->stream.outcnt)
                return -EINVAL;
 
-       if (!asoc->stream.out[sid].ext) {
+       if (!SCTP_SO(&asoc->stream, sid)->ext) {
                int ret;
 
                ret = sctp_stream_init_ext(&asoc->stream, sid);
@@ -234,7 +234,7 @@ int sctp_sched_get_value(struct sctp_association *asoc, __u16 sid,
        if (sid >= asoc->stream.outcnt)
                return -EINVAL;
 
-       if (!asoc->stream.out[sid].ext)
+       if (!SCTP_SO(&asoc->stream, sid)->ext)
                return 0;
 
        return asoc->outqueue.sched->get(&asoc->stream, sid, value);
@@ -252,7 +252,7 @@ void sctp_sched_dequeue_done(struct sctp_outq *q, struct sctp_chunk *ch)
                 * priority stream comes in.
                 */
                sid = sctp_chunk_stream_no(ch);
-               sout = &q->asoc->stream.out[sid];
+               sout = SCTP_SO(&q->asoc->stream, sid);
                q->asoc->stream.out_curr = sout;
                return;
        }
@@ -272,8 +272,9 @@ void sctp_sched_dequeue_common(struct sctp_outq *q, struct sctp_chunk *ch)
 int sctp_sched_init_sid(struct sctp_stream *stream, __u16 sid, gfp_t gfp)
 {
        struct sctp_sched_ops *sched = sctp_sched_ops_from_stream(stream);
+       struct sctp_stream_out_ext *ext = SCTP_SO(stream, sid)->ext;
 
-       INIT_LIST_HEAD(&stream->out[sid].ext->outq);
+       INIT_LIST_HEAD(&ext->outq);
        return sched->init_sid(stream, sid, gfp);
 }
 
index 7997d35dd0fdfbd9a9a3180762dd3c7be556b1af..2245083a98f24ab53653d8377e0ccff3c1aa3cf6 100644 (file)
@@ -75,10 +75,10 @@ static struct sctp_stream_priorities *sctp_sched_prio_get_head(
 
        /* No luck. So we search on all streams now. */
        for (i = 0; i < stream->outcnt; i++) {
-               if (!stream->out[i].ext)
+               if (!SCTP_SO(stream, i)->ext)
                        continue;
 
-               p = stream->out[i].ext->prio_head;
+               p = SCTP_SO(stream, i)->ext->prio_head;
                if (!p)
                        /* Means all other streams won't be initialized
                         * as well.
@@ -165,7 +165,7 @@ static void sctp_sched_prio_sched(struct sctp_stream *stream,
 static int sctp_sched_prio_set(struct sctp_stream *stream, __u16 sid,
                               __u16 prio, gfp_t gfp)
 {
-       struct sctp_stream_out *sout = &stream->out[sid];
+       struct sctp_stream_out *sout = SCTP_SO(stream, sid);
        struct sctp_stream_out_ext *soute = sout->ext;
        struct sctp_stream_priorities *prio_head, *old;
        bool reschedule = false;
@@ -186,7 +186,7 @@ static int sctp_sched_prio_set(struct sctp_stream *stream, __u16 sid,
                return 0;
 
        for (i = 0; i < stream->outcnt; i++) {
-               soute = stream->out[i].ext;
+               soute = SCTP_SO(stream, i)->ext;
                if (soute && soute->prio_head == old)
                        /* It's still in use, nothing else to do here. */
                        return 0;
@@ -201,7 +201,7 @@ static int sctp_sched_prio_set(struct sctp_stream *stream, __u16 sid,
 static int sctp_sched_prio_get(struct sctp_stream *stream, __u16 sid,
                               __u16 *value)
 {
-       *value = stream->out[sid].ext->prio_head->prio;
+       *value = SCTP_SO(stream, sid)->ext->prio_head->prio;
        return 0;
 }
 
@@ -215,7 +215,7 @@ static int sctp_sched_prio_init(struct sctp_stream *stream)
 static int sctp_sched_prio_init_sid(struct sctp_stream *stream, __u16 sid,
                                    gfp_t gfp)
 {
-       INIT_LIST_HEAD(&stream->out[sid].ext->prio_list);
+       INIT_LIST_HEAD(&SCTP_SO(stream, sid)->ext->prio_list);
        return sctp_sched_prio_set(stream, sid, 0, gfp);
 }
 
@@ -233,9 +233,9 @@ static void sctp_sched_prio_free(struct sctp_stream *stream)
         */
        sctp_sched_prio_unsched_all(stream);
        for (i = 0; i < stream->outcnt; i++) {
-               if (!stream->out[i].ext)
+               if (!SCTP_SO(stream, i)->ext)
                        continue;
-               prio = stream->out[i].ext->prio_head;
+               prio = SCTP_SO(stream, i)->ext->prio_head;
                if (prio && list_empty(&prio->prio_sched))
                        list_add(&prio->prio_sched, &list);
        }
@@ -255,7 +255,7 @@ static void sctp_sched_prio_enqueue(struct sctp_outq *q,
        ch = list_first_entry(&msg->chunks, struct sctp_chunk, frag_list);
        sid = sctp_chunk_stream_no(ch);
        stream = &q->asoc->stream;
-       sctp_sched_prio_sched(stream, stream->out[sid].ext);
+       sctp_sched_prio_sched(stream, SCTP_SO(stream, sid)->ext);
 }
 
 static struct sctp_chunk *sctp_sched_prio_dequeue(struct sctp_outq *q)
@@ -297,7 +297,7 @@ static void sctp_sched_prio_dequeue_done(struct sctp_outq *q,
         * this priority.
         */
        sid = sctp_chunk_stream_no(ch);
-       soute = q->asoc->stream.out[sid].ext;
+       soute = SCTP_SO(&q->asoc->stream, sid)->ext;
        prio = soute->prio_head;
 
        sctp_sched_prio_next_stream(prio);
@@ -317,7 +317,7 @@ static void sctp_sched_prio_sched_all(struct sctp_stream *stream)
                __u16 sid;
 
                sid = sctp_chunk_stream_no(ch);
-               sout = &stream->out[sid];
+               sout = SCTP_SO(stream, sid);
                if (sout->ext)
                        sctp_sched_prio_sched(stream, sout->ext);
        }
index 1155692448f1aecf87095b379a753747ec303782..52ba743fa7a7d26c47f3c4dbaea89d90c00b2229 100644 (file)
@@ -100,7 +100,7 @@ static int sctp_sched_rr_init(struct sctp_stream *stream)
 static int sctp_sched_rr_init_sid(struct sctp_stream *stream, __u16 sid,
                                  gfp_t gfp)
 {
-       INIT_LIST_HEAD(&stream->out[sid].ext->rr_list);
+       INIT_LIST_HEAD(&SCTP_SO(stream, sid)->ext->rr_list);
 
        return 0;
 }
@@ -120,7 +120,7 @@ static void sctp_sched_rr_enqueue(struct sctp_outq *q,
        ch = list_first_entry(&msg->chunks, struct sctp_chunk, frag_list);
        sid = sctp_chunk_stream_no(ch);
        stream = &q->asoc->stream;
-       sctp_sched_rr_sched(stream, stream->out[sid].ext);
+       sctp_sched_rr_sched(stream, SCTP_SO(stream, sid)->ext);
 }
 
 static struct sctp_chunk *sctp_sched_rr_dequeue(struct sctp_outq *q)
@@ -154,7 +154,7 @@ static void sctp_sched_rr_dequeue_done(struct sctp_outq *q,
 
        /* Last chunk on that msg, move to the next stream */
        sid = sctp_chunk_stream_no(ch);
-       soute = q->asoc->stream.out[sid].ext;
+       soute = SCTP_SO(&q->asoc->stream, sid)->ext;
 
        sctp_sched_rr_next_stream(&q->asoc->stream);
 
@@ -173,7 +173,7 @@ static void sctp_sched_rr_sched_all(struct sctp_stream *stream)
                __u16 sid;
 
                sid = sctp_chunk_stream_no(ch);
-               soute = stream->out[sid].ext;
+               soute = SCTP_SO(stream, sid)->ext;
                if (soute)
                        sctp_sched_rr_sched(stream, soute);
        }
index cfade7fdcc6d3115d18e9da7b0bfd51b247ed7f7..e36f21ce725208c62164847602e99df91d2c40ba 100644 (file)
@@ -184,6 +184,37 @@ struct smc_ism_event_work {
        struct smcd_event event;
 };
 
+#define ISM_EVENT_REQUEST              0x0001
+#define ISM_EVENT_RESPONSE             0x0002
+#define ISM_EVENT_REQUEST_IR           0x00000001
+#define ISM_EVENT_CODE_TESTLINK                0x83
+
+static void smcd_handle_sw_event(struct smc_ism_event_work *wrk)
+{
+       union {
+               u64     info;
+               struct {
+                       u32             uid;
+                       unsigned short  vlanid;
+                       u16             code;
+               };
+       } ev_info;
+
+       switch (wrk->event.code) {
+       case ISM_EVENT_CODE_TESTLINK:   /* Activity timer */
+               ev_info.info = wrk->event.info;
+               if (ev_info.code == ISM_EVENT_REQUEST) {
+                       ev_info.code = ISM_EVENT_RESPONSE;
+                       wrk->smcd->ops->signal_event(wrk->smcd,
+                                                    wrk->event.tok,
+                                                    ISM_EVENT_REQUEST_IR,
+                                                    ISM_EVENT_CODE_TESTLINK,
+                                                    ev_info.info);
+               }
+               break;
+       }
+}
+
 /* worker for SMC-D events */
 static void smc_ism_event_work(struct work_struct *work)
 {
@@ -196,6 +227,9 @@ static void smc_ism_event_work(struct work_struct *work)
                break;
        case ISM_EVENT_DMB:
                break;
+       case ISM_EVENT_SWR:     /* Software defined event */
+               smcd_handle_sw_event(wrk);
+               break;
        }
        kfree(wrk);
 }
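The anonymous union in smcd_handle_sw_event() above simply reinterprets the 64-bit event info word as uid/vlanid/code fields. A stand-alone sketch of the same decode/re-encode pattern (field widths copied from the union above; the raw value is hypothetical, and the byte placement is endianness-dependent):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	union {
		uint64_t info;
		struct {
			uint32_t uid;
			uint16_t vlanid;
			uint16_t code;
		};
	} ev_info;

	/* Hypothetical raw event word; on a little-endian host this
	 * decodes to uid=7, vlanid=0x2a, code=0x0001 (a test-link request).
	 */
	ev_info.info = 0x0001002a00000007ULL;
	printf("uid=%u vlanid=%u code=0x%x\n",
	       ev_info.uid, ev_info.vlanid, ev_info.code);

	/* Turn the request into a response in place, as the handler does. */
	ev_info.code = 0x0002;
	printf("reply info=0x%016llx\n", (unsigned long long)ev_info.info);
	return 0;
}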
index 83d67df33f0cfac685b3078af0f2a215add3eb7e..52fbe727d7c10c7e723238ee28ce05a11a720e77 100644 (file)
@@ -48,19 +48,13 @@ static int tls_do_decryption(struct sock *sk,
                             struct scatterlist *sgout,
                             char *iv_recv,
                             size_t data_len,
-                            struct sk_buff *skb,
-                            gfp_t flags)
+                            struct aead_request *aead_req)
 {
        struct tls_context *tls_ctx = tls_get_ctx(sk);
        struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
-       struct aead_request *aead_req;
-
        int ret;
 
-       aead_req = aead_request_alloc(ctx->aead_recv, flags);
-       if (!aead_req)
-               return -ENOMEM;
-
+       aead_request_set_tfm(aead_req, ctx->aead_recv);
        aead_request_set_ad(aead_req, TLS_AAD_SPACE_SIZE);
        aead_request_set_crypt(aead_req, sgin, sgout,
                               data_len + tls_ctx->rx.tag_size,
@@ -69,8 +63,6 @@ static int tls_do_decryption(struct sock *sk,
                                  crypto_req_done, &ctx->async_wait);
 
        ret = crypto_wait_req(crypto_aead_decrypt(aead_req), &ctx->async_wait);
-
-       aead_request_free(aead_req);
        return ret;
 }
 
@@ -657,8 +649,132 @@ static struct sk_buff *tls_wait_data(struct sock *sk, int flags,
        return skb;
 }
 
+/* This function decrypts the input skb into either out_iov or out_sg,
+ * or into the skb buffers themselves. The input parameter 'zc' indicates
+ * whether zero-copy mode should be tried. With zero-copy mode, either
+ * out_iov or out_sg must be non-NULL. If both out_iov and out_sg are
+ * NULL, the decryption happens in the skb buffers themselves, i.e.
+ * zero-copy gets disabled and 'zc' is updated accordingly.
+ */
+
+static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
+                           struct iov_iter *out_iov,
+                           struct scatterlist *out_sg,
+                           int *chunk, bool *zc)
+{
+       struct tls_context *tls_ctx = tls_get_ctx(sk);
+       struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
+       struct strp_msg *rxm = strp_msg(skb);
+       int n_sgin, n_sgout, nsg, mem_size, aead_size, err, pages = 0;
+       struct aead_request *aead_req;
+       struct sk_buff *unused;
+       u8 *aad, *iv, *mem = NULL;
+       struct scatterlist *sgin = NULL;
+       struct scatterlist *sgout = NULL;
+       const int data_len = rxm->full_len - tls_ctx->rx.overhead_size;
+
+       if (*zc && (out_iov || out_sg)) {
+               if (out_iov)
+                       n_sgout = iov_iter_npages(out_iov, INT_MAX) + 1;
+               else
+                       n_sgout = sg_nents(out_sg);
+       } else {
+               n_sgout = 0;
+               *zc = false;
+       }
+
+       n_sgin = skb_cow_data(skb, 0, &unused);
+       if (n_sgin < 1)
+               return -EBADMSG;
+
+       /* Increment to accommodate AAD */
+       n_sgin = n_sgin + 1;
+
+       nsg = n_sgin + n_sgout;
+
+       aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
+       mem_size = aead_size + (nsg * sizeof(struct scatterlist));
+       mem_size = mem_size + TLS_AAD_SPACE_SIZE;
+       mem_size = mem_size + crypto_aead_ivsize(ctx->aead_recv);
+
+       /* Allocate a single block of memory which contains
+        * aead_req || sgin[] || sgout[] || aad || iv.
+        * This order achieves correct alignment for aead_req, sgin, sgout.
+        */
+       mem = kmalloc(mem_size, sk->sk_allocation);
+       if (!mem)
+               return -ENOMEM;
+
+       /* Segment the allocated memory */
+       aead_req = (struct aead_request *)mem;
+       sgin = (struct scatterlist *)(mem + aead_size);
+       sgout = sgin + n_sgin;
+       aad = (u8 *)(sgout + n_sgout);
+       iv = aad + TLS_AAD_SPACE_SIZE;
+
+       /* Prepare IV */
+       err = skb_copy_bits(skb, rxm->offset + TLS_HEADER_SIZE,
+                           iv + TLS_CIPHER_AES_GCM_128_SALT_SIZE,
+                           tls_ctx->rx.iv_size);
+       if (err < 0) {
+               kfree(mem);
+               return err;
+       }
+       memcpy(iv, tls_ctx->rx.iv, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
+
+       /* Prepare AAD */
+       tls_make_aad(aad, rxm->full_len - tls_ctx->rx.overhead_size,
+                    tls_ctx->rx.rec_seq, tls_ctx->rx.rec_seq_size,
+                    ctx->control);
+
+       /* Prepare sgin */
+       sg_init_table(sgin, n_sgin);
+       sg_set_buf(&sgin[0], aad, TLS_AAD_SPACE_SIZE);
+       err = skb_to_sgvec(skb, &sgin[1],
+                          rxm->offset + tls_ctx->rx.prepend_size,
+                          rxm->full_len - tls_ctx->rx.prepend_size);
+       if (err < 0) {
+               kfree(mem);
+               return err;
+       }
+
+       if (n_sgout) {
+               if (out_iov) {
+                       sg_init_table(sgout, n_sgout);
+                       sg_set_buf(&sgout[0], aad, TLS_AAD_SPACE_SIZE);
+
+                       *chunk = 0;
+                       err = zerocopy_from_iter(sk, out_iov, data_len, &pages,
+                                                chunk, &sgout[1],
+                                                (n_sgout - 1), false);
+                       if (err < 0)
+                               goto fallback_to_reg_recv;
+               } else if (out_sg) {
+                       memcpy(sgout, out_sg, n_sgout * sizeof(*sgout));
+               } else {
+                       goto fallback_to_reg_recv;
+               }
+       } else {
+fallback_to_reg_recv:
+               sgout = sgin;
+               pages = 0;
+               *chunk = 0;
+               *zc = false;
+       }
+
+       /* Prepare and submit AEAD request */
+       err = tls_do_decryption(sk, sgin, sgout, iv, data_len, aead_req);
+
+       /* Release the pages in case iov was mapped to pages */
+       for (; pages > 0; pages--)
+               put_page(sg_page(&sgout[pages]));
+
+       kfree(mem);
+       return err;
+}
+
 static int decrypt_skb_update(struct sock *sk, struct sk_buff *skb,
-                             struct scatterlist *sgout, bool *zc)
+                             struct iov_iter *dest, int *chunk, bool *zc)
 {
        struct tls_context *tls_ctx = tls_get_ctx(sk);
        struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
@@ -671,7 +787,7 @@ static int decrypt_skb_update(struct sock *sk, struct sk_buff *skb,
                return err;
 #endif
        if (!ctx->decrypted) {
-               err = decrypt_skb(sk, skb, sgout);
+               err = decrypt_internal(sk, skb, dest, NULL, chunk, zc);
                if (err < 0)
                        return err;
        } else {
@@ -690,54 +806,10 @@ static int decrypt_skb_update(struct sock *sk, struct sk_buff *skb,
 int decrypt_skb(struct sock *sk, struct sk_buff *skb,
                struct scatterlist *sgout)
 {
-       struct tls_context *tls_ctx = tls_get_ctx(sk);
-       struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
-       char iv[TLS_CIPHER_AES_GCM_128_SALT_SIZE + MAX_IV_SIZE];
-       struct scatterlist sgin_arr[MAX_SKB_FRAGS + 2];
-       struct scatterlist *sgin = &sgin_arr[0];
-       struct strp_msg *rxm = strp_msg(skb);
-       int ret, nsg = ARRAY_SIZE(sgin_arr);
-       struct sk_buff *unused;
-
-       ret = skb_copy_bits(skb, rxm->offset + TLS_HEADER_SIZE,
-                           iv + TLS_CIPHER_AES_GCM_128_SALT_SIZE,
-                           tls_ctx->rx.iv_size);
-       if (ret < 0)
-               return ret;
-
-       memcpy(iv, tls_ctx->rx.iv, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
-       if (!sgout) {
-               nsg = skb_cow_data(skb, 0, &unused) + 1;
-               sgin = kmalloc_array(nsg, sizeof(*sgin), sk->sk_allocation);
-               sgout = sgin;
-       }
-
-       sg_init_table(sgin, nsg);
-       sg_set_buf(&sgin[0], ctx->rx_aad_ciphertext, TLS_AAD_SPACE_SIZE);
-
-       nsg = skb_to_sgvec(skb, &sgin[1],
-                          rxm->offset + tls_ctx->rx.prepend_size,
-                          rxm->full_len - tls_ctx->rx.prepend_size);
-       if (nsg < 0) {
-               ret = nsg;
-               goto out;
-       }
-
-       tls_make_aad(ctx->rx_aad_ciphertext,
-                    rxm->full_len - tls_ctx->rx.overhead_size,
-                    tls_ctx->rx.rec_seq,
-                    tls_ctx->rx.rec_seq_size,
-                    ctx->control);
-
-       ret = tls_do_decryption(sk, sgin, sgout, iv,
-                               rxm->full_len - tls_ctx->rx.overhead_size,
-                               skb, sk->sk_allocation);
-
-out:
-       if (sgin != &sgin_arr[0])
-               kfree(sgin);
+       bool zc = true;
+       int chunk;
 
-       return ret;
+       return decrypt_internal(sk, skb, NULL, sgout, &chunk, &zc);
 }
 
 static bool tls_sw_advance_skb(struct sock *sk, struct sk_buff *skb,
@@ -816,43 +888,17 @@ int tls_sw_recvmsg(struct sock *sk,
                }
 
                if (!ctx->decrypted) {
-                       int page_count;
-                       int to_copy;
-
-                       page_count = iov_iter_npages(&msg->msg_iter,
-                                                    MAX_SKB_FRAGS);
-                       to_copy = rxm->full_len - tls_ctx->rx.overhead_size;
-                       if (!is_kvec && to_copy <= len && page_count < MAX_SKB_FRAGS &&
-                           likely(!(flags & MSG_PEEK)))  {
-                               struct scatterlist sgin[MAX_SKB_FRAGS + 1];
-                               int pages = 0;
+                       int to_copy = rxm->full_len - tls_ctx->rx.overhead_size;
 
+                       if (!is_kvec && to_copy <= len &&
+                           likely(!(flags & MSG_PEEK)))
                                zc = true;
-                               sg_init_table(sgin, MAX_SKB_FRAGS + 1);
-                               sg_set_buf(&sgin[0], ctx->rx_aad_plaintext,
-                                          TLS_AAD_SPACE_SIZE);
-
-                               err = zerocopy_from_iter(sk, &msg->msg_iter,
-                                                        to_copy, &pages,
-                                                        &chunk, &sgin[1],
-                                                        MAX_SKB_FRAGS, false);
-                               if (err < 0)
-                                       goto fallback_to_reg_recv;
-
-                               err = decrypt_skb_update(sk, skb, sgin, &zc);
-                               for (; pages > 0; pages--)
-                                       put_page(sg_page(&sgin[pages]));
-                               if (err < 0) {
-                                       tls_err_abort(sk, EBADMSG);
-                                       goto recv_end;
-                               }
-                       } else {
-fallback_to_reg_recv:
-                               err = decrypt_skb_update(sk, skb, NULL, &zc);
-                               if (err < 0) {
-                                       tls_err_abort(sk, EBADMSG);
-                                       goto recv_end;
-                               }
+
+                       err = decrypt_skb_update(sk, skb, &msg->msg_iter,
+                                                &chunk, &zc);
+                       if (err < 0) {
+                               tls_err_abort(sk, EBADMSG);
+                               goto recv_end;
                        }
                        ctx->decrypted = true;
                }
@@ -903,7 +949,7 @@ ssize_t tls_sw_splice_read(struct socket *sock,  loff_t *ppos,
        int err = 0;
        long timeo;
        int chunk;
-       bool zc;
+       bool zc = false;
 
        lock_sock(sk);
 
@@ -920,7 +966,7 @@ ssize_t tls_sw_splice_read(struct socket *sock,  loff_t *ppos,
        }
 
        if (!ctx->decrypted) {
-               err = decrypt_skb_update(sk, skb, NULL, &zc);
+               err = decrypt_skb_update(sk, skb, NULL, &chunk, &zc);
 
                if (err < 0) {
                        tls_err_abort(sk, EBADMSG);
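The layout comment inside decrypt_internal() above describes one kmalloc() carved into aead_req || sgin[] || sgout[] || aad || iv, so a single kfree() tears the whole request down on every exit path. A stand-alone sketch of that carving pattern with stand-in types and hypothetical sizes (nothing below is the kernel API; it only mirrors the pointer arithmetic above):

#include <stdlib.h>

struct fake_aead_req { unsigned char opaque[64]; };           /* stand-in for aead_req + reqsize */
struct fake_sg       { void *page; unsigned int off, len; };  /* stand-in for scatterlist */

int main(void)
{
	int n_sgin = 3, n_sgout = 2;              /* hypothetical counts */
	size_t aead_size = sizeof(struct fake_aead_req);
	size_t aad_len = 13, iv_len = 12;         /* rough TLS AAD/IV sizes */
	size_t mem_size = aead_size +
			  (size_t)(n_sgin + n_sgout) * sizeof(struct fake_sg) +
			  aad_len + iv_len;

	unsigned char *mem = calloc(1, mem_size); /* one allocation ... */
	if (!mem)
		return 1;

	/* Segment the single buffer, in the same order as the kernel code. */
	struct fake_aead_req *req = (struct fake_aead_req *)mem;
	struct fake_sg *sgin  = (struct fake_sg *)(mem + aead_size);
	struct fake_sg *sgout = sgin + n_sgin;
	unsigned char *aad = (unsigned char *)(sgout + n_sgout);
	unsigned char *iv  = aad + aad_len;

	(void)req; (void)iv;
	free(mem);                                /* ... one free, as in the kernel path */
	return 0;
}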
index 081ef4bb4fe30b940524a5513da637d26739014a..a306d1c756227fa86187f1ae5932afab31d87df9 100644 (file)
@@ -15,7 +15,7 @@
 #include "bpf_helpers.h"
 #include "hash_func01.h"
 
-#define MAX_CPUS 12 /* WARNING - sync with _user.c */
+#define MAX_CPUS 64 /* WARNING - sync with _user.c */
 
 /* Special map type that can XDP_REDIRECT frames to another CPU */
 struct bpf_map_def SEC("maps") cpu_map = {
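The cpu_map definition is cut off by the hunk context above; for reference, a cpumap in these XDP samples is keyed by CPU index with a u32 queue-size value, roughly as follows (a sketch consistent with the comment and the MAX_CPUS bump, not a verbatim quote of the sample source):

struct bpf_map_def SEC("maps") cpu_map = {
	.type		= BPF_MAP_TYPE_CPUMAP,
	.key_size	= sizeof(u32),	/* CPU index */
	.value_size	= sizeof(u32),	/* per-CPU queue size */
	.max_entries	= MAX_CPUS,	/* 64 after this change; keep in sync with _user.c */
};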
index 007710d7c748871e615ebebb1d3d37dde8d0ac05..9a6c7e0a6dd1a7412f43e99da975ac64bc7223c6 100644 (file)
@@ -19,7 +19,7 @@ static const char *__doc__ =
 #include <arpa/inet.h>
 #include <linux/if_link.h>
 
-#define MAX_CPUS 12 /* WARNING - sync with _kern.c */
+#define MAX_CPUS 64 /* WARNING - sync with _kern.c */
 
 /* How many xdp_progs are defined in _kern.c */
 #define MAX_PROG 6
@@ -527,7 +527,7 @@ static void stress_cpumap(void)
         * procedure.
         */
        create_cpu_entry(1,  1024, 0, false);
-       create_cpu_entry(1,   128, 0, false);
+       create_cpu_entry(1,     8, 0, false);
        create_cpu_entry(1, 16000, 0, false);
 }
 
index b593b36ccff86910644c63346af8e497346ebab4..38b2b4818e8ebfbe9d19deb0aebef5fdbcc19d28 100644 (file)
@@ -14,10 +14,6 @@ ifdef CONFIG_UBSAN_ALIGNMENT
       CFLAGS_UBSAN += $(call cc-option, -fsanitize=alignment)
 endif
 
-ifdef CONFIG_UBSAN_NULL
-      CFLAGS_UBSAN += $(call cc-option, -fsanitize=null)
-endif
-
       # -fsanitize=* options makes GCC less smart than usual and
       # increase number of 'maybe-uninitialized false-positives
       CFLAGS_UBSAN += $(call cc-option, -Wno-maybe-uninitialized)
index 09ecf8162f7a8a8edd1947c64c17da9720179af9..cf94b0770522ffb57246fec6f3f4dd7eb9598ec2 100644 (file)
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+// SPDX-License-Identifier: LGPL-2.1
 /* Copyright (c) 2018 Facebook */
 
 #include <stdlib.h>
index 43c658ccfc2ba4061b244f32eb0cf3bfdeb54ba8..4897e0724d4ec6180e1262338ef6bd0e9fd72ab9 100644 (file)
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: LGPL-2.1 */
 /* Copyright (c) 2018 Facebook */
 
 #ifndef __BPF_BTF_H
index 9e78df207919366fbdd048f7645568ac701ad5f5..0c7d9e556b47d0a8129586a634a423e4ad1af067 100644 (file)
@@ -354,7 +354,7 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
                while (s->bytes_recvd < total_bytes) {
                        if (txmsg_cork) {
                                timeout.tv_sec = 0;
-                               timeout.tv_usec = 1000;
+                               timeout.tv_usec = 300000;
                        } else {
                                timeout.tv_sec = 1;
                                timeout.tv_usec = 0;
index 3a0336782d2d30902633b511e55fc44873cc7365..49a6f8c3fdae75cf95925a9e0a27a075f076d096 100644 (file)
@@ -17,6 +17,10 @@ REQUIREMENTS
 *  The kernel must have veth support available, as a veth pair is created
    prior to running the tests.
 
+*  The kernel must have the appropriate infrastructure enabled to run all tdc
+   unit tests. See the config file in this directory for the minimum required
+   features. As new tests are added, the config options list will be updated.
+
 *  All tc-related features being tested must be built in or available as
    modules.  To check what is required in current setup run:
    ./tdc.py -c
@@ -109,8 +113,8 @@ COMMAND LINE ARGUMENTS
 Run tdc.py -h to see the full list of available arguments.
 
 usage: tdc.py [-h] [-p PATH] [-D DIR [DIR ...]] [-f FILE [FILE ...]]
-              [-c [CATG [CATG ...]]] [-e ID [ID ...]] [-l] [-s] [-i] [-v]
-              [-d DEVICE] [-n NS] [-V]
+              [-c [CATG [CATG ...]]] [-e ID [ID ...]] [-l] [-s] [-i] [-v] [-N]
+              [-d DEVICE] [-P] [-n] [-V]
 
 Linux TC unit tests
 
@@ -118,8 +122,10 @@ optional arguments:
   -h, --help            show this help message and exit
   -p PATH, --path PATH  The full path to the tc executable to use
   -v, --verbose         Show the commands that are being run
+  -N, --notap           Suppress tap results for command under test
   -d DEVICE, --device DEVICE
                         Execute the test case in flower category
+  -P, --pause           Pause execution just before post-suite stage
 
 selection:
   select which test cases: files plus directories; filtered by categories
@@ -146,10 +152,10 @@ action:
   -i, --id              Generate ID numbers for new test cases
 
 netns:
-  options for nsPlugin(run commands in net namespace)
+  options for nsPlugin (run commands in net namespace)
 
-  -n NS, --namespace NS
-                        Run commands in namespace NS
+  -n, --namespace
+                        Run commands in namespace as specified in tdc_config.py
 
 valgrind:
   options for valgrindPlugin (run command under test under Valgrind)
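Putting the arguments above together, a typical invocation after these changes might look like the following (the tc path and device name are placeholders; -n now takes no value and uses the namespace configured in tdc_config.py):

    ./tdc.py -p /path/to/iproute2/tc/tc -n -N -v
    ./tdc.py -d eth0 -c flower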
diff --git a/tools/testing/selftests/tc-testing/config b/tools/testing/selftests/tc-testing/config
new file mode 100644 (file)
index 0000000..2033020
--- /dev/null
@@ -0,0 +1,48 @@
+CONFIG_NET_SCHED=y
+
+#
+# Queueing/Scheduling
+#
+CONFIG_NET_SCH_PRIO=m
+CONFIG_NET_SCH_INGRESS=m
+
+#
+# Classification
+#
+CONFIG_NET_CLS=y
+CONFIG_NET_CLS_FW=m
+CONFIG_NET_CLS_U32=m
+CONFIG_CLS_U32_PERF=y
+CONFIG_CLS_U32_MARK=y
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_STACK=32
+CONFIG_NET_EMATCH_CMP=m
+CONFIG_NET_EMATCH_NBYTE=m
+CONFIG_NET_EMATCH_U32=m
+CONFIG_NET_EMATCH_META=m
+CONFIG_NET_EMATCH_TEXT=m
+CONFIG_NET_EMATCH_IPSET=m
+CONFIG_NET_EMATCH_IPT=m
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_POLICE=m
+CONFIG_NET_ACT_GACT=m
+CONFIG_GACT_PROB=y
+CONFIG_NET_ACT_MIRRED=m
+CONFIG_NET_ACT_SAMPLE=m
+CONFIG_NET_ACT_IPT=m
+CONFIG_NET_ACT_NAT=m
+CONFIG_NET_ACT_PEDIT=m
+CONFIG_NET_ACT_SIMP=m
+CONFIG_NET_ACT_SKBEDIT=m
+CONFIG_NET_ACT_CSUM=m
+CONFIG_NET_ACT_VLAN=m
+CONFIG_NET_ACT_BPF=m
+CONFIG_NET_ACT_CONNMARK=m
+CONFIG_NET_ACT_SKBMOD=m
+CONFIG_NET_ACT_IFE=m
+CONFIG_NET_ACT_TUNNEL_KEY=m
+CONFIG_NET_IFE_SKBMARK=m
+CONFIG_NET_IFE_SKBPRIO=m
+CONFIG_NET_IFE_SKBTCINDEX=m
+CONFIG_NET_CLS_IND=y
+CONFIG_NET_SCH_FIFO=y