Merge tag 'ASB-2023-03-05_4.19-stable' of https://android.googlesource.com/kernel/common into android13-4.19-kona
https://source.android.com/docs/security/bulletin/2023-03-01
CVE-2021-33655

* tag 'ASB-2023-03-05_4.19-stable' of https://android.googlesource.com/kernel/common:
  Linux 4.19.275
  USB: core: Don't hold device lock while reading the "descriptors" sysfs file
  USB: serial: option: add support for VW/Skoda "Carstick LTE"
  dmaengine: sh: rcar-dmac: Check for error num after dma_set_max_seg_size
  vc_screen: don't clobber return value in vcs_read
  net: Remove WARN_ON_ONCE(sk->sk_forward_alloc) from sk_stream_kill_queues().
  IB/hfi1: Assign npages earlier
  btrfs: send: limit number of clones and allocated memory size
  ACPI: NFIT: fix a potential deadlock during NFIT teardown
  ARM: dts: rockchip: add power-domains property to dp node on rk3288
  UPSTREAM: selinux: check return value of sel_make_avc_files
  UPSTREAM: lib/test_meminit: destroy cache in kmem_cache_alloc_bulk() test
  UPSTREAM: wireguard: ratelimiter: use kvcalloc() instead of kvzalloc()
  UPSTREAM: wireguard: receive: drop handshakes if queue lock is contended
  UPSTREAM: wireguard: receive: use ring buffer for incoming handshakes
  UPSTREAM: wireguard: device: reset peer src endpoint when netns exits
  UPSTREAM: wireguard: selftests: actually test for routing loops
  UPSTREAM: kasan: fix tag for large allocations when using CONFIG_SLAB
  UPSTREAM: usb: musb: select GENERIC_PHY instead of depending on it
  UPSTREAM: driver core: Reject pointless SYNC_STATE_ONLY device links
  BACKPORT: PM: EM: Fix inefficient states detection
  UPSTREAM: cfg80211: scan: fix RCU in cfg80211_add_nontrans_list()
  UPSTREAM: thermal/core: Fix thermal_cooling_device_register() prototype
  UPSTREAM: PM: EM: Increase energy calculation precision
  UPSTREAM: lib/test_stackinit: Fix static initializer test
  BACKPORT: userfaultfd: do not untag user pointers
  UPSTREAM: net/xfrm/compat: Copy xfrm_spdattr_type_t atributes
  UPSTREAM: sched/uclamp: Ignore max aggregation if rq is idle
  UPSTREAM: net: xfrm: fix memory leak in xfrm_user_rcv_msg
  UPSTREAM: f2fs: Advertise encrypted casefolding in sysfs
  UPSTREAM: fuse: ignore PG_workingset after stealing
  BACKPORT: loop: Fix missing discard support when using LOOP_CONFIGURE
  BACKPORT: nvmem: core: add a missing of_node_put
  UPSTREAM: usb: typec: mux: Fix copy-paste mistake in typec_mux_match
  Linux 4.19.274
  bpf: add missing header file include
  ext4: Fix function prototype mismatch for ext4_feat_ktype
  wifi: mwifiex: Add missing compatible string for SD8787
  uaccess: Add speculation barrier to copy_from_user()
  mac80211: mesh: embedd mesh_paths and mpp_paths into ieee80211_if_mesh
  drm/i915/gvt: fix double free bug in split_2MB_gtt_entry
  alarmtimer: Prevent starvation by small intervals and SIG_IGN
  powerpc: dts: t208x: Disable 10G on MAC1 and MAC2
  can: kvaser_usb: hydra: help gcc-13 to figure out cmd_len
  random: always mix cycle counter in add_latent_entropy()
  powerpc: dts: t208x: Mark MAC1 and MAC2 as 10G
  wifi: rtl8xxxu: gen2: Turn on the rate control
  BACKPORT: fscrypt: fix derivation of SipHash keys on big endian CPUs
  UPSTREAM: wireguard: allowedips: free empty intermediate nodes when removing single node
  BACKPORT: wireguard: allowedips: allocate nodes in kmem_cache
  Linux 4.19.273
  net: phy: meson-gxl: Add generic dummy stubs for MMD register access
  nilfs2: fix underflow in second superblock position calculations
  kvm: initialize all of the kvm_debugregs structure before sending it to userspace
  i40e: Add checking for null for nlmsg_find_attr()
  ipv6: Fix tcp socket connection with DSCP.
  ipv6: Fix datagram socket connection with DSCP.
  net: mpls: fix stale pointer if allocation fails during device rename
  net: stmmac: Restrict warning on disabling DMA store and fwd mode
  bnxt_en: Fix mqprio and XDP ring checking logic
  net: stmmac: fix order of dwmac5 FlexPPS parametrization sequence
  net/usb: kalmia: Don't pass act_len in usb_bulk_msg error path
  dccp/tcp: Avoid negative sk_forward_alloc by ipv6_pinfo.pktoptions.
  net: bgmac: fix BCM5358 support by setting correct flags
  i40e: add double of VLAN header when computing the max MTU
  revert "squashfs: harden sanity check in squashfs_read_xattr_id_table"
  hugetlb: check for undefined shift on 32 bit architectures
  ALSA: hda/realtek - fixed wrong gpio assigned
  ALSA: hda/conexant: add a new hda codec SN6180
  mmc: sdio: fix possible resource leaks in some error paths
  Revert "x86/fpu: Use _Alignof to avoid undefined behavior in TYPE_ALIGN"
  netfilter: nft_tproxy: restrict to prerouting hook
  aio: fix mremap after fork null-deref
  nvme-fc: fix a missing queue put in nvmet_fc_ls_create_association
  net/rose: Fix to not accept on connected socket
  tools/virtio: fix the vringh test for virtio ring changes
  ASoC: cs42l56: fix DT probe
  migrate: hugetlb: check for hugetlb shared PMD in node migration
  bpf: Always return target ifindex in bpf_fib_lookup
  arm64: dts: meson-axg: Make mmc host controller interrupts level-sensitive
  arm64: dts: meson-gx: Make mmc host controller interrupts level-sensitive
  riscv: Fixup race condition on PG_dcache_clean in flush_icache_pte
  usb: typec: altmodes/displayport: Fix probe pin assign check
  usb: core: add quirk for Alcor Link AK9563 smartcard reader
  net: USB: Fix wrong-direction WARNING in plusb.c
  pinctrl: intel: Restore the pins that used to be in Direct IRQ mode
  pinctrl: intel: Convert unsigned to unsigned int
  pinctrl: single: fix potential NULL dereference
  pinctrl: aspeed: Fix confusing types in return value
  ALSA: pci: lx6464es: fix a debug loop
  selftests: forwarding: lib: quote the sysctl values
  rds: rds_rm_zerocopy_callback() use list_first_entry()
  net: phy: meson-gxl: use MMD access dummy stubs for GXL, internal PHY
  net: phy: meson-gxl: add g12a support
  net: phy: add macros for PHYID matching
  IB/hfi1: Restore allocated resources on failed copyout
  ALSA: emux: Avoid potential array out-of-bound in snd_emux_xg_control()
  btrfs: limit device extents to the device size
  iio:adc:twl6030: Enable measurement of VAC
  thermal: intel: int340x: Add locking to int340x_thermal_get_trip_type()
  serial: 8250_dma: Fix DMA Rx rearm race
  serial: 8250_dma: Fix DMA Rx completion race
  Squashfs: fix handling and sanity checking of xattr_ids count
  mm/swapfile: add cond_resched() in get_swap_pages()
  mm: hugetlb: proc: check for hugetlb shared PMD in /proc/PID/smaps
  riscv: disable generation of unwind tables
  parisc: Wire up PTRACE_GETREGS/PTRACE_SETREGS for compat case
  parisc: Fix return code of pdc_iodc_print()
  iio:adc:twl6030: Enable measurements of VUSB, VBAT and others
  iio: adc: berlin2-adc: Add missing of_node_put() in error path
  iio: hid: fix the retval in accel_3d_capture_sample
  efi: Accept version 2 of memory attributes table
  watchdog: diag288_wdt: fix __diag288() inline assembly
  watchdog: diag288_wdt: do not use stack buffers for hardware data
  fbcon: Check font dimension limits
  thermal: intel: int340x: Protect trip temperature from concurrent updates
  KVM: x86/vmx: Do not skip segment attributes if unusable bit is set
  KVM: VMX: Move caching of MSR_IA32_XSS to hardware_setup()
  KVM: VMX: Move VMX specific files to a "vmx" subdirectory
  nVMX x86: Check VMX-preemption timer controls on vmentry of L2 guests
  Input: i8042 - add Clevo PCX0DX to i8042 quirk table
  Input: i8042 - add TUXEDO devices to i8042 quirk tables
  Input: i8042 - merge quirk tables
  Input: i8042 - move __initconst to fix code styling warning
  vc_screen: move load of struct vc_data pointer in vcs_read() to avoid UAF
  usb: gadget: f_fs: Fix unbalanced spinlock in __ffs_ep0_queue_wait
  usb: dwc3: qcom: enable vbus override when in OTG dr-mode
  usb: dwc3: dwc3-qcom: Fix typo in the dwc3 vbus override API
  iio: adc: stm32-dfsdm: fill module aliases
  net/x25: Fix to not accept on connected socket
  i2c: rk3x: fix a bunch of kernel-doc warnings
  scsi: iscsi_tcp: Fix UAF during login when accessing the shost ipaddress
  scsi: target: core: Fix warning on RT kernels
  net: openvswitch: fix flow memory leak in ovs_flow_cmd_new
  ata: libata: Fix sata_down_spd_limit() when no link speed is reported
  squashfs: harden sanity check in squashfs_read_xattr_id_table
  netrom: Fix use-after-free caused by accept on already connected socket
  ALSA: hda/via: Avoid potential array out-of-bound in add_secret_dac_path()
  bus: sunxi-rsb: Fix error handling in sunxi_rsb_init()
  firewire: fix memory leak for payload of request subaction to IEC 61883-1 FCP region
  UPSTREAM: wireguard: allowedips: remove nodes in O(1)
  UPSTREAM: wireguard: allowedips: initialize list head in selftest
  UPSTREAM: wireguard: use synchronize_net rather than synchronize_rcu
  UPSTREAM: wireguard: do not use -O3
  UPSTREAM: wireguard: selftests: make sure rp_filter is disabled on vethc
  BACKPORT: wireguard: selftests: remove old conntrack kconfig value
  BACKPORT: usb: typec: mux: Fix matching with typec_altmode_desc
  UPSTREAM: sched/uclamp: Fix locking around cpu_util_update_eff()
  UPSTREAM: sched/uclamp: Fix wrong implementation of cpu.uclamp.min
  UPSTREAM: usb: musb: Fix an error message
  UPSTREAM: arm64: doc: Add brk/mmap/mremap() to the Tagged Address ABI Exceptions
  BACKPORT: selinux: add proper NULL termination to the secclass_map permissions
  UPSTREAM: crypto: arm/curve25519 - Move '.fpu' after '.arch'
  UPSTREAM: libnvdimm/region: Fix nvdimm_has_flush() to handle ND_REGION_ASYNC
  UPSTREAM: of: property: fw_devlink: do not link ".*,nr-gpios"
  UPSTREAM: xfrm/compat: Cleanup WARN()s that can be user-triggered
  UPSTREAM: wireguard: selftests: test multiple parallel streams
  UPSTREAM: crypto: mips: add poly1305-core.S to .gitignore
  BACKPORT: arm64: kasan: fix page_alloc tagging with DEBUG_VIRTUAL
  UPSTREAM: crypto: mips/poly1305 - enable for all MIPS processors
  UPSTREAM: kbuild: do not include include/config/auto.conf from adjust_autoksyms.sh
  UPSTREAM: wireguard: kconfig: use arm chacha even with no neon
  UPSTREAM: wireguard: queueing: get rid of per-peer ring buffers
  UPSTREAM: wireguard: device: do not generate ICMP for non-IP packets
  BACKPORT: mac80211_hwsim: notify wmediumd of used MAC addresses
  BACKPORT: mac80211_hwsim: add concurrent channels scanning support over virtio
  BACKPORT: perf_event_open: switch to copy_struct_from_user()
  BACKPORT: sched_setattr: switch to copy_struct_from_user()

Conflicts:
	kernel/power/energy_model.c
	net/wireless/scan.c

Change-Id: I55c29a161fd214642259ddfb19fb749a416babb2
@@ -45,14 +45,24 @@ how the user addresses are used by the kernel:

 1. User addresses not accessed by the kernel but used for address space
    management (e.g. ``mprotect()``, ``madvise()``). The use of valid
-   tagged pointers in this context is allowed with the exception of
-   ``brk()``, ``mmap()`` and the ``new_address`` argument to
-   ``mremap()`` as these have the potential to alias with existing
-   user addresses.
-
-   NOTE: This behaviour changed in v5.6 and so some earlier kernels may
-   incorrectly accept valid tagged pointers for the ``brk()``,
-   ``mmap()`` and ``mremap()`` system calls.
+   tagged pointers in this context is allowed with these exceptions:
+
+   - ``brk()``, ``mmap()`` and the ``new_address`` argument to
+     ``mremap()`` as these have the potential to alias with existing
+     user addresses.
+
+     NOTE: This behaviour changed in v5.6 and so some earlier kernels may
+     incorrectly accept valid tagged pointers for the ``brk()``,
+     ``mmap()`` and ``mremap()`` system calls.
+
+   - The ``range.start``, ``start`` and ``dst`` arguments to the
+     ``UFFDIO_*`` ``ioctl()``s used on a file descriptor obtained from
+     ``userfaultfd()``, as fault addresses subsequently obtained by reading
+     the file descriptor will be untagged, which may otherwise confuse
+     tag-unaware programs.
+
+     NOTE: This behaviour changed in v5.14 and so some earlier kernels may
+     incorrectly accept valid tagged pointers for this system call.

 2. User addresses accessed by the kernel (e.g. ``write()``). This ABI
    relaxation is disabled by default and the application thread needs to
@@ -113,6 +123,12 @@ ABI relaxation:

 - ``shmat()`` and ``shmdt()``.

+- ``brk()`` (since kernel v5.6).
+
+- ``mmap()`` (since kernel v5.6).
+
+- ``mremap()``, the ``new_address`` argument (since kernel v5.6).
+
 Any attempt to use non-zero tagged pointers may result in an error code
 being returned, a (fatal) signal being raised, or other modes of
 failure.
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 272
+SUBLEVEL = 275
 EXTRAVERSION =
 NAME = "People's Front"
@@ -1172,6 +1172,7 @@
 			clock-names = "dp", "pclk";
 			phys = <&edp_phy>;
 			phy-names = "dp";
+			power-domains = <&power RK3288_PD_VIO>;
 			resets = <&cru SRST_EDP>;
 			reset-names = "dp";
 			rockchip,grf = <&grf>;
@@ -10,8 +10,8 @@
 #include <linux/linkage.h>

 .text
-.fpu		neon
 .arch		armv7-a
+.fpu		neon
 .align		4

 ENTRY(curve25519_neon)
@@ -167,7 +167,7 @@
 		sd_emmc_b: sd@5000 {
 			compatible = "amlogic,meson-axg-mmc";
 			reg = <0x0 0x5000 0x0 0x800>;
-			interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>;
+			interrupts = <GIC_SPI 217 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 			clocks = <&clkc CLKID_SD_EMMC_B>,
 				 <&clkc CLKID_SD_EMMC_B_CLK0>,
@@ -179,7 +179,7 @@
 		sd_emmc_c: mmc@7000 {
 			compatible = "amlogic,meson-axg-mmc";
 			reg = <0x0 0x7000 0x0 0x800>;
-			interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>;
+			interrupts = <GIC_SPI 218 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 			clocks = <&clkc CLKID_SD_EMMC_C>,
 				 <&clkc CLKID_SD_EMMC_C_CLK0>,
@@ -470,21 +470,21 @@
 		sd_emmc_a: mmc@70000 {
 			compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
 			reg = <0x0 0x70000 0x0 0x800>;
-			interrupts = <GIC_SPI 216 IRQ_TYPE_EDGE_RISING>;
+			interrupts = <GIC_SPI 216 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};

 		sd_emmc_b: mmc@72000 {
 			compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
 			reg = <0x0 0x72000 0x0 0x800>;
-			interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>;
+			interrupts = <GIC_SPI 217 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};

 		sd_emmc_c: mmc@74000 {
 			compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
 			reg = <0x0 0x74000 0x0 0x800>;
-			interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>;
+			interrupts = <GIC_SPI 218 IRQ_TYPE_LEVEL_HIGH>;
 			status = "disabled";
 		};
 	};
@@ -337,6 +337,11 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)

 #ifndef CONFIG_SPARSEMEM_VMEMMAP
+#define page_to_virt(x)	({						\
+	__typeof__(x) __page = x;					\
+	void *__addr = __va(page_to_phys(__page));			\
+	(void *)__tag_set((const void *)__addr, page_kasan_tag(__page));\
+})
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
 #define _virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 #else
arch/mips/crypto/.gitignore (new file)
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+poly1305-core.S
@@ -12,8 +12,8 @@ AFLAGS_chacha-core.o += -O2 # needed to fill branch delay slots
 obj-$(CONFIG_CRYPTO_POLY1305_MIPS) += poly1305-mips.o
 poly1305-mips-y := poly1305-core.o poly1305-glue.o

-perlasm-flavour-$(CONFIG_CPU_MIPS32) := o32
-perlasm-flavour-$(CONFIG_CPU_MIPS64) := 64
+perlasm-flavour-$(CONFIG_32BIT) := o32
+perlasm-flavour-$(CONFIG_64BIT) := 64

 quiet_cmd_perlasm = PERLASM $@
       cmd_perlasm = $(PERL) $(<) $(perlasm-flavour-y) $(@)
@@ -1197,7 +1197,7 @@ static char __attribute__((aligned(64))) iodc_dbuf[4096];
  */
 int pdc_iodc_print(const unsigned char *str, unsigned count)
 {
-	unsigned int i;
+	unsigned int i, found = 0;
 	unsigned long flags;

 	for (i = 0; i < count;) {
@@ -1206,6 +1206,7 @@ int pdc_iodc_print(const unsigned char *str, unsigned count)
 			iodc_dbuf[i+0] = '\r';
 			iodc_dbuf[i+1] = '\n';
 			i += 2;
+			found = 1;
 			goto print;
 		default:
 			iodc_dbuf[i] = str[i];
@@ -1222,7 +1223,7 @@ int pdc_iodc_print(const unsigned char *str, unsigned count)
 			__pa(iodc_retbuf), 0, __pa(iodc_dbuf), i, 0);
 	spin_unlock_irqrestore(&pdc_lock, flags);

-	return i;
+	return i - found;
 }

 #if !defined(BOOTLOADER)
@@ -128,6 +128,12 @@ long arch_ptrace(struct task_struct *child, long request,
 	unsigned long tmp;
 	long ret = -EIO;

+	unsigned long user_regs_struct_size = sizeof(struct user_regs_struct);
+#ifdef CONFIG_64BIT
+	if (is_compat_task())
+		user_regs_struct_size /= 2;
+#endif
+
 	switch (request) {

 	/* Read the word at location addr in the USER area.  For ptraced
@@ -183,14 +189,14 @@ long arch_ptrace(struct task_struct *child, long request,
 		return copy_regset_to_user(child,
 					   task_user_regset_view(current),
 					   REGSET_GENERAL,
-					   0, sizeof(struct user_regs_struct),
+					   0, user_regs_struct_size,
 					   datap);

 	case PTRACE_SETREGS:	/* Set all gp regs in the child. */
 		return copy_regset_from_user(child,
 					     task_user_regset_view(current),
 					     REGSET_GENERAL,
-					     0, sizeof(struct user_regs_struct),
+					     0, user_regs_struct_size,
 					     datap);

 	case PTRACE_GETFPREGS:	/* Get the child FPU state. */
@@ -304,6 +310,11 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
 			}
 		}
 		break;
+	case PTRACE_GETREGS:
+	case PTRACE_SETREGS:
+	case PTRACE_GETFPREGS:
+	case PTRACE_SETFPREGS:
+		return arch_ptrace(child, request, addr, data);

 	default:
 		ret = compat_ptrace_request(child, request, addr, data);
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-2.dtsi (new file)
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
+/*
+ * QorIQ FMan v3 10g port #2 device tree stub [ controller @ offset 0x400000 ]
+ *
+ * Copyright 2022 Sean Anderson <sean.anderson@seco.com>
+ * Copyright 2012 - 2015 Freescale Semiconductor Inc.
+ */
+
+fman@400000 {
+	fman0_rx_0x08: port@88000 {
+		cell-index = <0x8>;
+		compatible = "fsl,fman-v3-port-rx";
+		reg = <0x88000 0x1000>;
+		fsl,fman-10g-port;
+	};
+
+	fman0_tx_0x28: port@a8000 {
+		cell-index = <0x28>;
+		compatible = "fsl,fman-v3-port-tx";
+		reg = <0xa8000 0x1000>;
+		fsl,fman-10g-port;
+	};
+
+	ethernet@e0000 {
+		cell-index = <0>;
+		compatible = "fsl,fman-memac";
+		reg = <0xe0000 0x1000>;
+		fsl,fman-ports = <&fman0_rx_0x08 &fman0_tx_0x28>;
+		ptp-timer = <&ptp_timer0>;
+		pcsphy-handle = <&pcsphy0>;
+	};
+
+	mdio@e1000 {
+		#address-cells = <1>;
+		#size-cells = <0>;
+		compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
+		reg = <0xe1000 0x1000>;
+		fsl,erratum-a011043; /* must ignore read errors */
+
+		pcsphy0: ethernet-phy@0 {
+			reg = <0x0>;
+		};
+	};
+};
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-3.dtsi (new file)
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
+/*
+ * QorIQ FMan v3 10g port #3 device tree stub [ controller @ offset 0x400000 ]
+ *
+ * Copyright 2022 Sean Anderson <sean.anderson@seco.com>
+ * Copyright 2012 - 2015 Freescale Semiconductor Inc.
+ */
+
+fman@400000 {
+	fman0_rx_0x09: port@89000 {
+		cell-index = <0x9>;
+		compatible = "fsl,fman-v3-port-rx";
+		reg = <0x89000 0x1000>;
+		fsl,fman-10g-port;
+	};
+
+	fman0_tx_0x29: port@a9000 {
+		cell-index = <0x29>;
+		compatible = "fsl,fman-v3-port-tx";
+		reg = <0xa9000 0x1000>;
+		fsl,fman-10g-port;
+	};
+
+	ethernet@e2000 {
+		cell-index = <1>;
+		compatible = "fsl,fman-memac";
+		reg = <0xe2000 0x1000>;
+		fsl,fman-ports = <&fman0_rx_0x09 &fman0_tx_0x29>;
+		ptp-timer = <&ptp_timer0>;
+		pcsphy-handle = <&pcsphy1>;
+	};
+
+	mdio@e3000 {
+		#address-cells = <1>;
+		#size-cells = <0>;
+		compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
+		reg = <0xe3000 0x1000>;
+		fsl,erratum-a011043; /* must ignore read errors */
+
+		pcsphy1: ethernet-phy@0 {
+			reg = <0x0>;
+		};
+	};
+};
@@ -631,8 +631,8 @@
 /include/ "qoriq-bman1.dtsi"

 /include/ "qoriq-fman3-0.dtsi"
-/include/ "qoriq-fman3-0-1g-0.dtsi"
-/include/ "qoriq-fman3-0-1g-1.dtsi"
+/include/ "qoriq-fman3-0-10g-2.dtsi"
+/include/ "qoriq-fman3-0-10g-3.dtsi"
 /include/ "qoriq-fman3-0-1g-2.dtsi"
 /include/ "qoriq-fman3-0-1g-3.dtsi"
 /include/ "qoriq-fman3-0-1g-4.dtsi"
@@ -681,3 +681,19 @@
 		interrupts = <16 2 1 9>;
 	};
 };
+
+&fman0_rx_0x08 {
+	/delete-property/ fsl,fman-10g-port;
+};
+
+&fman0_tx_0x28 {
+	/delete-property/ fsl,fman-10g-port;
+};
+
+&fman0_rx_0x09 {
+	/delete-property/ fsl,fman-10g-port;
+};
+
+&fman0_tx_0x29 {
+	/delete-property/ fsl,fman-10g-port;
+};
@@ -72,6 +72,9 @@ ifeq ($(CONFIG_MODULE_SECTIONS),y)
 	KBUILD_LDFLAGS_MODULE += -T $(srctree)/arch/riscv/kernel/module.lds
 endif

+# Avoid generating .eh_frame sections.
+KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
+
 KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax)
 KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
@@ -18,6 +18,8 @@ void flush_icache_pte(pte_t pte)
 {
 	struct page *page = pte_page(pte);

-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+	if (!test_bit(PG_dcache_clean, &page->flags)) {
 		flush_icache_all();
+		set_bit(PG_dcache_clean, &page->flags);
+	}
 }
@@ -138,6 +138,9 @@ static void __init fpu__init_system_generic(void)
 unsigned int fpu_kernel_xstate_size;
 EXPORT_SYMBOL_GPL(fpu_kernel_xstate_size);

+/* Get alignment of the TYPE. */
+#define TYPE_ALIGN(TYPE) offsetof(struct { char x; TYPE test; }, test)
+
 /*
  * Enforce that 'MEMBER' is the last field of 'TYPE'.
  *
@@ -145,8 +148,8 @@ EXPORT_SYMBOL_GPL(fpu_kernel_xstate_size);
  * because that's how C aligns structs.
  */
 #define CHECK_MEMBER_AT_END_OF(TYPE, MEMBER) \
-	BUILD_BUG_ON(sizeof(TYPE) != \
-		     ALIGN(offsetofend(TYPE, MEMBER), _Alignof(TYPE)))
+	BUILD_BUG_ON(sizeof(TYPE) != ALIGN(offsetofend(TYPE, MEMBER), \
+					   TYPE_ALIGN(TYPE)))

 /*
  * We append the 'struct fpu' to the task_struct:
@@ -16,7 +16,7 @@ kvm-y += x86.o mmu.o emulate.o i8259.o irq.o lapic.o \
 			   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
 			   hyperv.o page_track.o debugfs.o

-kvm-intel-y += vmx.o pmu_intel.o
+kvm-intel-y += vmx/vmx.o vmx/pmu_intel.o
 kvm-amd-y += svm.o pmu_amd.o

 obj-$(CONFIG_KVM) += kvm.o
@@ -2062,6 +2062,12 @@ static inline bool nested_cpu_has_shadow_vmcs(struct vmcs12 *vmcs12)
 	return nested_cpu_has2(vmcs12, SECONDARY_EXEC_SHADOW_VMCS);
 }

+static inline bool nested_cpu_has_save_preemption_timer(struct vmcs12 *vmcs12)
+{
+	return vmcs12->vm_exit_controls &
+	    VM_EXIT_SAVE_VMX_PREEMPTION_TIMER;
+}
+
 static inline bool is_nmi(u32 intr_info)
 {
 	return (intr_info & (INTR_INFO_INTR_TYPE_MASK | INTR_INFO_VALID_MASK))
@@ -4734,9 +4740,6 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
 		}
 	}

-	if (boot_cpu_has(X86_FEATURE_XSAVES))
-		rdmsrl(MSR_IA32_XSS, host_xss);
-
 	return 0;
 }
@@ -5518,18 +5521,15 @@ static u32 vmx_segment_access_rights(struct kvm_segment *var)
 {
 	u32 ar;

-	if (var->unusable || !var->present)
-		ar = 1 << 16;
-	else {
-		ar = var->type & 15;
-		ar |= (var->s & 1) << 4;
-		ar |= (var->dpl & 3) << 5;
-		ar |= (var->present & 1) << 7;
-		ar |= (var->avl & 1) << 12;
-		ar |= (var->l & 1) << 13;
-		ar |= (var->db & 1) << 14;
-		ar |= (var->g & 1) << 15;
-	}
+	ar = var->type & 15;
+	ar |= (var->s & 1) << 4;
+	ar |= (var->dpl & 3) << 5;
+	ar |= (var->present & 1) << 7;
+	ar |= (var->avl & 1) << 12;
+	ar |= (var->l & 1) << 13;
+	ar |= (var->db & 1) << 14;
+	ar |= (var->g & 1) << 15;
+	ar |= (var->unusable || !var->present) << 16;

 	return ar;
 }
@@ -7951,6 +7951,9 @@ static __init int hardware_setup(void)
 		WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
 	}

+	if (boot_cpu_has(X86_FEATURE_XSAVES))
+		rdmsrl(MSR_IA32_XSS, host_xss);
+
 	if (!cpu_has_vmx_vpid() || !cpu_has_vmx_invvpid() ||
 	    !(cpu_has_vmx_invvpid_single() || cpu_has_vmx_invvpid_global()))
 		enable_vpid = 0;
@@ -12609,6 +12612,10 @@ static int check_vmentry_prereqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 	if (nested_vmx_check_msr_switch_controls(vcpu, vmcs12))
 		return VMXERR_ENTRY_INVALID_CONTROL_FIELD;

+	if (!nested_cpu_has_preemption_timer(vmcs12) &&
+	    nested_cpu_has_save_preemption_timer(vmcs12))
+		return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
+
 	if (nested_vmx_check_pml_controls(vcpu, vmcs12))
 		return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
@@ -3637,12 +3637,11 @@ static void kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
 {
 	unsigned long val;

+	memset(dbgregs, 0, sizeof(*dbgregs));
 	memcpy(dbgregs->db, vcpu->arch.db, sizeof(vcpu->arch.db));
 	kvm_get_dr(vcpu, 6, &val);
 	dbgregs->dr6 = val;
 	dbgregs->dr7 = vcpu->arch.dr7;
-	dbgregs->flags = 0;
-	memset(&dbgregs->reserved, 0, sizeof(dbgregs->reserved));
 }

 static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
@@ -758,7 +758,7 @@ config CRYPTO_POLY1305_X86_64

 config CRYPTO_POLY1305_MIPS
 	tristate "Poly1305 authenticator algorithm (MIPS optimized)"
-	depends on CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
+	depends on MIPS
 	select CRYPTO_ARCH_HAVE_LIB_POLY1305

 config CRYPTO_MD4
@@ -3442,8 +3442,8 @@ void acpi_nfit_shutdown(void *data)

 	mutex_lock(&acpi_desc->init_mutex);
 	set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
-	cancel_delayed_work_sync(&acpi_desc->dwork);
 	mutex_unlock(&acpi_desc->init_mutex);
+	cancel_delayed_work_sync(&acpi_desc->dwork);

 	/*
 	 * Bounce the nvdimm bus lock to make sure any in-flight
@@ -3112,7 +3112,7 @@ int sata_down_spd_limit(struct ata_link *link, u32 spd_limit)
 	 */
 	if (spd > 1)
 		mask &= (1 << (spd - 1)) - 1;
-	else
+	else if (link->sata_spd)
 		return -EINVAL;

 	/* were we already at the bottom? */
@@ -306,7 +306,8 @@ struct device_link *device_link_add(struct device *consumer,
 {
 	struct device_link *link;

-	if (!consumer || !supplier || flags & ~DL_ADD_VALID_FLAGS ||
+	if (!consumer || !supplier || consumer == supplier ||
+	    flags & ~DL_ADD_VALID_FLAGS ||
 	    (flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) ||
 	    (flags & DL_FLAG_SYNC_STATE_ONLY &&
 	     flags != DL_FLAG_SYNC_STATE_ONLY) ||
@@ -1131,6 +1131,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	blk_queue_physical_block_size(lo->lo_queue, bsize);
 	blk_queue_io_min(lo->lo_queue, bsize);

+	loop_config_discard(lo);
 	loop_update_dio(lo);
 	loop_sysfs_init(lo);
@@ -783,7 +783,13 @@ static int __init sunxi_rsb_init(void)
 		return ret;
 	}

-	return platform_driver_register(&sunxi_rsb_driver);
+	ret = platform_driver_register(&sunxi_rsb_driver);
+	if (ret) {
+		bus_unregister(&sunxi_rsb_bus);
+		return ret;
+	}
+
+	return 0;
 }
 module_init(sunxi_rsb_init);
@@ -1816,7 +1816,10 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 	dmac->dev = &pdev->dev;
 	platform_set_drvdata(pdev, dmac);
 	dmac->dev->dma_parms = &dmac->parms;
-	dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
+	ret = dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
+	if (ret)
+		return ret;
+
 	ret = dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
 	if (ret)
 		return ret;
@@ -831,8 +831,10 @@ static int ioctl_send_response(struct client *client, union ioctl_arg *arg)

 	r = container_of(resource, struct inbound_transaction_resource,
 			 resource);
-	if (is_fcp_request(r->request))
+	if (is_fcp_request(r->request)) {
+		kfree(r->data);
 		goto out;
+	}

 	if (a->length != fw_get_response_length(r->request)) {
 		ret = -EINVAL;
@@ -35,7 +35,7 @@ int __init efi_memattr_init(void)
 		return -ENOMEM;
 	}

-	if (tbl->version > 1) {
+	if (tbl->version > 2) {
 		pr_warn("Unexpected EFI Memory Attributes table version %d\n",
 			tbl->version);
 		goto unmap;
@@ -1155,10 +1155,8 @@ static int split_2MB_gtt_entry(struct intel_vgpu *vgpu,
 	for_each_shadow_entry(sub_spt, &sub_se, sub_index) {
 		ret = intel_gvt_hypervisor_dma_map_guest_page(vgpu,
 				start_gfn + sub_index, PAGE_SIZE, &dma_addr);
-		if (ret) {
-			ppgtt_invalidate_spt(spt);
-			return ret;
-		}
+		if (ret)
+			goto err;
 		sub_se.val64 = se->val64;

 		/* Copy the PAT field from PDE. */
@@ -1177,6 +1175,17 @@ static int split_2MB_gtt_entry(struct intel_vgpu *vgpu,
 	ops->set_pfn(se, sub_spt->shadow_page.mfn);
 	ppgtt_set_shadow_entry(spt, se, index);
 	return 0;
+err:
+	/* Cancel the existing addess mappings of DMA addr. */
+	for_each_present_shadow_entry(sub_spt, &sub_se, sub_index) {
+		gvt_vdbg_mm("invalidate 4K entry\n");
+		ppgtt_invalidate_pte(sub_spt, &sub_se);
+	}
+	/* Release the new allocated spt. */
+	trace_spt_change(sub_spt->vgpu->id, "release", sub_spt,
+		sub_spt->guest_page.gfn, sub_spt->shadow_page.type);
+	ppgtt_free_spt(sub_spt);
+	return ret;
 }

 static int split_64KB_gtt_entry(struct intel_vgpu *vgpu,
@@ -82,7 +82,7 @@ enum {
 #define DEFAULT_SCL_RATE  (100 * 1000) /* Hz */

 /**
- * struct i2c_spec_values:
+ * struct i2c_spec_values - I2C specification values for various modes
  * @min_hold_start_ns: min hold time (repeated) START condition
  * @min_low_ns: min LOW period of the SCL clock
  * @min_high_ns: min HIGH period of the SCL clock
@@ -138,7 +138,7 @@ static const struct i2c_spec_values fast_mode_plus_spec = {
 };

 /**
- * struct rk3x_i2c_calced_timings:
+ * struct rk3x_i2c_calced_timings - calculated V1 timings
  * @div_low: Divider output for low
  * @div_high: Divider output for high
  * @tuning: Used to adjust setup/hold data time,
@@ -161,7 +161,7 @@ enum rk3x_i2c_state {
 };

 /**
- * struct rk3x_i2c_soc_data:
+ * struct rk3x_i2c_soc_data - SOC-specific data
  * @grf_offset: offset inside the grf regmap for setting the i2c type
  * @calc_timings: Callback function for i2c timing information calculated
  */
@@ -241,7 +241,8 @@ static inline void rk3x_i2c_clean_ipd(struct rk3x_i2c *i2c)
 }

 /**
- * Generate a START condition, which triggers a REG_INT_START interrupt.
+ * rk3x_i2c_start - Generate a START condition, which triggers a REG_INT_START interrupt.
+ * @i2c: target controller data
  */
 static void rk3x_i2c_start(struct rk3x_i2c *i2c)
 {
@@ -260,8 +261,8 @@ static void rk3x_i2c_start(struct rk3x_i2c *i2c)
 }

 /**
- * Generate a STOP condition, which triggers a REG_INT_STOP interrupt.
- *
+ * rk3x_i2c_stop - Generate a STOP condition, which triggers a REG_INT_STOP interrupt.
+ * @i2c: target controller data
  * @error: Error code to return in rk3x_i2c_xfer
  */
 static void rk3x_i2c_stop(struct rk3x_i2c *i2c, int error)
@@ -300,7 +301,8 @@ static void rk3x_i2c_stop(struct rk3x_i2c *i2c, int error)
 }

 /**
- * Setup a read according to i2c->msg
+ * rk3x_i2c_prepare_read - Setup a read according to i2c->msg
+ * @i2c: target controller data
  */
 static void rk3x_i2c_prepare_read(struct rk3x_i2c *i2c)
 {
@@ -331,7 +333,8 @@ static void rk3x_i2c_prepare_read(struct rk3x_i2c *i2c)
|
||||
}
|
||||
|
||||
/**
|
||||
* Fill the transmit buffer with data from i2c->msg
|
||||
* rk3x_i2c_fill_transmit_buf - Fill the transmit buffer with data from i2c->msg
|
||||
* @i2c: target controller data
|
||||
*/
|
||||
static void rk3x_i2c_fill_transmit_buf(struct rk3x_i2c *i2c)
|
||||
{
|
||||
@@ -534,11 +537,10 @@ static irqreturn_t rk3x_i2c_irq(int irqno, void *dev_id)
|
||||
}
|
||||
|
/**
* Get timing values of I2C specification
*
* rk3x_i2c_get_spec - Get timing values of I2C specification
* @speed: Desired SCL frequency
*
* Returns: Matched i2c spec values.
* Return: Matched i2c_spec_values.
*/
static const struct i2c_spec_values *rk3x_i2c_get_spec(unsigned int speed)
{
@@ -551,13 +553,12 @@ static const struct i2c_spec_values *rk3x_i2c_get_spec(unsigned int speed)
}

/**
* Calculate divider values for desired SCL frequency
*
* rk3x_i2c_v0_calc_timings - Calculate divider values for desired SCL frequency
* @clk_rate: I2C input clock rate
* @t: Known I2C timing information
* @t_calc: Caculated rk3x private timings that would be written into regs
*
* Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case
* Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case
* a best-effort divider value is returned in divs. If the target rate is
* too high, we silently use the highest possible rate.
*/
@@ -712,13 +713,12 @@ static int rk3x_i2c_v0_calc_timings(unsigned long clk_rate,
}

/**
* Calculate timing values for desired SCL frequency
*
* rk3x_i2c_v1_calc_timings - Calculate timing values for desired SCL frequency
* @clk_rate: I2C input clock rate
* @t: Known I2C timing information
* @t_calc: Caculated rk3x private timings that would be written into regs
*
* Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case
* Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case
* a best-effort divider value is returned in divs. If the target rate is
* too high, we silently use the highest possible rate.
* The following formulas are v1's method to calculate timings.
@@ -962,14 +962,14 @@ static int rk3x_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long
}

/**
* Setup I2C registers for an I2C operation specified by msgs, num.
*
* Must be called with i2c->lock held.
*
* rk3x_i2c_setup - Setup I2C registers for an I2C operation specified by msgs, num.
* @i2c: target controller data
* @msgs: I2C msgs to process
* @num: Number of msgs
*
* returns: Number of I2C msgs processed or negative in case of error
* Must be called with i2c->lock held.
*
* Return: Number of I2C msgs processed or negative in case of error
*/
static int rk3x_i2c_setup(struct rk3x_i2c *i2c, struct i2c_msg *msgs, int num)
{

@@ -292,6 +292,7 @@ static int accel_3d_capture_sample(struct hid_sensor_hub_device *hsdev,
hid_sensor_convert_timestamp(
&accel_state->common_attributes,
*(int64_t *)raw_data);
ret = 0;
break;
default:
break;

@@ -289,8 +289,10 @@ static int berlin2_adc_probe(struct platform_device *pdev)
int ret;

indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*priv));
if (!indio_dev)
if (!indio_dev) {
of_node_put(parent_np);
return -ENOMEM;
}

priv = iio_priv(indio_dev);
platform_set_drvdata(pdev, indio_dev);

@@ -1099,6 +1099,7 @@ static const struct of_device_id stm32_dfsdm_adc_match[] = {
},
{}
};
MODULE_DEVICE_TABLE(of, stm32_dfsdm_adc_match);

static int stm32_dfsdm_adc_probe(struct platform_device *pdev)
{

@@ -71,6 +71,18 @@
#define TWL6030_GPADCS BIT(1)
#define TWL6030_GPADCR BIT(0)

#define USB_VBUS_CTRL_SET 0x04
#define USB_ID_CTRL_SET 0x06

#define TWL6030_MISC1 0xE4
#define VBUS_MEAS 0x01
#define ID_MEAS 0x01

#define VAC_MEAS 0x04
#define VBAT_MEAS 0x02
#define BB_MEAS 0x01


/**
* struct twl6030_chnl_calib - channel calibration
* @gain: slope coefficient for ideal curve
@@ -943,6 +955,26 @@ static int twl6030_gpadc_probe(struct platform_device *pdev)
return ret;
}

ret = twl_i2c_write_u8(TWL_MODULE_USB, VBUS_MEAS, USB_VBUS_CTRL_SET);
if (ret < 0) {
dev_err(dev, "failed to wire up inputs\n");
return ret;
}

ret = twl_i2c_write_u8(TWL_MODULE_USB, ID_MEAS, USB_ID_CTRL_SET);
if (ret < 0) {
dev_err(dev, "failed to wire up inputs\n");
return ret;
}

ret = twl_i2c_write_u8(TWL6030_MODULE_ID0,
VBAT_MEAS | BB_MEAS | VAC_MEAS,
TWL6030_MISC1);
if (ret < 0) {
dev_err(dev, "failed to wire up inputs\n");
return ret;
}

indio_dev->name = DRIVER_NAME;
indio_dev->dev.parent = dev;
indio_dev->info = &twl6030_gpadc_iio_info;

@@ -1361,12 +1361,15 @@ static int user_exp_rcv_setup(struct hfi1_filedata *fd, unsigned long arg,
addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
sizeof(tinfo.tidcnt)))
return -EFAULT;
ret = -EFAULT;

addr = arg + offsetof(struct hfi1_tid_info, length);
if (copy_to_user((void __user *)addr, &tinfo.length,
if (!ret && copy_to_user((void __user *)addr, &tinfo.length,
sizeof(tinfo.length)))
ret = -EFAULT;

if (ret)
hfi1_user_exp_rcv_invalid(fd, &tinfo);
}

return ret;

@@ -215,16 +215,11 @@ static void unpin_rcv_pages(struct hfi1_filedata *fd,
static int pin_rcv_pages(struct hfi1_filedata *fd, struct tid_user_buf *tidbuf)
{
int pinned;
unsigned int npages;
unsigned int npages = tidbuf->npages;
unsigned long vaddr = tidbuf->vaddr;
struct page **pages = NULL;
struct hfi1_devdata *dd = fd->uctxt->dd;

/* Get the number of pages the user buffer spans */
npages = num_user_pages(vaddr, tidbuf->length);
if (!npages)
return -EINVAL;

if (npages > fd->uctxt->expected_count) {
dd_dev_err(dd, "Expected buffer too big\n");
return -EINVAL;
@@ -258,7 +253,6 @@ static int pin_rcv_pages(struct hfi1_filedata *fd, struct tid_user_buf *tidbuf)
return pinned;
}
tidbuf->pages = pages;
tidbuf->npages = npages;
fd->tid_n_pinned += pinned;
return pinned;
}
@@ -334,6 +328,7 @@ int hfi1_user_exp_rcv_setup(struct hfi1_filedata *fd,

tidbuf->vaddr = tinfo->vaddr;
tidbuf->length = tinfo->length;
tidbuf->npages = num_user_pages(tidbuf->vaddr, tidbuf->length);
tidbuf->psets = kcalloc(uctxt->expected_count, sizeof(*tidbuf->psets),
GFP_KERNEL);
if (!tidbuf->psets) {

(File diff suppressed because it is too large)
@@ -267,6 +267,12 @@ static void sdio_release_func(struct device *dev)
if (!(func->card->quirks & MMC_QUIRK_NONSTD_SDIO))
sdio_free_func_cis(func);

/*
* We have now removed the link to the tuples in the
* card structure, so remove the reference.
*/
put_device(&func->card->dev);

kfree(func->info);
kfree(func->tmpbuf);
kfree(func);
@@ -297,6 +303,12 @@ struct sdio_func *sdio_alloc_func(struct mmc_card *card)

device_initialize(&func->dev);

/*
* We may link to tuples in the card structure,
* we need make sure we have a reference to it.
*/
get_device(&func->card->dev);

func->dev.parent = &card->dev;
func->dev.bus = &sdio_bus_type;
func->dev.release = sdio_release_func;
@@ -350,10 +362,9 @@ int sdio_add_func(struct sdio_func *func)
*/
void sdio_remove_func(struct sdio_func *func)
{
if (!sdio_func_present(func))
return;
if (sdio_func_present(func))
device_del(&func->dev);

device_del(&func->dev);
of_node_put(func->dev.of_node);
put_device(&func->dev);
}

@@ -394,12 +394,6 @@ int sdio_read_func_cis(struct sdio_func *func)
if (ret)
return ret;

/*
* Since we've linked to tuples in the card structure,
* we must make sure we have a reference to it.
*/
get_device(&func->card->dev);

/*
* Vendor/device id is optional for function CIS, so
* copy it from the card structure as needed.
@@ -425,11 +419,5 @@ void sdio_free_func_cis(struct sdio_func *func)
}

func->tuples = NULL;

/*
* We have now removed the link to the tuples in the
* card structure, so remove the reference.
*/
put_device(&func->card->dev);
}


@@ -85,13 +85,13 @@ config WIREGUARD
select CRYPTO_CURVE25519_X86 if X86 && 64BIT
select ARM_CRYPTO if ARM
select ARM64_CRYPTO if ARM64
select CRYPTO_CHACHA20_NEON if (ARM || ARM64) && KERNEL_MODE_NEON
select CRYPTO_CHACHA20_NEON if ARM || (ARM64 && KERNEL_MODE_NEON)
select CRYPTO_POLY1305_NEON if ARM64 && KERNEL_MODE_NEON
select CRYPTO_POLY1305_ARM if ARM
select CRYPTO_BLAKE2S_ARM if ARM
select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
select CRYPTO_POLY1305_MIPS if CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
select CRYPTO_POLY1305_MIPS if MIPS
help
WireGuard is a secure, fast, and easy to use replacement for IPSec
that uses modern cryptography and clever networking tricks. It's

@@ -518,6 +518,7 @@ static int kvaser_usb_hydra_send_simple_cmd(struct kvaser_usb *dev,
u8 cmd_no, int channel)
{
struct kvaser_cmd *cmd;
size_t cmd_len;
int err;

cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
@@ -525,6 +526,7 @@ static int kvaser_usb_hydra_send_simple_cmd(struct kvaser_usb *dev,
return -ENOMEM;

cmd->header.cmd_no = cmd_no;
cmd_len = kvaser_usb_hydra_cmd_size(cmd);
if (channel < 0) {
kvaser_usb_hydra_set_cmd_dest_he
(cmd, KVASER_USB_HYDRA_HE_ADDRESS_ILLEGAL);
@@ -541,7 +543,7 @@ static int kvaser_usb_hydra_send_simple_cmd(struct kvaser_usb *dev,
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));

err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
if (err)
goto end;

@@ -557,6 +559,7 @@ kvaser_usb_hydra_send_simple_cmd_async(struct kvaser_usb_net_priv *priv,
{
struct kvaser_cmd *cmd;
struct kvaser_usb *dev = priv->dev;
size_t cmd_len;
int err;

cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_ATOMIC);
@@ -564,14 +567,14 @@ kvaser_usb_hydra_send_simple_cmd_async(struct kvaser_usb_net_priv *priv,
return -ENOMEM;

cmd->header.cmd_no = cmd_no;
cmd_len = kvaser_usb_hydra_cmd_size(cmd);

kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));

err = kvaser_usb_send_cmd_async(priv, cmd,
kvaser_usb_hydra_cmd_size(cmd));
err = kvaser_usb_send_cmd_async(priv, cmd, cmd_len);
if (err)
kfree(cmd);

@@ -715,6 +718,7 @@ static int kvaser_usb_hydra_get_single_capability(struct kvaser_usb *dev,
{
struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
struct kvaser_cmd *cmd;
size_t cmd_len;
u32 value = 0;
u32 mask = 0;
u16 cap_cmd_res;
@@ -726,13 +730,14 @@ static int kvaser_usb_hydra_get_single_capability(struct kvaser_usb *dev,
return -ENOMEM;

cmd->header.cmd_no = CMD_GET_CAPABILITIES_REQ;
cmd_len = kvaser_usb_hydra_cmd_size(cmd);
cmd->cap_req.cap_cmd = cpu_to_le16(cap_cmd_req);

kvaser_usb_hydra_set_cmd_dest_he(cmd, card_data->hydra.sysdbg_he);
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));

err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
if (err)
goto end;

@@ -1555,6 +1560,7 @@ static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
struct kvaser_usb *dev = priv->dev;
struct kvaser_usb_net_hydra_priv *hydra = priv->sub_priv;
struct kvaser_cmd *cmd;
size_t cmd_len;
int err;

if (!hydra)
@@ -1565,6 +1571,7 @@ static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
return -ENOMEM;

cmd->header.cmd_no = CMD_GET_BUSPARAMS_REQ;
cmd_len = kvaser_usb_hydra_cmd_size(cmd);
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid
@@ -1574,7 +1581,7 @@ static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,

reinit_completion(&priv->get_busparams_comp);

err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
if (err)
return err;

@@ -1601,6 +1608,7 @@ static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
struct kvaser_cmd *cmd;
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct kvaser_usb *dev = priv->dev;
size_t cmd_len;
int err;

cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
@@ -1608,6 +1616,7 @@ static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
return -ENOMEM;

cmd->header.cmd_no = CMD_SET_BUSPARAMS_REQ;
cmd_len = kvaser_usb_hydra_cmd_size(cmd);
memcpy(&cmd->set_busparams_req.busparams_nominal, busparams,
sizeof(cmd->set_busparams_req.busparams_nominal));

@@ -1616,7 +1625,7 @@ static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));

err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
err = kvaser_usb_send_cmd(dev, cmd, cmd_len);

kfree(cmd);

@@ -1629,6 +1638,7 @@ static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
struct kvaser_cmd *cmd;
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct kvaser_usb *dev = priv->dev;
size_t cmd_len;
int err;

cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
@@ -1636,6 +1646,7 @@ static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
return -ENOMEM;

cmd->header.cmd_no = CMD_SET_BUSPARAMS_FD_REQ;
cmd_len = kvaser_usb_hydra_cmd_size(cmd);
memcpy(&cmd->set_busparams_req.busparams_data, busparams,
sizeof(cmd->set_busparams_req.busparams_data));

@@ -1653,7 +1664,7 @@ static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));

err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
err = kvaser_usb_send_cmd(dev, cmd, cmd_len);

kfree(cmd);

@@ -1781,6 +1792,7 @@ static int kvaser_usb_hydra_get_software_info(struct kvaser_usb *dev)
static int kvaser_usb_hydra_get_software_details(struct kvaser_usb *dev)
{
struct kvaser_cmd *cmd;
size_t cmd_len;
int err;
u32 flags;
struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
@@ -1790,6 +1802,7 @@ static int kvaser_usb_hydra_get_software_details(struct kvaser_usb *dev)
return -ENOMEM;

cmd->header.cmd_no = CMD_GET_SOFTWARE_DETAILS_REQ;
cmd_len = kvaser_usb_hydra_cmd_size(cmd);
cmd->sw_detail_req.use_ext_cmd = 1;
kvaser_usb_hydra_set_cmd_dest_he
(cmd, KVASER_USB_HYDRA_HE_ADDRESS_ILLEGAL);
@@ -1797,7 +1810,7 @@ static int kvaser_usb_hydra_get_software_details(struct kvaser_usb *dev)
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));

err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
if (err)
goto end;

@@ -1913,6 +1926,7 @@ static int kvaser_usb_hydra_set_opt_mode(const struct kvaser_usb_net_priv *priv)
{
struct kvaser_usb *dev = priv->dev;
struct kvaser_cmd *cmd;
size_t cmd_len;
int err;

if ((priv->can.ctrlmode &
@@ -1928,6 +1942,7 @@ static int kvaser_usb_hydra_set_opt_mode(const struct kvaser_usb_net_priv *priv)
return -ENOMEM;

cmd->header.cmd_no = CMD_SET_DRIVERMODE_REQ;
cmd_len = kvaser_usb_hydra_cmd_size(cmd);
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid
@@ -1937,7 +1952,7 @@ static int kvaser_usb_hydra_set_opt_mode(const struct kvaser_usb_net_priv *priv)
else
cmd->set_ctrlmode.mode = KVASER_USB_HYDRA_CTRLMODE_NORMAL;

err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
kfree(cmd);

return err;

@@ -228,12 +228,12 @@ static int bgmac_probe(struct bcma_device *core)
bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
bgmac->feature_flags |= BGMAC_FEAT_FLW_CTRL1;
bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_PHY;
if (ci->pkg == BCMA_PKG_ID_BCM47188 ||
ci->pkg == BCMA_PKG_ID_BCM47186) {
if ((ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM47186) ||
(ci->id == BCMA_CHIP_ID_BCM53572 && ci->pkg == BCMA_PKG_ID_BCM47188)) {
bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_RGMII;
bgmac->feature_flags |= BGMAC_FEAT_IOST_ATTACHED;
}
if (ci->pkg == BCMA_PKG_ID_BCM5358)
if (ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM5358)
bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_EPHYRMII;
break;
case BCMA_CHIP_ID_BCM53573:

@@ -6118,10 +6118,14 @@ int bnxt_reserve_rings(struct bnxt *bp)
netdev_err(bp->dev, "ring reservation/IRQ init failure rc: %d\n", rc);
return rc;
}
if (tcs && (bp->tx_nr_rings_per_tc * tcs != bp->tx_nr_rings)) {
if (tcs && (bp->tx_nr_rings_per_tc * tcs !=
bp->tx_nr_rings - bp->tx_nr_rings_xdp)) {
netdev_err(bp->dev, "tx ring reservation failure\n");
netdev_reset_tc(bp->dev);
bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
if (bp->tx_nr_rings_xdp)
bp->tx_nr_rings_per_tc = bp->tx_nr_rings_xdp;
else
bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
return -ENOMEM;
}
bp->num_stat_ctxs = bp->cp_nr_rings;

@@ -2671,7 +2671,7 @@ static int i40e_change_mtu(struct net_device *netdev, int new_mtu)
struct i40e_pf *pf = vsi->back;

if (i40e_enabled_xdp_vsi(vsi)) {
int frame_size = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
int frame_size = new_mtu + I40E_PACKET_HDR_PAD;

if (frame_size > i40e_max_xdp_frame_size(vsi))
return -EINVAL;
@@ -11834,6 +11834,8 @@ static int i40e_ndo_bridge_setlink(struct net_device *dev,
}

br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
if (!br_spec)
return -EINVAL;

nla_for_each_nested(attr, br_spec, rem) {
__u16 mode;

@@ -520,9 +520,9 @@ int dwmac5_flex_pps_config(void __iomem *ioaddr, int index,
return 0;
}

val |= PPSCMDx(index, 0x2);
val |= TRGTMODSELx(index, 0x2);
val |= PPSEN0;
writel(val, ioaddr + MAC_PPS_CONTROL);

writel(cfg->start.tv_sec, ioaddr + MAC_PPSx_TARGET_TIME_SEC(index));

@@ -547,6 +547,7 @@ int dwmac5_flex_pps_config(void __iomem *ioaddr, int index,
writel(period - 1, ioaddr + MAC_PPSx_WIDTH(index));

/* Finally, activate it */
val |= PPSCMDx(index, 0x2);
writel(val, ioaddr + MAC_PPS_CONTROL);
return 0;
}

@@ -518,7 +518,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
dma_cfg->mixed_burst = of_property_read_bool(np, "snps,mixed-burst");

plat->force_thresh_dma_mode = of_property_read_bool(np, "snps,force_thresh_dma_mode");
if (plat->force_thresh_dma_mode) {
if (plat->force_thresh_dma_mode && plat->force_sf_dma_mode) {
plat->force_sf_dma_mode = 0;
pr_warn("force_sf_dma_mode is ignored if force_thresh_dma_mode is set.");
}

@@ -246,11 +246,26 @@ static struct phy_driver meson_gxl_phy[] = {
.config_intr = meson_gxl_config_intr,
.suspend = genphy_suspend,
.resume = genphy_resume,
.read_mmd = genphy_read_mmd_unsupported,
.write_mmd = genphy_write_mmd_unsupported,
}, {
PHY_ID_MATCH_EXACT(0x01803301),
.name = "Meson G12A Internal PHY",
.features = PHY_BASIC_FEATURES,
.flags = PHY_IS_INTERNAL,
.soft_reset = genphy_soft_reset,
.ack_interrupt = meson_gxl_ack_interrupt,
.config_intr = meson_gxl_config_intr,
.suspend = genphy_suspend,
.resume = genphy_resume,
.read_mmd = genphy_read_mmd_unsupported,
.write_mmd = genphy_write_mmd_unsupported,
},
};

static struct mdio_device_id __maybe_unused meson_gxl_tbl[] = {
{ 0x01814400, 0xfffffff0 },
{ PHY_ID_MATCH_VENDOR(0x01803301) },
{ }
};


@@ -69,8 +69,8 @@ kalmia_send_init_packet(struct usbnet *dev, u8 *init_msg, u8 init_msg_len,
init_msg, init_msg_len, &act_len, KALMIA_USB_TIMEOUT);
if (status != 0) {
netdev_err(dev->net,
"Error sending init packet. Status %i, length %i\n",
status, act_len);
"Error sending init packet. Status %i\n",
status);
return status;
}
else if (act_len != init_msg_len) {
@@ -87,8 +87,8 @@ kalmia_send_init_packet(struct usbnet *dev, u8 *init_msg, u8 init_msg_len,

if (status != 0)
netdev_err(dev->net,
"Error receiving init result. Status %i, length %i\n",
status, act_len);
"Error receiving init result. Status %i\n",
status);
else if (act_len != expected_len)
netdev_err(dev->net, "Unexpected init result length: %i\n",
act_len);

@@ -69,9 +69,7 @@
static inline int
pl_vendor_req(struct usbnet *dev, u8 req, u8 val, u8 index)
{
return usbnet_read_cmd(dev, req,
USB_DIR_IN | USB_TYPE_VENDOR |
USB_RECIP_DEVICE,
return usbnet_write_cmd(dev, req, USB_TYPE_VENDOR | USB_RECIP_DEVICE,
val, index, NULL, 0);
}


@@ -1,5 +1,4 @@
ccflags-y := -O3
ccflags-y += -D'pr_fmt(fmt)=KBUILD_MODNAME ": " fmt'
ccflags-y := -D'pr_fmt(fmt)=KBUILD_MODNAME ": " fmt'
ccflags-$(CONFIG_WIREGUARD_DEBUG) += -DDEBUG
wireguard-y := main.o
wireguard-y += noise.o

@@ -6,6 +6,8 @@
#include "allowedips.h"
#include "peer.h"

static struct kmem_cache *node_cache;

static void swap_endian(u8 *dst, const u8 *src, u8 bits)
{
if (bits == 32) {
@@ -28,8 +30,11 @@ static void copy_and_assign_cidr(struct allowedips_node *node, const u8 *src,
node->bitlen = bits;
memcpy(node->bits, src, bits / 8U);
}
#define CHOOSE_NODE(parent, key) \
parent->bit[(key[parent->bit_at_a] >> parent->bit_at_b) & 1]

static inline u8 choose(struct allowedips_node *node, const u8 *key)
{
return (key[node->bit_at_a] >> node->bit_at_b) & 1;
}

static void push_rcu(struct allowedips_node **stack,
struct allowedips_node __rcu *p, unsigned int *len)
@@ -40,6 +45,11 @@ static void push_rcu(struct allowedips_node **stack,
}
}

static void node_free_rcu(struct rcu_head *rcu)
{
kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
}

static void root_free_rcu(struct rcu_head *rcu)
{
struct allowedips_node *node, *stack[128] = {
@@ -49,7 +59,7 @@ static void root_free_rcu(struct rcu_head *rcu)
while (len > 0 && (node = stack[--len])) {
push_rcu(stack, node->bit[0], &len);
push_rcu(stack, node->bit[1], &len);
kfree(node);
kmem_cache_free(node_cache, node);
}
}

@@ -66,60 +76,6 @@ static void root_remove_peer_lists(struct allowedips_node *root)
}
}

static void walk_remove_by_peer(struct allowedips_node __rcu **top,
struct wg_peer *peer, struct mutex *lock)
{
#define REF(p) rcu_access_pointer(p)
#define DEREF(p) rcu_dereference_protected(*(p), lockdep_is_held(lock))
#define PUSH(p) ({ \
WARN_ON(IS_ENABLED(DEBUG) && len >= 128); \
stack[len++] = p; \
})

struct allowedips_node __rcu **stack[128], **nptr;
struct allowedips_node *node, *prev;
unsigned int len;

if (unlikely(!peer || !REF(*top)))
return;

for (prev = NULL, len = 0, PUSH(top); len > 0; prev = node) {
nptr = stack[len - 1];
node = DEREF(nptr);
if (!node) {
--len;
continue;
}
if (!prev || REF(prev->bit[0]) == node ||
REF(prev->bit[1]) == node) {
if (REF(node->bit[0]))
PUSH(&node->bit[0]);
else if (REF(node->bit[1]))
PUSH(&node->bit[1]);
} else if (REF(node->bit[0]) == prev) {
if (REF(node->bit[1]))
PUSH(&node->bit[1]);
} else {
if (rcu_dereference_protected(node->peer,
lockdep_is_held(lock)) == peer) {
RCU_INIT_POINTER(node->peer, NULL);
list_del_init(&node->peer_list);
if (!node->bit[0] || !node->bit[1]) {
rcu_assign_pointer(*nptr, DEREF(
&node->bit[!REF(node->bit[0])]));
kfree_rcu(node, rcu);
node = DEREF(nptr);
}
}
--len;
}
}

#undef REF
#undef DEREF
#undef PUSH
}

static unsigned int fls128(u64 a, u64 b)
{
return a ? fls64(a) + 64U : fls64(b);
@@ -159,7 +115,7 @@ static struct allowedips_node *find_node(struct allowedips_node *trie, u8 bits,
found = node;
if (node->cidr == bits)
break;
node = rcu_dereference_bh(CHOOSE_NODE(node, key));
node = rcu_dereference_bh(node->bit[choose(node, key)]);
}
return found;
}
@@ -191,8 +147,7 @@ static bool node_placement(struct allowedips_node __rcu *trie, const u8 *key,
u8 cidr, u8 bits, struct allowedips_node **rnode,
struct mutex *lock)
{
struct allowedips_node *node = rcu_dereference_protected(trie,
lockdep_is_held(lock));
struct allowedips_node *node = rcu_dereference_protected(trie, lockdep_is_held(lock));
struct allowedips_node *parent = NULL;
bool exact = false;

@@ -202,13 +157,24 @@ static bool node_placement(struct allowedips_node __rcu *trie, const u8 *key,
exact = true;
break;
}
node = rcu_dereference_protected(CHOOSE_NODE(parent, key),
lockdep_is_held(lock));
node = rcu_dereference_protected(parent->bit[choose(parent, key)], lockdep_is_held(lock));
}
*rnode = parent;
return exact;
}

static inline void connect_node(struct allowedips_node **parent, u8 bit, struct allowedips_node *node)
{
node->parent_bit_packed = (unsigned long)parent | bit;
rcu_assign_pointer(*parent, node);
}

static inline void choose_and_connect_node(struct allowedips_node *parent, struct allowedips_node *node)
{
u8 bit = choose(parent, node->bits);
connect_node(&parent->bit[bit], bit, node);
}

static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
u8 cidr, struct wg_peer *peer, struct mutex *lock)
{
@@ -218,13 +184,13 @@ static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
return -EINVAL;

if (!rcu_access_pointer(*trie)) {
node = kzalloc(sizeof(*node), GFP_KERNEL);
node = kmem_cache_zalloc(node_cache, GFP_KERNEL);
if (unlikely(!node))
return -ENOMEM;
RCU_INIT_POINTER(node->peer, peer);
list_add_tail(&node->peer_list, &peer->allowedips_list);
copy_and_assign_cidr(node, key, cidr, bits);
rcu_assign_pointer(*trie, node);
connect_node(trie, 2, node);
return 0;
}
if (node_placement(*trie, key, cidr, bits, &node, lock)) {
@@ -233,7 +199,7 @@ static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
return 0;
}

newnode = kzalloc(sizeof(*newnode), GFP_KERNEL);
newnode = kmem_cache_zalloc(node_cache, GFP_KERNEL);
if (unlikely(!newnode))
return -ENOMEM;
RCU_INIT_POINTER(newnode->peer, peer);
@@ -243,10 +209,10 @@ static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
if (!node) {
down = rcu_dereference_protected(*trie, lockdep_is_held(lock));
} else {
down = rcu_dereference_protected(CHOOSE_NODE(node, key),
lockdep_is_held(lock));
const u8 bit = choose(node, key);
down = rcu_dereference_protected(node->bit[bit], lockdep_is_held(lock));
if (!down) {
rcu_assign_pointer(CHOOSE_NODE(node, key), newnode);
connect_node(&node->bit[bit], bit, newnode);
return 0;
}
}
@@ -254,30 +220,29 @@ static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
parent = node;

if (newnode->cidr == cidr) {
rcu_assign_pointer(CHOOSE_NODE(newnode, down->bits), down);
choose_and_connect_node(newnode, down);
if (!parent)
rcu_assign_pointer(*trie, newnode);
connect_node(trie, 2, newnode);
else
rcu_assign_pointer(CHOOSE_NODE(parent, newnode->bits),
newnode);
} else {
node = kzalloc(sizeof(*node), GFP_KERNEL);
if (unlikely(!node)) {
list_del(&newnode->peer_list);
kfree(newnode);
return -ENOMEM;
}
INIT_LIST_HEAD(&node->peer_list);
copy_and_assign_cidr(node, newnode->bits, cidr, bits);

rcu_assign_pointer(CHOOSE_NODE(node, down->bits), down);
|
||||
rcu_assign_pointer(CHOOSE_NODE(node, newnode->bits), newnode);
|
||||
if (!parent)
|
||||
rcu_assign_pointer(*trie, node);
|
||||
else
|
||||
rcu_assign_pointer(CHOOSE_NODE(parent, node->bits),
|
||||
node);
|
||||
choose_and_connect_node(parent, newnode);
|
||||
return 0;
|
||||
}
|
||||
|
||||
node = kmem_cache_zalloc(node_cache, GFP_KERNEL);
|
||||
if (unlikely(!node)) {
|
||||
list_del(&newnode->peer_list);
|
||||
kmem_cache_free(node_cache, newnode);
|
||||
return -ENOMEM;
|
||||
}
|
||||
INIT_LIST_HEAD(&node->peer_list);
|
||||
copy_and_assign_cidr(node, newnode->bits, cidr, bits);
|
||||
|
||||
choose_and_connect_node(node, down);
|
||||
choose_and_connect_node(node, newnode);
|
||||
if (!parent)
|
||||
connect_node(trie, 2, node);
|
||||
else
|
||||
choose_and_connect_node(parent, node);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -335,9 +300,41 @@ int wg_allowedips_insert_v6(struct allowedips *table, const struct in6_addr *ip,
|
||||
void wg_allowedips_remove_by_peer(struct allowedips *table,
|
||||
struct wg_peer *peer, struct mutex *lock)
|
||||
{
|
||||
struct allowedips_node *node, *child, **parent_bit, *parent, *tmp;
|
||||
bool free_parent;
|
||||
|
||||
if (list_empty(&peer->allowedips_list))
|
||||
return;
|
||||
++table->seq;
|
||||
walk_remove_by_peer(&table->root4, peer, lock);
|
||||
walk_remove_by_peer(&table->root6, peer, lock);
|
||||
list_for_each_entry_safe(node, tmp, &peer->allowedips_list, peer_list) {
|
||||
list_del_init(&node->peer_list);
|
||||
RCU_INIT_POINTER(node->peer, NULL);
|
||||
if (node->bit[0] && node->bit[1])
|
||||
continue;
|
||||
child = rcu_dereference_protected(node->bit[!rcu_access_pointer(node->bit[0])],
|
||||
lockdep_is_held(lock));
|
||||
if (child)
|
||||
child->parent_bit_packed = node->parent_bit_packed;
|
||||
parent_bit = (struct allowedips_node **)(node->parent_bit_packed & ~3UL);
|
||||
*parent_bit = child;
|
||||
parent = (void *)parent_bit -
|
||||
offsetof(struct allowedips_node, bit[node->parent_bit_packed & 1]);
|
||||
free_parent = !rcu_access_pointer(node->bit[0]) &&
|
||||
!rcu_access_pointer(node->bit[1]) &&
|
||||
(node->parent_bit_packed & 3) <= 1 &&
|
||||
!rcu_access_pointer(parent->peer);
|
||||
if (free_parent)
|
||||
child = rcu_dereference_protected(
|
||||
parent->bit[!(node->parent_bit_packed & 1)],
|
||||
lockdep_is_held(lock));
|
||||
call_rcu(&node->rcu, node_free_rcu);
|
||||
if (!free_parent)
|
||||
continue;
|
||||
if (child)
|
||||
child->parent_bit_packed = parent->parent_bit_packed;
|
||||
*(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child;
|
||||
call_rcu(&parent->rcu, node_free_rcu);
|
||||
}
|
||||
}
|
||||
|
||||
int wg_allowedips_read_node(struct allowedips_node *node, u8 ip[16], u8 *cidr)
|
||||
@@ -374,4 +371,16 @@ struct wg_peer *wg_allowedips_lookup_src(struct allowedips *table,
|
||||
return NULL;
|
||||
}
|
||||
|
||||
int __init wg_allowedips_slab_init(void)
|
||||
{
|
||||
node_cache = KMEM_CACHE(allowedips_node, 0);
|
||||
return node_cache ? 0 : -ENOMEM;
|
||||
}
|
||||
|
||||
void wg_allowedips_slab_uninit(void)
|
||||
{
|
||||
rcu_barrier();
|
||||
kmem_cache_destroy(node_cache);
|
||||
}
|
||||
|
||||
#include "selftest/allowedips.c"
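The `connect_node()` and `parent_bit_packed` lines above are a pointer-tagging scheme: each trie node stores the address of the slot that points at it, with a 2-bit tag packed into the low bits (which are free because the slots are at least 4-byte aligned, per the `__aligned(4)` comment in the header). That back-pointer is what makes `wg_allowedips_remove_by_peer()` O(1) per node. A minimal userspace sketch of the trick (illustrative names, not the kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* A node remembers the slot that points at it by packing the slot's
 * address with a 2-bit tag; tags 0/1 mean "parent's left/right child
 * slot", tag 2 means "the root slot". */
struct tnode {
	struct tnode *bit[2];
	uintptr_t parent_bit_packed;
};

static void connect_node(struct tnode **parent, unsigned bit, struct tnode *node)
{
	node->parent_bit_packed = (uintptr_t)parent | bit;
	*parent = node;
}

/* Recover the slot pointing at a node in O(1), without walking the trie. */
static struct tnode **node_slot(const struct tnode *node)
{
	return (struct tnode **)(node->parent_bit_packed & ~(uintptr_t)3);
}

static unsigned node_tag(const struct tnode *node)
{
	return (unsigned)(node->parent_bit_packed & 3);
}
```

Unlinking a node then reduces to `*node_slot(node) = replacement;`, with no search for the parent.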

@@ -15,14 +15,11 @@ struct wg_peer;
struct allowedips_node {
	struct wg_peer __rcu *peer;
	struct allowedips_node __rcu *bit[2];
	/* While it may seem scandalous that we waste space for v4,
	 * we're alloc'ing to the nearest power of 2 anyway, so this
	 * doesn't actually make a difference.
	 */
	u8 bits[16] __aligned(__alignof(u64));
	u8 cidr, bit_at_a, bit_at_b, bitlen;
	u8 bits[16] __aligned(__alignof(u64));

	/* Keep rarely used list at bottom to be beyond cache line. */
	/* Keep rarely used members at bottom to be beyond cache line. */
	unsigned long parent_bit_packed;
	union {
		struct list_head peer_list;
		struct rcu_head rcu;
@@ -33,7 +30,7 @@ struct allowedips {
	struct allowedips_node __rcu *root4;
	struct allowedips_node __rcu *root6;
	u64 seq;
};
} __aligned(4); /* We pack the lower 2 bits of &root, but m68k only gives 16-bit alignment. */

void wg_allowedips_init(struct allowedips *table);
void wg_allowedips_free(struct allowedips *table, struct mutex *mutex);
@@ -56,4 +53,7 @@ struct wg_peer *wg_allowedips_lookup_src(struct allowedips *table,
bool wg_allowedips_selftest(void);
#endif

int wg_allowedips_slab_init(void);
void wg_allowedips_slab_uninit(void);

#endif /* _WG_ALLOWEDIPS_H */

@@ -98,6 +98,7 @@ static int wg_stop(struct net_device *dev)
{
	struct wg_device *wg = netdev_priv(dev);
	struct wg_peer *peer;
	struct sk_buff *skb;

	mutex_lock(&wg->device_update_lock);
	list_for_each_entry(peer, &wg->peer_list, peer_list) {
@@ -108,7 +109,9 @@ static int wg_stop(struct net_device *dev)
		wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
	}
	mutex_unlock(&wg->device_update_lock);
	skb_queue_purge(&wg->incoming_handshakes);
	while ((skb = ptr_ring_consume(&wg->handshake_queue.ring)) != NULL)
		kfree_skb(skb);
	atomic_set(&wg->handshake_queue_len, 0);
	wg_socket_reinit(wg, NULL, NULL);
	return 0;
}
@@ -138,7 +141,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
		else if (skb->protocol == htons(ETH_P_IPV6))
			net_dbg_ratelimited("%s: No peer has allowed IPs matching %pI6\n",
					    dev->name, &ipv6_hdr(skb)->daddr);
		goto err;
		goto err_icmp;
	}

	family = READ_ONCE(peer->endpoint.addr.sa_family);
@@ -201,12 +204,13 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)

err_peer:
	wg_peer_put(peer);
err:
	++dev->stats.tx_errors;
err_icmp:
	if (skb->protocol == htons(ETH_P_IP))
		icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
	else if (skb->protocol == htons(ETH_P_IPV6))
		icmpv6_ndo_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0);
err:
	++dev->stats.tx_errors;
	kfree_skb(skb);
	return ret;
}
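The `wg_xmit()` hunk above reorders the error labels so that every failure path falls through to a single accounting-and-free epilogue, with the ICMP notification labels placed before it. A hedged, simplified sketch of that goto-ladder pattern (hypothetical names, not the driver's code):

```c
#include <assert.h>

/* Failure paths either notify the sender first (err_notify) or skip
 * straight to the shared epilogue (err); label order makes the
 * notification case fall through into the common accounting. */
enum { ERR_NO_PEER = 1, ERR_NO_ENDPOINT = 2 };

static int tx_errors;
static int notified;

static int do_xmit(int have_peer, int have_endpoint)
{
	int ret = 0;

	if (!have_peer) {
		ret = -ERR_NO_PEER;
		goto err_notify;   /* notify sender, then fall into err */
	}
	if (!have_endpoint) {
		ret = -ERR_NO_ENDPOINT;
		goto err;          /* no notification, just account */
	}
	return 0;

err_notify:
	notified++;
	/* fall through */
err:
	tx_errors++;
	return ret;
}
```

Had the accounting label come first (as in the old code), the notification branch would have had to duplicate it or jump backwards.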
@@ -234,14 +238,13 @@ static void wg_destruct(struct net_device *dev)
	destroy_workqueue(wg->handshake_receive_wq);
	destroy_workqueue(wg->handshake_send_wq);
	destroy_workqueue(wg->packet_crypt_wq);
	wg_packet_queue_free(&wg->decrypt_queue, true);
	wg_packet_queue_free(&wg->encrypt_queue, true);
	wg_packet_queue_free(&wg->handshake_queue, true);
	wg_packet_queue_free(&wg->decrypt_queue, false);
	wg_packet_queue_free(&wg->encrypt_queue, false);
	rcu_barrier(); /* Wait for all the peers to be actually freed. */
	wg_ratelimiter_uninit();
	memzero_explicit(&wg->static_identity, sizeof(wg->static_identity));
	skb_queue_purge(&wg->incoming_handshakes);
	free_percpu(dev->tstats);
	free_percpu(wg->incoming_handshakes_worker);
	kvfree(wg->index_hashtable);
	kvfree(wg->peer_hashtable);
	mutex_unlock(&wg->device_update_lock);
@@ -296,7 +299,6 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
	init_rwsem(&wg->static_identity.lock);
	mutex_init(&wg->socket_update_lock);
	mutex_init(&wg->device_update_lock);
	skb_queue_head_init(&wg->incoming_handshakes);
	wg_allowedips_init(&wg->peer_allowedips);
	wg_cookie_checker_init(&wg->cookie_checker, wg);
	INIT_LIST_HEAD(&wg->peer_list);
@@ -314,16 +316,10 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
	if (!dev->tstats)
		goto err_free_index_hashtable;

	wg->incoming_handshakes_worker =
		wg_packet_percpu_multicore_worker_alloc(
				wg_packet_handshake_receive_worker, wg);
	if (!wg->incoming_handshakes_worker)
		goto err_free_tstats;

	wg->handshake_receive_wq = alloc_workqueue("wg-kex-%s",
			WQ_CPU_INTENSIVE | WQ_FREEZABLE, 0, dev->name);
	if (!wg->handshake_receive_wq)
		goto err_free_incoming_handshakes;
		goto err_free_tstats;

	wg->handshake_send_wq = alloc_workqueue("wg-kex-%s",
			WQ_UNBOUND | WQ_FREEZABLE, 0, dev->name);
@@ -336,19 +332,24 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
		goto err_destroy_handshake_send;

	ret = wg_packet_queue_init(&wg->encrypt_queue, wg_packet_encrypt_worker,
				   true, MAX_QUEUED_PACKETS);
				   MAX_QUEUED_PACKETS);
	if (ret < 0)
		goto err_destroy_packet_crypt;

	ret = wg_packet_queue_init(&wg->decrypt_queue, wg_packet_decrypt_worker,
				   true, MAX_QUEUED_PACKETS);
				   MAX_QUEUED_PACKETS);
	if (ret < 0)
		goto err_free_encrypt_queue;

	ret = wg_ratelimiter_init();
	ret = wg_packet_queue_init(&wg->handshake_queue, wg_packet_handshake_receive_worker,
				   MAX_QUEUED_INCOMING_HANDSHAKES);
	if (ret < 0)
		goto err_free_decrypt_queue;

	ret = wg_ratelimiter_init();
	if (ret < 0)
		goto err_free_handshake_queue;

	ret = register_netdevice(dev);
	if (ret < 0)
		goto err_uninit_ratelimiter;
@@ -365,18 +366,18 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,

err_uninit_ratelimiter:
	wg_ratelimiter_uninit();
err_free_handshake_queue:
	wg_packet_queue_free(&wg->handshake_queue, false);
err_free_decrypt_queue:
	wg_packet_queue_free(&wg->decrypt_queue, true);
	wg_packet_queue_free(&wg->decrypt_queue, false);
err_free_encrypt_queue:
	wg_packet_queue_free(&wg->encrypt_queue, true);
	wg_packet_queue_free(&wg->encrypt_queue, false);
err_destroy_packet_crypt:
	destroy_workqueue(wg->packet_crypt_wq);
err_destroy_handshake_send:
	destroy_workqueue(wg->handshake_send_wq);
err_destroy_handshake_receive:
	destroy_workqueue(wg->handshake_receive_wq);
err_free_incoming_handshakes:
	free_percpu(wg->incoming_handshakes_worker);
err_free_tstats:
	free_percpu(dev->tstats);
err_free_index_hashtable:
@@ -396,6 +397,7 @@ static struct rtnl_link_ops link_ops __read_mostly = {
static void wg_netns_exit(struct net *net)
{
	struct wg_device *wg;
	struct wg_peer *peer;

	rtnl_lock();
	list_for_each_entry(wg, &device_list, device_list) {
@@ -405,6 +407,8 @@ static void wg_netns_exit(struct net *net)
			mutex_lock(&wg->device_update_lock);
			rcu_assign_pointer(wg->creating_net, NULL);
			wg_socket_reinit(wg, NULL, NULL);
			list_for_each_entry(peer, &wg->peer_list, peer_list)
				wg_socket_clear_peer_endpoint_src(peer);
			mutex_unlock(&wg->device_update_lock);
	}
}

@@ -27,32 +27,30 @@ struct multicore_worker {

struct crypt_queue {
	struct ptr_ring ring;
	union {
		struct {
			struct multicore_worker __percpu *worker;
			int last_cpu;
		};
		struct work_struct work;
	};
	struct multicore_worker __percpu *worker;
	int last_cpu;
};

struct prev_queue {
	struct sk_buff *head, *tail, *peeked;
	struct { struct sk_buff *next, *prev; } empty; // Match first 2 members of struct sk_buff.
	atomic_t count;
};

struct wg_device {
	struct net_device *dev;
	struct crypt_queue encrypt_queue, decrypt_queue;
	struct crypt_queue encrypt_queue, decrypt_queue, handshake_queue;
	struct sock __rcu *sock4, *sock6;
	struct net __rcu *creating_net;
	struct noise_static_identity static_identity;
	struct workqueue_struct *handshake_receive_wq, *handshake_send_wq;
	struct workqueue_struct *packet_crypt_wq;
	struct sk_buff_head incoming_handshakes;
	int incoming_handshake_cpu;
	struct multicore_worker __percpu *incoming_handshakes_worker;
	struct workqueue_struct *packet_crypt_wq, *handshake_receive_wq, *handshake_send_wq;
	struct cookie_checker cookie_checker;
	struct pubkey_hashtable *peer_hashtable;
	struct index_hashtable *index_hashtable;
	struct allowedips peer_allowedips;
	struct mutex device_update_lock, socket_update_lock;
	struct list_head device_list, peer_list;
	atomic_t handshake_queue_len;
	unsigned int num_peers, device_update_gen;
	u32 fwmark;
	u16 incoming_port;

@@ -21,10 +21,15 @@ static int __init mod_init(void)
{
	int ret;

	ret = wg_allowedips_slab_init();
	if (ret < 0)
		goto err_allowedips;

#ifdef DEBUG
	ret = -ENOTRECOVERABLE;
	if (!wg_allowedips_selftest() || !wg_packet_counter_selftest() ||
	    !wg_ratelimiter_selftest())
		return -ENOTRECOVERABLE;
		goto err_device;
#endif
	wg_noise_init();

@@ -44,6 +49,8 @@ static int __init mod_init(void)
err_netlink:
	wg_device_uninit();
err_device:
	wg_allowedips_slab_uninit();
err_allowedips:
	return ret;
}

@@ -51,6 +58,7 @@ static void __exit mod_exit(void)
{
	wg_genetlink_uninit();
	wg_device_uninit();
	wg_allowedips_slab_uninit();
}

module_init(mod_init);

@@ -32,27 +32,22 @@ struct wg_peer *wg_peer_create(struct wg_device *wg,
	peer = kzalloc(sizeof(*peer), GFP_KERNEL);
	if (unlikely(!peer))
		return ERR_PTR(ret);
	peer->device = wg;
	if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))
		goto err;

	peer->device = wg;
	wg_noise_handshake_init(&peer->handshake, &wg->static_identity,
				public_key, preshared_key, peer);
	if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))
		goto err_1;
	if (wg_packet_queue_init(&peer->tx_queue, wg_packet_tx_worker, false,
				 MAX_QUEUED_PACKETS))
		goto err_2;
	if (wg_packet_queue_init(&peer->rx_queue, NULL, false,
				 MAX_QUEUED_PACKETS))
		goto err_3;

	peer->internal_id = atomic64_inc_return(&peer_counter);
	peer->serial_work_cpu = nr_cpumask_bits;
	wg_cookie_init(&peer->latest_cookie);
	wg_timers_init(peer);
	wg_cookie_checker_precompute_peer_keys(peer);
	spin_lock_init(&peer->keypairs.keypair_update_lock);
	INIT_WORK(&peer->transmit_handshake_work,
		  wg_packet_handshake_send_worker);
	INIT_WORK(&peer->transmit_handshake_work, wg_packet_handshake_send_worker);
	INIT_WORK(&peer->transmit_packet_work, wg_packet_tx_worker);
	wg_prev_queue_init(&peer->tx_queue);
	wg_prev_queue_init(&peer->rx_queue);
	rwlock_init(&peer->endpoint_lock);
	kref_init(&peer->refcount);
	skb_queue_head_init(&peer->staged_packet_queue);
@@ -68,11 +63,7 @@ struct wg_peer *wg_peer_create(struct wg_device *wg,
	pr_debug("%s: Peer %llu created\n", wg->dev->name, peer->internal_id);
	return peer;

err_3:
	wg_packet_queue_free(&peer->tx_queue, false);
err_2:
	dst_cache_destroy(&peer->endpoint_cache);
err_1:
err:
	kfree(peer);
	return ERR_PTR(ret);
}
@@ -97,7 +88,7 @@ static void peer_make_dead(struct wg_peer *peer)
	/* Mark as dead, so that we don't allow jumping contexts after. */
	WRITE_ONCE(peer->is_dead, true);

	/* The caller must now synchronize_rcu() for this to take effect. */
	/* The caller must now synchronize_net() for this to take effect. */
}

static void peer_remove_after_dead(struct wg_peer *peer)
@@ -169,7 +160,7 @@ void wg_peer_remove(struct wg_peer *peer)
	lockdep_assert_held(&peer->device->device_update_lock);

	peer_make_dead(peer);
	synchronize_rcu();
	synchronize_net();
	peer_remove_after_dead(peer);
}

@@ -187,7 +178,7 @@ void wg_peer_remove_all(struct wg_device *wg)
		peer_make_dead(peer);
		list_add_tail(&peer->peer_list, &dead_peers);
	}
	synchronize_rcu();
	synchronize_net();
	list_for_each_entry_safe(peer, temp, &dead_peers, peer_list)
		peer_remove_after_dead(peer);
}
@@ -197,8 +188,7 @@ static void rcu_release(struct rcu_head *rcu)
	struct wg_peer *peer = container_of(rcu, struct wg_peer, rcu);

	dst_cache_destroy(&peer->endpoint_cache);
	wg_packet_queue_free(&peer->rx_queue, false);
	wg_packet_queue_free(&peer->tx_queue, false);
	WARN_ON(wg_prev_queue_peek(&peer->tx_queue) || wg_prev_queue_peek(&peer->rx_queue));

	/* The final zeroing takes care of clearing any remaining handshake key
	 * material and other potentially sensitive information.

@@ -36,7 +36,7 @@ struct endpoint {

struct wg_peer {
	struct wg_device *device;
	struct crypt_queue tx_queue, rx_queue;
	struct prev_queue tx_queue, rx_queue;
	struct sk_buff_head staged_packet_queue;
	int serial_work_cpu;
	struct noise_keypairs keypairs;
@@ -45,7 +45,7 @@ struct wg_peer {
	rwlock_t endpoint_lock;
	struct noise_handshake handshake;
	atomic64_t last_sent_handshake;
	struct work_struct transmit_handshake_work, clear_peer_work;
	struct work_struct transmit_handshake_work, clear_peer_work, transmit_packet_work;
	struct cookie latest_cookie;
	struct hlist_node pubkey_hash;
	u64 rx_bytes, tx_bytes;

@@ -9,8 +9,7 @@ struct multicore_worker __percpu *
wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
{
	int cpu;
	struct multicore_worker __percpu *worker =
		alloc_percpu(struct multicore_worker);
	struct multicore_worker __percpu *worker = alloc_percpu(struct multicore_worker);

	if (!worker)
		return NULL;
@@ -23,7 +22,7 @@ wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
}

int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
			 bool multicore, unsigned int len)
			 unsigned int len)
{
	int ret;

@@ -31,25 +30,78 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
	ret = ptr_ring_init(&queue->ring, len, GFP_KERNEL);
	if (ret)
		return ret;
	if (function) {
		if (multicore) {
			queue->worker = wg_packet_percpu_multicore_worker_alloc(
				function, queue);
			if (!queue->worker) {
				ptr_ring_cleanup(&queue->ring, NULL);
				return -ENOMEM;
			}
		} else {
			INIT_WORK(&queue->work, function);
		}
	queue->worker = wg_packet_percpu_multicore_worker_alloc(function, queue);
	if (!queue->worker) {
		ptr_ring_cleanup(&queue->ring, NULL);
		return -ENOMEM;
	}
	return 0;
}

void wg_packet_queue_free(struct crypt_queue *queue, bool multicore)
void wg_packet_queue_free(struct crypt_queue *queue, bool purge)
{
	if (multicore)
		free_percpu(queue->worker);
	WARN_ON(!__ptr_ring_empty(&queue->ring));
	ptr_ring_cleanup(&queue->ring, NULL);
	free_percpu(queue->worker);
	WARN_ON(!purge && !__ptr_ring_empty(&queue->ring));
	ptr_ring_cleanup(&queue->ring, purge ? (void(*)(void*))kfree_skb : NULL);
}

#define NEXT(skb) ((skb)->prev)
#define STUB(queue) ((struct sk_buff *)&queue->empty)

void wg_prev_queue_init(struct prev_queue *queue)
{
	NEXT(STUB(queue)) = NULL;
	queue->head = queue->tail = STUB(queue);
	queue->peeked = NULL;
	atomic_set(&queue->count, 0);
	BUILD_BUG_ON(
		offsetof(struct sk_buff, next) != offsetof(struct prev_queue, empty.next) -
							offsetof(struct prev_queue, empty) ||
		offsetof(struct sk_buff, prev) != offsetof(struct prev_queue, empty.prev) -
							offsetof(struct prev_queue, empty));
}

static void __wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb)
{
	WRITE_ONCE(NEXT(skb), NULL);
	WRITE_ONCE(NEXT(xchg_release(&queue->head, skb)), skb);
}

bool wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb)
{
	if (!atomic_add_unless(&queue->count, 1, MAX_QUEUED_PACKETS))
		return false;
	__wg_prev_queue_enqueue(queue, skb);
	return true;
}

struct sk_buff *wg_prev_queue_dequeue(struct prev_queue *queue)
{
	struct sk_buff *tail = queue->tail, *next = smp_load_acquire(&NEXT(tail));

	if (tail == STUB(queue)) {
		if (!next)
			return NULL;
		queue->tail = next;
		tail = next;
		next = smp_load_acquire(&NEXT(next));
	}
	if (next) {
		queue->tail = next;
		atomic_dec(&queue->count);
		return tail;
	}
	if (tail != READ_ONCE(queue->head))
		return NULL;
	__wg_prev_queue_enqueue(queue, STUB(queue));
	next = smp_load_acquire(&NEXT(tail));
	if (next) {
		queue->tail = next;
		atomic_dec(&queue->count);
		return tail;
	}
	return NULL;
}

#undef NEXT
#undef STUB
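The `wg_prev_queue_*` functions above implement a stub-node multi-producer single-consumer queue: a permanent stub element guarantees that enqueue always has a predecessor to link from, so the queue needs no separate empty/non-empty cases, and the consumer re-inserts the stub whenever it would otherwise run dry. A single-threaded sketch of the same structure (the kernel version uses `xchg_release`/`smp_load_acquire` so producers stay lock-free; plain stores here keep the shape visible):

```c
#include <stddef.h>

/* Illustrative stub-node queue; names and types are hypothetical. */
struct item {
	struct item *next;
};

struct mpsc_queue {
	struct item *head, *tail;
	struct item stub;
};

static void queue_init(struct mpsc_queue *q)
{
	q->stub.next = NULL;
	q->head = q->tail = &q->stub;
}

static void queue_push(struct mpsc_queue *q, struct item *it)
{
	struct item *prev;
	it->next = NULL;
	prev = q->head;          /* kernel: xchg_release(&q->head, it) */
	q->head = it;
	prev->next = it;         /* publish: consumer finds it via ->next */
}

static struct item *queue_pop(struct mpsc_queue *q)
{
	struct item *tail = q->tail, *next = tail->next;

	if (tail == &q->stub) {  /* skip over the stub element */
		if (!next)
			return NULL;
		q->tail = next;
		tail = next;
		next = tail->next;
	}
	if (next) {
		q->tail = next;
		return tail;
	}
	if (tail != q->head)     /* producer mid-enqueue; retry later */
		return NULL;
	queue_push(q, &q->stub); /* re-insert stub so tail never runs dry */
	next = tail->next;
	if (next) {
		q->tail = next;
		return tail;
	}
	return NULL;
}
```

The `// Match first 2 members of struct sk_buff` comment in `struct prev_queue` exists because the stub is a fake `sk_buff` overlaying only the link pointers, which the `BUILD_BUG_ON` in `wg_prev_queue_init()` verifies.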

@@ -17,12 +17,13 @@ struct wg_device;
struct wg_peer;
struct multicore_worker;
struct crypt_queue;
struct prev_queue;
struct sk_buff;

/* queueing.c APIs: */
int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
			 bool multicore, unsigned int len);
void wg_packet_queue_free(struct crypt_queue *queue, bool multicore);
			 unsigned int len);
void wg_packet_queue_free(struct crypt_queue *queue, bool purge);
struct multicore_worker __percpu *
wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr);

@@ -135,8 +136,31 @@ static inline int wg_cpumask_next_online(int *next)
	return cpu;
}

void wg_prev_queue_init(struct prev_queue *queue);

/* Multi producer */
bool wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb);

/* Single consumer */
struct sk_buff *wg_prev_queue_dequeue(struct prev_queue *queue);

/* Single consumer */
static inline struct sk_buff *wg_prev_queue_peek(struct prev_queue *queue)
{
	if (queue->peeked)
		return queue->peeked;
	queue->peeked = wg_prev_queue_dequeue(queue);
	return queue->peeked;
}

/* Single consumer */
static inline void wg_prev_queue_drop_peeked(struct prev_queue *queue)
{
	queue->peeked = NULL;
}

static inline int wg_queue_enqueue_per_device_and_peer(
	struct crypt_queue *device_queue, struct crypt_queue *peer_queue,
	struct crypt_queue *device_queue, struct prev_queue *peer_queue,
	struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu)
{
	int cpu;
@@ -145,8 +169,9 @@ static inline int wg_queue_enqueue_per_device_and_peer(
	/* We first queue this up for the peer ingestion, but the consumer
	 * will wait for the state to change to CRYPTED or DEAD before.
	 */
	if (unlikely(ptr_ring_produce_bh(&peer_queue->ring, skb)))
	if (unlikely(!wg_prev_queue_enqueue(peer_queue, skb)))
		return -ENOSPC;

	/* Then we queue it up in the device queue, which consumes the
	 * packet as soon as it can.
	 */
@@ -157,9 +182,7 @@ static inline int wg_queue_enqueue_per_device_and_peer(
	return 0;
}

static inline void wg_queue_enqueue_per_peer(struct crypt_queue *queue,
					     struct sk_buff *skb,
					     enum packet_state state)
static inline void wg_queue_enqueue_per_peer_tx(struct sk_buff *skb, enum packet_state state)
{
	/* We take a reference, because as soon as we call atomic_set, the
	 * peer can be freed from below us.
@@ -167,14 +190,12 @@ static inline void wg_queue_enqueue_per_peer(struct crypt_queue *queue,
	struct wg_peer *peer = wg_peer_get(PACKET_PEER(skb));

	atomic_set_release(&PACKET_CB(skb)->state, state);
	queue_work_on(wg_cpumask_choose_online(&peer->serial_work_cpu,
					       peer->internal_id),
		      peer->device->packet_crypt_wq, &queue->work);
	queue_work_on(wg_cpumask_choose_online(&peer->serial_work_cpu, peer->internal_id),
		      peer->device->packet_crypt_wq, &peer->transmit_packet_work);
	wg_peer_put(peer);
}

static inline void wg_queue_enqueue_per_peer_napi(struct sk_buff *skb,
						  enum packet_state state)
static inline void wg_queue_enqueue_per_peer_rx(struct sk_buff *skb, enum packet_state state)
{
	/* We take a reference, because as soon as we call atomic_set, the
	 * peer can be freed from below us.

@@ -176,12 +176,12 @@ int wg_ratelimiter_init(void)
			(1U << 14) / sizeof(struct hlist_head)));
	max_entries = table_size * 8;

	table_v4 = kvzalloc(table_size * sizeof(*table_v4), GFP_KERNEL);
	table_v4 = kvcalloc(table_size, sizeof(*table_v4), GFP_KERNEL);
	if (unlikely(!table_v4))
		goto err_kmemcache;

#if IS_ENABLED(CONFIG_IPV6)
	table_v6 = kvzalloc(table_size * sizeof(*table_v6), GFP_KERNEL);
	table_v6 = kvcalloc(table_size, sizeof(*table_v6), GFP_KERNEL);
	if (unlikely(!table_v6)) {
		kvfree(table_v4);
		goto err_kmemcache;
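The `kvzalloc(n * size)` to `kvcalloc(n, size)` change above is about integer overflow: a hand-written multiply wraps silently and can produce a tiny allocation, while calloc-style allocators reject the request when `n * size` overflows. A userspace sketch of the check that calloc-family functions perform internally:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper mirroring the overflow check inside kvcalloc:
 * refuse the request rather than wrap the byte count. */
static int checked_mul(size_t n, size_t size, size_t *bytes)
{
	if (size && n > SIZE_MAX / size)
		return -1;      /* n * size would overflow */
	*bytes = n * size;
	return 0;
}
```

With plain `kvzalloc(table_size * sizeof(*table_v4))`, an attacker-influenced `table_size` could wrap the product and yield an undersized table; `kvcalloc` fails the allocation instead.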

@@ -116,8 +116,8 @@ static void wg_receive_handshake_packet(struct wg_device *wg,
		return;
	}

	under_load = skb_queue_len(&wg->incoming_handshakes) >=
			MAX_QUEUED_INCOMING_HANDSHAKES / 8;
	under_load = atomic_read(&wg->handshake_queue_len) >=
			MAX_QUEUED_INCOMING_HANDSHAKES / 8;
	if (under_load) {
		last_under_load = ktime_get_coarse_boottime_ns();
	} else if (last_under_load) {
@@ -212,13 +212,14 @@ static void wg_receive_handshake_packet(struct wg_device *wg,

void wg_packet_handshake_receive_worker(struct work_struct *work)
{
	struct wg_device *wg = container_of(work, struct multicore_worker,
					    work)->ptr;
	struct crypt_queue *queue = container_of(work, struct multicore_worker, work)->ptr;
	struct wg_device *wg = container_of(queue, struct wg_device, handshake_queue);
	struct sk_buff *skb;

	while ((skb = skb_dequeue(&wg->incoming_handshakes)) != NULL) {
	while ((skb = ptr_ring_consume_bh(&queue->ring)) != NULL) {
		wg_receive_handshake_packet(wg, skb);
		dev_kfree_skb(skb);
		atomic_dec(&wg->handshake_queue_len);
		cond_resched();
	}
}
@@ -444,7 +445,6 @@ static void wg_packet_consume_data_done(struct wg_peer *peer,
int wg_packet_rx_poll(struct napi_struct *napi, int budget)
{
	struct wg_peer *peer = container_of(napi, struct wg_peer, napi);
	struct crypt_queue *queue = &peer->rx_queue;
	struct noise_keypair *keypair;
	struct endpoint endpoint;
	enum packet_state state;
@@ -455,11 +455,10 @@ int wg_packet_rx_poll(struct napi_struct *napi, int budget)
	if (unlikely(budget <= 0))
		return 0;

	while ((skb = __ptr_ring_peek(&queue->ring)) != NULL &&
	while ((skb = wg_prev_queue_peek(&peer->rx_queue)) != NULL &&
	       (state = atomic_read_acquire(&PACKET_CB(skb)->state)) !=
			PACKET_STATE_UNCRYPTED) {
		__ptr_ring_discard_one(&queue->ring);
		peer = PACKET_PEER(skb);
		wg_prev_queue_drop_peeked(&peer->rx_queue);
		keypair = PACKET_CB(skb)->keypair;
		free = true;

@@ -508,7 +507,7 @@ void wg_packet_decrypt_worker(struct work_struct *work)
		enum packet_state state =
			likely(decrypt_packet(skb, PACKET_CB(skb)->keypair)) ?
				PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
		wg_queue_enqueue_per_peer_napi(skb, state);
		wg_queue_enqueue_per_peer_rx(skb, state);
		if (need_resched())
			cond_resched();
	}
@@ -531,12 +530,10 @@ static void wg_packet_consume_data(struct wg_device *wg, struct sk_buff *skb)
	if (unlikely(READ_ONCE(peer->is_dead)))
		goto err;

	ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue,
						   &peer->rx_queue, skb,
						   wg->packet_crypt_wq,
						   &wg->decrypt_queue.last_cpu);
	ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue, &peer->rx_queue, skb,
						   wg->packet_crypt_wq, &wg->decrypt_queue.last_cpu);
	if (unlikely(ret == -EPIPE))
		wg_queue_enqueue_per_peer_napi(skb, PACKET_STATE_DEAD);
		wg_queue_enqueue_per_peer_rx(skb, PACKET_STATE_DEAD);
	if (likely(!ret || ret == -EPIPE)) {
		rcu_read_unlock_bh();
		return;
@@ -557,22 +554,28 @@ void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb)
	case cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION):
	case cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE):
	case cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE): {
		int cpu;
		int cpu, ret = -EBUSY;

		if (skb_queue_len(&wg->incoming_handshakes) >
			    MAX_QUEUED_INCOMING_HANDSHAKES ||
		    unlikely(!rng_is_initialized())) {
		if (unlikely(!rng_is_initialized()))
			goto drop;
		if (atomic_read(&wg->handshake_queue_len) > MAX_QUEUED_INCOMING_HANDSHAKES / 2) {
			if (spin_trylock_bh(&wg->handshake_queue.ring.producer_lock)) {
				ret = __ptr_ring_produce(&wg->handshake_queue.ring, skb);
				spin_unlock_bh(&wg->handshake_queue.ring.producer_lock);
			}
		} else
			ret = ptr_ring_produce_bh(&wg->handshake_queue.ring, skb);
		if (ret) {
drop:
			net_dbg_skb_ratelimited("%s: Dropping handshake packet from %pISpfsc\n",
						wg->dev->name, skb);
			goto err;
		}
		skb_queue_tail(&wg->incoming_handshakes, skb);
		/* Queues up a call to packet_process_queued_handshake_
		 * packets(skb):
		 */
		cpu = wg_cpumask_next_online(&wg->incoming_handshake_cpu);
		atomic_inc(&wg->handshake_queue_len);
		cpu = wg_cpumask_next_online(&wg->handshake_queue.last_cpu);
		/* Queues up a call to packet_process_queued_handshake_packets(skb): */
		queue_work_on(cpu, wg->handshake_receive_wq,
			      &per_cpu_ptr(wg->incoming_handshakes_worker, cpu)->work);
			      &per_cpu_ptr(wg->handshake_queue.worker, cpu)->work);
		break;
	}
	case cpu_to_le32(MESSAGE_DATA):
|
||||
|
||||
@@ -19,32 +19,22 @@

 #include <linux/siphash.h>

-static __init void swap_endian_and_apply_cidr(u8 *dst, const u8 *src, u8 bits,
-					      u8 cidr)
-{
-	swap_endian(dst, src, bits);
-	memset(dst + (cidr + 7) / 8, 0, bits / 8 - (cidr + 7) / 8);
-	if (cidr)
-		dst[(cidr + 7) / 8 - 1] &= ~0U << ((8 - (cidr % 8)) % 8);
-}
-
 static __init void print_node(struct allowedips_node *node, u8 bits)
 {
 	char *fmt_connection = KERN_DEBUG "\t\"%p/%d\" -> \"%p/%d\";\n";
-	char *fmt_declaration = KERN_DEBUG
-		"\t\"%p/%d\"[style=%s, color=\"#%06x\"];\n";
+	char *fmt_declaration = KERN_DEBUG "\t\"%p/%d\"[style=%s, color=\"#%06x\"];\n";
+	u8 ip1[16], ip2[16], cidr1, cidr2;
 	char *style = "dotted";
-	u8 ip1[16], ip2[16];
 	u32 color = 0;

 	if (node == NULL)
 		return;
 	if (bits == 32) {
 		fmt_connection = KERN_DEBUG "\t\"%pI4/%d\" -> \"%pI4/%d\";\n";
-		fmt_declaration = KERN_DEBUG
-			"\t\"%pI4/%d\"[style=%s, color=\"#%06x\"];\n";
+		fmt_declaration = KERN_DEBUG "\t\"%pI4/%d\"[style=%s, color=\"#%06x\"];\n";
 	} else if (bits == 128) {
 		fmt_connection = KERN_DEBUG "\t\"%pI6/%d\" -> \"%pI6/%d\";\n";
-		fmt_declaration = KERN_DEBUG
-			"\t\"%pI6/%d\"[style=%s, color=\"#%06x\"];\n";
+		fmt_declaration = KERN_DEBUG "\t\"%pI6/%d\"[style=%s, color=\"#%06x\"];\n";
 	}
 	if (node->peer) {
 		hsiphash_key_t key = { { 0 } };

@@ -55,24 +45,20 @@ static __init void print_node(struct allowedips_node *node, u8 bits)
 			hsiphash_1u32(0xabad1dea, &key) % 200;
 		style = "bold";
 	}
-	swap_endian_and_apply_cidr(ip1, node->bits, bits, node->cidr);
-	printk(fmt_declaration, ip1, node->cidr, style, color);
+	wg_allowedips_read_node(node, ip1, &cidr1);
+	printk(fmt_declaration, ip1, cidr1, style, color);
 	if (node->bit[0]) {
-		swap_endian_and_apply_cidr(ip2,
-					   rcu_dereference_raw(node->bit[0])->bits, bits,
-					   node->cidr);
-		printk(fmt_connection, ip1, node->cidr, ip2,
-		       rcu_dereference_raw(node->bit[0])->cidr);
-		print_node(rcu_dereference_raw(node->bit[0]), bits);
+		wg_allowedips_read_node(rcu_dereference_raw(node->bit[0]), ip2, &cidr2);
+		printk(fmt_connection, ip1, cidr1, ip2, cidr2);
 	}
 	if (node->bit[1]) {
-		swap_endian_and_apply_cidr(ip2,
-					   rcu_dereference_raw(node->bit[1])->bits,
-					   bits, node->cidr);
-		printk(fmt_connection, ip1, node->cidr, ip2,
-		       rcu_dereference_raw(node->bit[1])->cidr);
-		print_node(rcu_dereference_raw(node->bit[1]), bits);
+		wg_allowedips_read_node(rcu_dereference_raw(node->bit[1]), ip2, &cidr2);
+		printk(fmt_connection, ip1, cidr1, ip2, cidr2);
 	}
+	if (node->bit[0])
+		print_node(rcu_dereference_raw(node->bit[0]), bits);
+	if (node->bit[1])
+		print_node(rcu_dereference_raw(node->bit[1]), bits);
 }

 static __init void print_tree(struct allowedips_node __rcu *top, u8 bits)

@@ -121,8 +107,8 @@ static __init inline union nf_inet_addr horrible_cidr_to_mask(u8 cidr)
 {
 	union nf_inet_addr mask;

-	memset(&mask, 0x00, 128 / 8);
-	memset(&mask, 0xff, cidr / 8);
+	memset(&mask, 0, sizeof(mask));
+	memset(&mask.all, 0xff, cidr / 8);
 	if (cidr % 32)
 		mask.all[cidr / 32] = (__force u32)htonl(
 			(0xFFFFFFFFUL << (32 - (cidr % 32))) & 0xFFFFFFFFUL);

@@ -149,42 +135,36 @@ horrible_mask_self(struct horrible_allowedips_node *node)
 }

 static __init inline bool
-horrible_match_v4(const struct horrible_allowedips_node *node,
-		  struct in_addr *ip)
+horrible_match_v4(const struct horrible_allowedips_node *node, struct in_addr *ip)
 {
 	return (ip->s_addr & node->mask.ip) == node->ip.ip;
 }

 static __init inline bool
-horrible_match_v6(const struct horrible_allowedips_node *node,
-		  struct in6_addr *ip)
+horrible_match_v6(const struct horrible_allowedips_node *node, struct in6_addr *ip)
 {
-	return (ip->in6_u.u6_addr32[0] & node->mask.ip6[0]) ==
-		       node->ip.ip6[0] &&
-	       (ip->in6_u.u6_addr32[1] & node->mask.ip6[1]) ==
-		       node->ip.ip6[1] &&
-	       (ip->in6_u.u6_addr32[2] & node->mask.ip6[2]) ==
-		       node->ip.ip6[2] &&
+	return (ip->in6_u.u6_addr32[0] & node->mask.ip6[0]) == node->ip.ip6[0] &&
+	       (ip->in6_u.u6_addr32[1] & node->mask.ip6[1]) == node->ip.ip6[1] &&
+	       (ip->in6_u.u6_addr32[2] & node->mask.ip6[2]) == node->ip.ip6[2] &&
 	       (ip->in6_u.u6_addr32[3] & node->mask.ip6[3]) == node->ip.ip6[3];
 }

 static __init void
-horrible_insert_ordered(struct horrible_allowedips *table,
-			struct horrible_allowedips_node *node)
+horrible_insert_ordered(struct horrible_allowedips *table, struct horrible_allowedips_node *node)
 {
 	struct horrible_allowedips_node *other = NULL, *where = NULL;
 	u8 my_cidr = horrible_mask_to_cidr(node->mask);

 	hlist_for_each_entry(other, &table->head, table) {
-		if (!memcmp(&other->mask, &node->mask,
-			    sizeof(union nf_inet_addr)) &&
-		    !memcmp(&other->ip, &node->ip,
-			    sizeof(union nf_inet_addr)) &&
-		    other->ip_version == node->ip_version) {
+		if (other->ip_version == node->ip_version &&
+		    !memcmp(&other->mask, &node->mask, sizeof(union nf_inet_addr)) &&
+		    !memcmp(&other->ip, &node->ip, sizeof(union nf_inet_addr))) {
 			other->value = node->value;
 			kfree(node);
 			return;
 		}
 	}
 	hlist_for_each_entry(other, &table->head, table) {
 		where = other;
 		if (horrible_mask_to_cidr(other->mask) <= my_cidr)
 			break;

@@ -201,8 +181,7 @@ static __init int
 horrible_allowedips_insert_v4(struct horrible_allowedips *table,
 			      struct in_addr *ip, u8 cidr, void *value)
 {
-	struct horrible_allowedips_node *node = kzalloc(sizeof(*node),
-							GFP_KERNEL);
+	struct horrible_allowedips_node *node = kzalloc(sizeof(*node), GFP_KERNEL);

 	if (unlikely(!node))
 		return -ENOMEM;

@@ -219,8 +198,7 @@ static __init int
 horrible_allowedips_insert_v6(struct horrible_allowedips *table,
 			      struct in6_addr *ip, u8 cidr, void *value)
 {
-	struct horrible_allowedips_node *node = kzalloc(sizeof(*node),
-							GFP_KERNEL);
+	struct horrible_allowedips_node *node = kzalloc(sizeof(*node), GFP_KERNEL);

 	if (unlikely(!node))
 		return -ENOMEM;

@@ -234,39 +212,43 @@ horrible_allowedips_insert_v6(struct horrible_allowedips *table,
 }

 static __init void *
-horrible_allowedips_lookup_v4(struct horrible_allowedips *table,
-			      struct in_addr *ip)
+horrible_allowedips_lookup_v4(struct horrible_allowedips *table, struct in_addr *ip)
 {
 	struct horrible_allowedips_node *node;
-	void *ret = NULL;

 	hlist_for_each_entry(node, &table->head, table) {
-		if (node->ip_version != 4)
-			continue;
-		if (horrible_match_v4(node, ip)) {
-			ret = node->value;
-			break;
-		}
+		if (node->ip_version == 4 && horrible_match_v4(node, ip))
+			return node->value;
 	}
-	return ret;
+	return NULL;
 }

 static __init void *
-horrible_allowedips_lookup_v6(struct horrible_allowedips *table,
-			      struct in6_addr *ip)
+horrible_allowedips_lookup_v6(struct horrible_allowedips *table, struct in6_addr *ip)
 {
 	struct horrible_allowedips_node *node;
-	void *ret = NULL;

 	hlist_for_each_entry(node, &table->head, table) {
-		if (node->ip_version != 6)
-			continue;
-		if (horrible_match_v6(node, ip)) {
-			ret = node->value;
-			break;
-		}
+		if (node->ip_version == 6 && horrible_match_v6(node, ip))
+			return node->value;
 	}
-	return ret;
+	return NULL;
 }

+static __init void
+horrible_allowedips_remove_by_value(struct horrible_allowedips *table, void *value)
+{
+	struct horrible_allowedips_node *node;
+	struct hlist_node *h;
+
+	hlist_for_each_entry_safe(node, h, &table->head, table) {
+		if (node->value != value)
+			continue;
+		hlist_del(&node->table);
+		kfree(node);
+	}
+
+}

 static __init bool randomized_test(void)

@@ -296,6 +278,7 @@ static __init bool randomized_test(void)
 			goto free;
 		}
 		kref_init(&peers[i]->refcount);
+		INIT_LIST_HEAD(&peers[i]->allowedips_list);
 	}

 	mutex_lock(&mutex);

@@ -333,7 +316,7 @@ static __init bool randomized_test(void)
 			if (wg_allowedips_insert_v4(&t,
 						    (struct in_addr *)mutated,
 						    cidr, peer, &mutex) < 0) {
-				pr_err("allowedips random malloc: FAIL\n");
+				pr_err("allowedips random self-test malloc: FAIL\n");
 				goto free_locked;
 			}
 			if (horrible_allowedips_insert_v4(&h,

@@ -396,23 +379,33 @@ static __init bool randomized_test(void)
 		print_tree(t.root6, 128);
 	}

-	for (i = 0; i < NUM_QUERIES; ++i) {
-		prandom_bytes(ip, 4);
-		if (lookup(t.root4, 32, ip) !=
-		    horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip)) {
-			pr_err("allowedips random self-test: FAIL\n");
-			goto free;
+	for (j = 0;; ++j) {
+		for (i = 0; i < NUM_QUERIES; ++i) {
+			prandom_bytes(ip, 4);
+			if (lookup(t.root4, 32, ip) != horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip)) {
+				horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip);
+				pr_err("allowedips random v4 self-test: FAIL\n");
+				goto free;
+			}
+			prandom_bytes(ip, 16);
+			if (lookup(t.root6, 128, ip) != horrible_allowedips_lookup_v6(&h, (struct in6_addr *)ip)) {
+				pr_err("allowedips random v6 self-test: FAIL\n");
+				goto free;
+			}
 		}
-	}
-
-	for (i = 0; i < NUM_QUERIES; ++i) {
-		prandom_bytes(ip, 16);
-		if (lookup(t.root6, 128, ip) !=
-		    horrible_allowedips_lookup_v6(&h, (struct in6_addr *)ip)) {
-			pr_err("allowedips random self-test: FAIL\n");
-			goto free;
-		}
+		if (j >= NUM_PEERS)
+			break;
+		mutex_lock(&mutex);
+		wg_allowedips_remove_by_peer(&t, peers[j], &mutex);
+		mutex_unlock(&mutex);
+		horrible_allowedips_remove_by_value(&h, peers[j]);
 	}

+	if (t.root4 || t.root6) {
+		pr_err("allowedips random self-test removal: FAIL\n");
+		goto free;
+	}
+
 	ret = true;

 free:
@@ -239,8 +239,7 @@ void wg_packet_send_keepalive(struct wg_peer *peer)
 	wg_packet_send_staged_packets(peer);
 }

-static void wg_packet_create_data_done(struct sk_buff *first,
-				       struct wg_peer *peer)
+static void wg_packet_create_data_done(struct wg_peer *peer, struct sk_buff *first)
 {
 	struct sk_buff *skb, *next;
 	bool is_keepalive, data_sent = false;

@@ -262,22 +261,19 @@ static void wg_packet_create_data_done(struct sk_buff *first,

 void wg_packet_tx_worker(struct work_struct *work)
 {
-	struct crypt_queue *queue = container_of(work, struct crypt_queue,
-						 work);
+	struct wg_peer *peer = container_of(work, struct wg_peer, transmit_packet_work);
 	struct noise_keypair *keypair;
 	enum packet_state state;
 	struct sk_buff *first;
-	struct wg_peer *peer;

-	while ((first = __ptr_ring_peek(&queue->ring)) != NULL &&
+	while ((first = wg_prev_queue_peek(&peer->tx_queue)) != NULL &&
 	       (state = atomic_read_acquire(&PACKET_CB(first)->state)) !=
		       PACKET_STATE_UNCRYPTED) {
-		__ptr_ring_discard_one(&queue->ring);
-		peer = PACKET_PEER(first);
+		wg_prev_queue_drop_peeked(&peer->tx_queue);
 		keypair = PACKET_CB(first)->keypair;

 		if (likely(state == PACKET_STATE_CRYPTED))
-			wg_packet_create_data_done(first, peer);
+			wg_packet_create_data_done(peer, first);
 		else
 			kfree_skb_list(first);

@@ -306,16 +302,14 @@ void wg_packet_encrypt_worker(struct work_struct *work)
 				break;
 			}
 		}
-		wg_queue_enqueue_per_peer(&PACKET_PEER(first)->tx_queue, first,
-					  state);
+		wg_queue_enqueue_per_peer_tx(first, state);
 		if (need_resched())
 			cond_resched();
 	}
 }

-static void wg_packet_create_data(struct sk_buff *first)
+static void wg_packet_create_data(struct wg_peer *peer, struct sk_buff *first)
 {
-	struct wg_peer *peer = PACKET_PEER(first);
 	struct wg_device *wg = peer->device;
 	int ret = -EINVAL;

@@ -323,13 +317,10 @@ static void wg_packet_create_data(struct sk_buff *first)
 	if (unlikely(READ_ONCE(peer->is_dead)))
 		goto err;

-	ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue,
-						   &peer->tx_queue, first,
-						   wg->packet_crypt_wq,
-						   &wg->encrypt_queue.last_cpu);
+	ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue, &peer->tx_queue, first,
+						   wg->packet_crypt_wq, &wg->encrypt_queue.last_cpu);
 	if (unlikely(ret == -EPIPE))
-		wg_queue_enqueue_per_peer(&peer->tx_queue, first,
-					  PACKET_STATE_DEAD);
+		wg_queue_enqueue_per_peer_tx(first, PACKET_STATE_DEAD);
 err:
 	rcu_read_unlock_bh();
 	if (likely(!ret || ret == -EPIPE))

@@ -393,7 +384,7 @@ void wg_packet_send_staged_packets(struct wg_peer *peer)
 	packets.prev->next = NULL;
 	wg_peer_get(keypair->entry.peer);
 	PACKET_CB(packets.next)->keypair = keypair;
-	wg_packet_create_data(packets.next);
+	wg_packet_create_data(peer, packets.next);
 	return;

 out_invalid:
@@ -308,7 +308,7 @@ void wg_socket_clear_peer_endpoint_src(struct wg_peer *peer)
 {
 	write_lock_bh(&peer->endpoint_lock);
 	memset(&peer->endpoint.src6, 0, sizeof(peer->endpoint.src6));
-	dst_cache_reset(&peer->endpoint_cache);
+	dst_cache_reset_now(&peer->endpoint_cache);
 	write_unlock_bh(&peer->endpoint_lock);
 }

@@ -430,7 +430,7 @@ void wg_socket_reinit(struct wg_device *wg, struct sock *new4,
 	if (new4)
 		wg->incoming_port = ntohs(inet_sk(new4)->inet_sport);
 	mutex_unlock(&wg->socket_update_lock);
-	synchronize_rcu();
+	synchronize_net();
 	sock_free(old4);
 	sock_free(old6);
 }
@@ -1080,6 +1080,47 @@ static int hwsim_unicast_netgroup(struct mac80211_hwsim_data *data,
 	return res;
 }

+static void mac80211_hwsim_config_mac_nl(struct ieee80211_hw *hw,
+					 const u8 *addr, bool add)
+{
+	struct mac80211_hwsim_data *data = hw->priv;
+	u32 _portid = READ_ONCE(data->wmediumd);
+	struct sk_buff *skb;
+	void *msg_head;
+
+	if (!_portid && !hwsim_virtio_enabled)
+		return;
+
+	skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_ATOMIC);
+	if (!skb)
+		return;
+
+	msg_head = genlmsg_put(skb, 0, 0, &hwsim_genl_family, 0,
+			       add ? HWSIM_CMD_ADD_MAC_ADDR :
+				     HWSIM_CMD_DEL_MAC_ADDR);
+	if (!msg_head) {
+		pr_debug("mac80211_hwsim: problem with msg_head\n");
+		goto nla_put_failure;
+	}
+
+	if (nla_put(skb, HWSIM_ATTR_ADDR_TRANSMITTER,
+		    ETH_ALEN, data->addresses[1].addr))
+		goto nla_put_failure;
+
+	if (nla_put(skb, HWSIM_ATTR_ADDR_RECEIVER, ETH_ALEN, addr))
+		goto nla_put_failure;
+
+	genlmsg_end(skb, msg_head);
+
+	if (hwsim_virtio_enabled)
+		hwsim_tx_virtio(data, skb);
+	else
+		hwsim_unicast_netgroup(data, skb, _portid);
+	return;
+nla_put_failure:
+	nlmsg_free(skb);
+}
+
 static inline u16 trans_tx_rate_flags_ieee2hwsim(struct ieee80211_tx_rate *rate)
 {
 	u16 result = 0;

@@ -1168,7 +1209,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw,
 	if (nla_put_u32(skb, HWSIM_ATTR_FLAGS, hwsim_flags))
 		goto nla_put_failure;

-	if (nla_put_u32(skb, HWSIM_ATTR_FREQ, data->channel->center_freq))
+	if (nla_put_u32(skb, HWSIM_ATTR_FREQ, channel->center_freq))
 		goto nla_put_failure;

 	/* We get the tx control (rate and retries) info*/

@@ -1555,6 +1596,9 @@ static int mac80211_hwsim_add_interface(struct ieee80211_hw *hw,
 		    vif->addr);
 	hwsim_set_magic(vif);

+	if (vif->type != NL80211_IFTYPE_MONITOR)
+		mac80211_hwsim_config_mac_nl(hw, vif->addr, true);
+
 	vif->cab_queue = 0;
 	vif->hw_queue[IEEE80211_AC_VO] = 0;
 	vif->hw_queue[IEEE80211_AC_VI] = 1;

@@ -1594,6 +1638,8 @@ static void mac80211_hwsim_remove_interface(
 		    vif->addr);
 	hwsim_check_magic(vif);
 	hwsim_clear_magic(vif);
+	if (vif->type != NL80211_IFTYPE_MONITOR)
+		mac80211_hwsim_config_mac_nl(hw, vif->addr, false);
 }

 static void mac80211_hwsim_tx_frame(struct ieee80211_hw *hw,

@@ -2111,6 +2157,8 @@ static void hw_scan_work(struct work_struct *work)
 		hwsim->hw_scan_vif = NULL;
 		hwsim->tmp_chan = NULL;
 		mutex_unlock(&hwsim->mutex);
+		mac80211_hwsim_config_mac_nl(hwsim->hw, hwsim->scan_addr,
+					     false);
 		return;
 	}

@@ -2196,6 +2244,7 @@ static int mac80211_hwsim_hw_scan(struct ieee80211_hw *hw,
 	memset(hwsim->survey_data, 0, sizeof(hwsim->survey_data));
 	mutex_unlock(&hwsim->mutex);

+	mac80211_hwsim_config_mac_nl(hw, hwsim->scan_addr, true);
 	wiphy_dbg(hw->wiphy, "hwsim hw_scan request\n");

 	ieee80211_queue_delayed_work(hwsim->hw, &hwsim->hw_scan, 0);

@@ -2239,6 +2288,7 @@ static void mac80211_hwsim_sw_scan(struct ieee80211_hw *hw,
 	pr_debug("hwsim sw_scan request, prepping stuff\n");

 	memcpy(hwsim->scan_addr, mac_addr, ETH_ALEN);
+	mac80211_hwsim_config_mac_nl(hw, hwsim->scan_addr, true);
 	hwsim->scanning = true;
 	memset(hwsim->survey_data, 0, sizeof(hwsim->survey_data));

@@ -2255,6 +2305,7 @@ static void mac80211_hwsim_sw_scan_complete(struct ieee80211_hw *hw,

 	pr_debug("hwsim sw_scan_complete\n");
 	hwsim->scanning = false;
+	mac80211_hwsim_config_mac_nl(hw, hwsim->scan_addr, false);
 	eth_zero_addr(hwsim->scan_addr);

 	mutex_unlock(&hwsim->mutex);

@@ -2355,10 +2406,10 @@ static void mac80211_hwsim_remove_chanctx(struct ieee80211_hw *hw,
 	mutex_lock(&hwsim->mutex);
 	hwsim->chanctx = NULL;
 	mutex_unlock(&hwsim->mutex);
-	wiphy_debug(hw->wiphy,
-		    "remove channel context control: %d MHz/width: %d/cfreqs:%d/%d MHz\n",
-		    ctx->def.chan->center_freq, ctx->def.width,
-		    ctx->def.center_freq1, ctx->def.center_freq2);
+	wiphy_dbg(hw->wiphy,
+		  "remove channel context control: %d MHz/width: %d/cfreqs:%d/%d MHz\n",
+		  ctx->def.chan->center_freq, ctx->def.width,
+		  ctx->def.center_freq1, ctx->def.center_freq2);
 	hwsim_check_chanctx_magic(ctx);
 	hwsim_clear_chanctx_magic(ctx);
 }

@@ -3324,6 +3375,17 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
 	if (!channel)
 		goto out;

+	if (data2->use_chanctx) {
+		if (data2->tmp_chan)
+			channel = data2->tmp_chan;
+		else if (data2->chanctx)
+			channel = data2->chanctx->def.chan;
+	} else {
+		channel = data2->channel;
+	}
+	if (!channel)
+		goto out;
+
 	if (!hwsim_virtio_enabled) {
 		if (hwsim_net_get_netgroup(genl_info_net(info)) !=
 		    data2->netgroup)

@@ -3358,8 +3420,6 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
 	}

-	rx_status.freq = data2->channel->center_freq;
-	rx_status.band = data2->channel->band;
+	rx_status.band = channel->band;
 	rx_status.rate_idx = nla_get_u32(info->attrs[HWSIM_ATTR_RX_RATE]);
 	if (rx_status.rate_idx >= data2->hw->wiphy->bands[rx_status.band]->n_bitrates)
 		goto out;

@@ -74,6 +74,12 @@ enum hwsim_tx_control_flags {
 * @HWSIM_CMD_DEL_RADIO: destroy a radio, reply is multicasted
 * @HWSIM_CMD_GET_RADIO: fetch information about existing radios, uses:
 *	%HWSIM_ATTR_RADIO_ID
+ * @HWSIM_CMD_ADD_MAC_ADDR: add a receive MAC address (given in the
+ *	%HWSIM_ATTR_ADDR_RECEIVER attribute) to a device identified by
+ *	%HWSIM_ATTR_ADDR_TRANSMITTER. This lets wmediumd forward frames
+ *	to this receiver address for a given station.
+ * @HWSIM_CMD_DEL_MAC_ADDR: remove the MAC address again, the attributes
+ *	are the same as to @HWSIM_CMD_ADD_MAC_ADDR.
 * @__HWSIM_CMD_MAX: enum limit
 */
 enum {

@@ -84,6 +90,8 @@ enum {
 	HWSIM_CMD_NEW_RADIO,
 	HWSIM_CMD_DEL_RADIO,
 	HWSIM_CMD_GET_RADIO,
+	HWSIM_CMD_ADD_MAC_ADDR,
+	HWSIM_CMD_DEL_MAC_ADDR,
 	__HWSIM_CMD_MAX,
 };
 #define HWSIM_CMD_MAX (_HWSIM_CMD_MAX - 1)
@@ -58,6 +58,7 @@ static struct memory_type_mapping mem_type_mapping_tbl[] = {
 };

 static const struct of_device_id mwifiex_sdio_of_match_table[] = {
 	{ .compatible = "marvell,sd8787" },
+	{ .compatible = "marvell,sd8897" },
 	{ .compatible = "marvell,sd8997" },
 	{ }
@@ -4375,12 +4375,9 @@ void rtl8xxxu_gen1_report_connect(struct rtl8xxxu_priv *priv,
 void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
 				  u8 macid, bool connect)
 {
-#ifdef RTL8XXXU_GEN2_REPORT_CONNECT
 	/*
-	 * Barry Day reports this causes issues with 8192eu and 8723bu
-	 * devices reconnecting. The reason for this is unclear, but
-	 * until it is better understood, leave the code in place but
-	 * disabled, so it is not lost.
+	 * The firmware turns on the rate control when it knows it's
+	 * connected to a network.
 	 */
 	struct h2c_cmd h2c;

@@ -4393,7 +4390,6 @@ void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
 	h2c.media_status_rpt.parm &= ~BIT(0);

 	rtl8xxxu_gen2_h2c_cmd(priv, &h2c, sizeof(h2c.media_status_rpt));
-#endif
 }

 void rtl8xxxu_gen1_init_aggregation(struct rtl8xxxu_priv *priv)
@@ -1186,6 +1186,11 @@ int nvdimm_has_flush(struct nd_region *nd_region)
 	    || !IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API))
 		return -ENXIO;

+	/* Test if an explicit flush function is defined */
+	if (test_bit(ND_REGION_ASYNC, &nd_region->flags) && nd_region->flush)
+		return 1;
+
+	/* Test if any flush hints for the region are available */
 	for (i = 0; i < nd_region->ndr_mappings; i++) {
 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
 		struct nvdimm *nvdimm = nd_mapping->nvdimm;

@@ -1196,8 +1201,8 @@ int nvdimm_has_flush(struct nd_region *nd_region)
 	}

 	/*
-	 * The platform defines dimm devices without hints, assume
-	 * platform persistence mechanism like ADR
+	 * The platform defines dimm devices without hints nor explicit flush,
+	 * assume platform persistence mechanism like ADR
 	 */
 	return 0;
 }
@@ -1325,8 +1325,10 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
 	else {
 		queue = nvmet_fc_alloc_target_queue(iod->assoc, 0,
 				be16_to_cpu(rqst->assoc_cmd.sqsize));
-		if (!queue)
+		if (!queue) {
 			ret = VERR_QUEUE_ALLOC_FAIL;
+			nvmet_fc_tgt_a_put(iod->assoc);
+		}
 	}
 }
@@ -290,15 +290,17 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem)
 			continue;
 		if (len < 2 * sizeof(u32)) {
 			dev_err(dev, "nvmem: invalid reg on %pOF\n", child);
+			of_node_put(child);
 			return -EINVAL;
 		}

 		cell = kzalloc(sizeof(*cell), GFP_KERNEL);
-		if (!cell)
+		if (!cell) {
+			of_node_put(child);
 			return -ENOMEM;
+		}

 		cell->nvmem = nvmem;
-		cell->np = of_node_get(child);
 		cell->offset = be32_to_cpup(addr++);
 		cell->bytes = be32_to_cpup(addr);
 		cell->name = child->name;

@@ -318,11 +320,12 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem)
 			dev_err(dev, "cell %s unaligned to nvmem stride %d\n",
 				cell->name, nvmem->stride);
 			/* Cells already added will be freed later. */
-			of_node_put(cell->np);
 			kfree(cell);
+			of_node_put(child);
 			return -EINVAL;
 		}

+		cell->np = of_node_get(child);
 		nvmem_cell_add(cell);
 	}

||||
@@ -1244,7 +1244,16 @@ DEFINE_SIMPLE_PROP(pinctrl2, "pinctrl-2", NULL)
|
||||
DEFINE_SIMPLE_PROP(pinctrl3, "pinctrl-3", NULL)
|
||||
DEFINE_SUFFIX_PROP(regulators, "-supply", NULL)
|
||||
DEFINE_SUFFIX_PROP(gpio, "-gpio", "#gpio-cells")
|
||||
DEFINE_SUFFIX_PROP(gpios, "-gpios", "#gpio-cells")
|
||||
|
||||
static struct device_node *parse_gpios(struct device_node *np,
|
||||
const char *prop_name, int index)
|
||||
{
|
||||
if (!strcmp_suffix(prop_name, ",nr-gpios"))
|
||||
return NULL;
|
||||
|
||||
return parse_suffix_prop_cells(np, prop_name, index, "-gpios",
|
||||
"#gpio-cells");
|
||||
}
|
||||
|
||||
static struct device_node *parse_iommu_maps(struct device_node *np,
|
||||
const char *prop_name, int index)
|
||||
|
||||
@@ -295,7 +295,7 @@ static int aspeed_disable_sig(const struct aspeed_sig_expr **exprs,
 	int ret = 0;

 	if (!exprs)
-		return true;
+		return -EINVAL;

 	while (*exprs && !ret) {
 		ret = aspeed_sig_expr_disable(*exprs, maps);
@@ -116,7 +116,7 @@ struct intel_pinctrl {
 #define padgroup_offset(g, p) ((p) - (g)->base)

 static struct intel_community *intel_get_community(struct intel_pinctrl *pctrl,
-						   unsigned pin)
+						   unsigned int pin)
 {
 	struct intel_community *community;
 	int i;

@@ -134,7 +134,7 @@ static struct intel_community *intel_get_community(struct intel_pinctrl *pctrl,

 static const struct intel_padgroup *
 intel_community_get_padgroup(const struct intel_community *community,
-			     unsigned pin)
+			     unsigned int pin)
 {
 	int i;

@@ -148,11 +148,11 @@ intel_community_get_padgroup(const struct intel_community *community,
 	return NULL;
 }

-static void __iomem *intel_get_padcfg(struct intel_pinctrl *pctrl, unsigned pin,
-				      unsigned reg)
+static void __iomem *intel_get_padcfg(struct intel_pinctrl *pctrl,
+				      unsigned int pin, unsigned int reg)
 {
 	const struct intel_community *community;
-	unsigned padno;
+	unsigned int padno;
 	size_t nregs;

 	community = intel_get_community(pctrl, pin);

@@ -168,11 +168,11 @@ static void __iomem *intel_get_padcfg(struct intel_pinctrl *pctrl, unsigned pin,
 	return community->pad_regs + reg + padno * nregs * 4;
 }

-static bool intel_pad_owned_by_host(struct intel_pinctrl *pctrl, unsigned pin)
+static bool intel_pad_owned_by_host(struct intel_pinctrl *pctrl, unsigned int pin)
 {
 	const struct intel_community *community;
 	const struct intel_padgroup *padgrp;
-	unsigned gpp, offset, gpp_offset;
+	unsigned int gpp, offset, gpp_offset;
 	void __iomem *padown;

 	community = intel_get_community(pctrl, pin);

@@ -193,11 +193,11 @@ static bool intel_pad_owned_by_host(struct intel_pinctrl *pctrl, unsigned pin)
 	return !(readl(padown) & PADOWN_MASK(gpp_offset));
 }

-static bool intel_pad_acpi_mode(struct intel_pinctrl *pctrl, unsigned pin)
+static bool intel_pad_acpi_mode(struct intel_pinctrl *pctrl, unsigned int pin)
 {
 	const struct intel_community *community;
 	const struct intel_padgroup *padgrp;
-	unsigned offset, gpp_offset;
+	unsigned int offset, gpp_offset;
 	void __iomem *hostown;

 	community = intel_get_community(pctrl, pin);

@@ -217,11 +217,11 @@ static bool intel_pad_acpi_mode(struct intel_pinctrl *pctrl, unsigned pin)
 	return !(readl(hostown) & BIT(gpp_offset));
 }

-static bool intel_pad_locked(struct intel_pinctrl *pctrl, unsigned pin)
+static bool intel_pad_locked(struct intel_pinctrl *pctrl, unsigned int pin)
 {
 	struct intel_community *community;
 	const struct intel_padgroup *padgrp;
-	unsigned offset, gpp_offset;
+	unsigned int offset, gpp_offset;
 	u32 value;

 	community = intel_get_community(pctrl, pin);

@@ -254,7 +254,7 @@ static bool intel_pad_locked(struct intel_pinctrl *pctrl, unsigned pin)
 	return false;
 }

-static bool intel_pad_usable(struct intel_pinctrl *pctrl, unsigned pin)
+static bool intel_pad_usable(struct intel_pinctrl *pctrl, unsigned int pin)
 {
 	return intel_pad_owned_by_host(pctrl, pin) &&
 	       !intel_pad_locked(pctrl, pin);

@@ -268,15 +268,15 @@ static int intel_get_groups_count(struct pinctrl_dev *pctldev)
 }

 static const char *intel_get_group_name(struct pinctrl_dev *pctldev,
-					unsigned group)
+					unsigned int group)
 {
 	struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);

 	return pctrl->soc->groups[group].name;
 }

-static int intel_get_group_pins(struct pinctrl_dev *pctldev, unsigned group,
-				const unsigned **pins, unsigned *npins)
+static int intel_get_group_pins(struct pinctrl_dev *pctldev, unsigned int group,
+				const unsigned int **pins, unsigned int *npins)
 {
 	struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);

@@ -286,7 +286,7 @@ static int intel_get_group_pins(struct pinctrl_dev *pctldev, unsigned group,
 }

 static void intel_pin_dbg_show(struct pinctrl_dev *pctldev, struct seq_file *s,
-			       unsigned pin)
+			       unsigned int pin)
 {
 	struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
 	void __iomem *padcfg;

@@ -345,7 +345,7 @@ static int intel_get_functions_count(struct pinctrl_dev *pctldev)
 }

 static const char *intel_get_function_name(struct pinctrl_dev *pctldev,
-					   unsigned function)
+					   unsigned int function)
 {
 	struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);

@@ -353,9 +353,9 @@ static const char *intel_get_function_name(struct pinctrl_dev *pctldev,
 }

 static int intel_get_function_groups(struct pinctrl_dev *pctldev,
-				     unsigned function,
+				     unsigned int function,
 				     const char * const **groups,
-				     unsigned * const ngroups)
+				     unsigned int * const ngroups)
 {
 	struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);

@@ -364,8 +364,8 @@ static int intel_get_function_groups(struct pinctrl_dev *pctldev,
 	return 0;
 }

-static int intel_pinmux_set_mux(struct pinctrl_dev *pctldev, unsigned function,
-				unsigned group)
+static int intel_pinmux_set_mux(struct pinctrl_dev *pctldev,
+				unsigned int function, unsigned int group)
 {
 	struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
 	const struct intel_pingroup *grp = &pctrl->soc->groups[group];

@@ -447,7 +447,7 @@ static void intel_gpio_set_gpio_mode(void __iomem *padcfg0)

 static int intel_gpio_request_enable(struct pinctrl_dev *pctldev,
 				     struct pinctrl_gpio_range *range,
-				     unsigned pin)
+				     unsigned int pin)
 {
 	struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
 	void __iomem *padcfg0;

@@ -485,7 +485,7 @@ static int intel_gpio_request_enable(struct pinctrl_dev *pctldev,

 static int intel_gpio_set_direction(struct pinctrl_dev *pctldev,
 				    struct pinctrl_gpio_range *range,
-				    unsigned pin, bool input)
+				    unsigned int pin, bool input)
 {
 	struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
 	void __iomem *padcfg0;

@@ -510,7 +510,7 @@ static const struct pinmux_ops intel_pinmux_ops = {
 	.gpio_set_direction = intel_gpio_set_direction,
 };

-static int intel_config_get(struct pinctrl_dev *pctldev, unsigned pin,
+static int intel_config_get(struct pinctrl_dev *pctldev, unsigned int pin,
 			    unsigned long *config)
 {
 	struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);

@@ -599,11 +599,11 @@ static int intel_config_get(struct pinctrl_dev *pctldev, unsigned pin,
 	return 0;
 }

-static int intel_config_set_pull(struct intel_pinctrl *pctrl, unsigned pin,
|
||||
static int intel_config_set_pull(struct intel_pinctrl *pctrl, unsigned int pin,
|
||||
unsigned long config)
|
||||
{
|
||||
unsigned param = pinconf_to_config_param(config);
|
||||
unsigned arg = pinconf_to_config_argument(config);
|
||||
unsigned int param = pinconf_to_config_param(config);
|
||||
unsigned int arg = pinconf_to_config_argument(config);
|
||||
const struct intel_community *community;
|
||||
void __iomem *padcfg1;
|
||||
unsigned long flags;
|
||||
@@ -685,8 +685,8 @@ static int intel_config_set_pull(struct intel_pinctrl *pctrl, unsigned pin,
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int intel_config_set_debounce(struct intel_pinctrl *pctrl, unsigned pin,
|
||||
unsigned debounce)
|
||||
static int intel_config_set_debounce(struct intel_pinctrl *pctrl,
|
||||
unsigned int pin, unsigned int debounce)
|
||||
{
|
||||
void __iomem *padcfg0, *padcfg2;
|
||||
unsigned long flags;
|
||||
@@ -732,8 +732,8 @@ static int intel_config_set_debounce(struct intel_pinctrl *pctrl, unsigned pin,
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int intel_config_set(struct pinctrl_dev *pctldev, unsigned pin,
|
||||
unsigned long *configs, unsigned nconfigs)
|
||||
static int intel_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
|
||||
unsigned long *configs, unsigned int nconfigs)
|
||||
{
|
||||
struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
|
||||
int i, ret;
|
||||
@@ -790,7 +790,7 @@ static const struct pinctrl_desc intel_pinctrl_desc = {
|
||||
* automatically translated to pinctrl pin number. This function can be
|
||||
* used to find out the corresponding pinctrl pin.
|
||||
*/
|
||||
static int intel_gpio_to_pin(struct intel_pinctrl *pctrl, unsigned offset,
|
||||
static int intel_gpio_to_pin(struct intel_pinctrl *pctrl, unsigned int offset,
|
||||
const struct intel_community **community,
|
||||
const struct intel_padgroup **padgrp)
|
||||
{
|
||||
@@ -824,7 +824,7 @@ static int intel_gpio_to_pin(struct intel_pinctrl *pctrl, unsigned offset,
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
static int intel_gpio_get(struct gpio_chip *chip, unsigned offset)
|
||||
static int intel_gpio_get(struct gpio_chip *chip, unsigned int offset)
|
||||
{
|
||||
struct intel_pinctrl *pctrl = gpiochip_get_data(chip);
|
||||
void __iomem *reg;
|
||||
@@ -846,7 +846,8 @@ static int intel_gpio_get(struct gpio_chip *chip, unsigned offset)
|
||||
return !!(padcfg0 & PADCFG0_GPIORXSTATE);
|
||||
}
|
||||
|
||||
static void intel_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
|
||||
static void intel_gpio_set(struct gpio_chip *chip, unsigned int offset,
|
||||
int value)
|
||||
{
|
||||
struct intel_pinctrl *pctrl = gpiochip_get_data(chip);
|
||||
unsigned long flags;
|
||||
@@ -895,12 +896,12 @@ static int intel_gpio_get_direction(struct gpio_chip *chip, unsigned int offset)
|
||||
return !!(padcfg0 & PADCFG0_GPIOTXDIS);
|
||||
}
|
||||
|
||||
static int intel_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
|
||||
static int intel_gpio_direction_input(struct gpio_chip *chip, unsigned int offset)
|
||||
{
|
||||
return pinctrl_gpio_direction_input(chip->base + offset);
|
||||
}
|
||||
|
||||
static int intel_gpio_direction_output(struct gpio_chip *chip, unsigned offset,
|
||||
static int intel_gpio_direction_output(struct gpio_chip *chip, unsigned int offset,
|
||||
int value)
|
||||
{
|
||||
intel_gpio_set(chip, offset, value);
|
||||
@@ -929,7 +930,7 @@ static void intel_gpio_irq_ack(struct irq_data *d)
|
||||
|
||||
pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), &community, &padgrp);
|
||||
if (pin >= 0) {
|
||||
unsigned gpp, gpp_offset, is_offset;
|
||||
unsigned int gpp, gpp_offset, is_offset;
|
||||
|
||||
gpp = padgrp->reg_num;
|
||||
gpp_offset = padgroup_offset(padgrp, pin);
|
||||
@@ -951,7 +952,7 @@ static void intel_gpio_irq_enable(struct irq_data *d)
|
||||
|
||||
pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), &community, &padgrp);
|
||||
if (pin >= 0) {
|
||||
unsigned gpp, gpp_offset, is_offset;
|
||||
unsigned int gpp, gpp_offset, is_offset;
|
||||
unsigned long flags;
|
||||
u32 value;
|
||||
|
||||
@@ -980,7 +981,7 @@ static void intel_gpio_irq_mask_unmask(struct irq_data *d, bool mask)
|
||||
|
||||
pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), &community, &padgrp);
|
||||
if (pin >= 0) {
|
||||
unsigned gpp, gpp_offset;
|
||||
unsigned int gpp, gpp_offset;
|
||||
unsigned long flags;
|
||||
void __iomem *reg;
|
||||
u32 value;
|
||||
@@ -1011,11 +1012,11 @@ static void intel_gpio_irq_unmask(struct irq_data *d)
|
||||
intel_gpio_irq_mask_unmask(d, false);
|
||||
}
|
||||
|
||||
static int intel_gpio_irq_type(struct irq_data *d, unsigned type)
|
||||
static int intel_gpio_irq_type(struct irq_data *d, unsigned int type)
|
||||
{
|
||||
struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
|
||||
struct intel_pinctrl *pctrl = gpiochip_get_data(gc);
|
||||
unsigned pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), NULL, NULL);
|
||||
unsigned int pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), NULL, NULL);
|
||||
unsigned long flags;
|
||||
void __iomem *reg;
|
||||
u32 value;
|
||||
@@ -1072,7 +1073,7 @@ static int intel_gpio_irq_wake(struct irq_data *d, unsigned int on)
|
||||
{
|
||||
struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
|
||||
struct intel_pinctrl *pctrl = gpiochip_get_data(gc);
|
||||
unsigned pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), NULL, NULL);
|
||||
unsigned int pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), NULL, NULL);
|
||||
|
||||
if (on)
|
||||
enable_irq_wake(pctrl->irq);
|
||||
@@ -1167,7 +1168,7 @@ static int intel_gpio_add_pin_ranges(struct intel_pinctrl *pctrl,
|
||||
static unsigned intel_gpio_ngpio(const struct intel_pinctrl *pctrl)
|
||||
{
|
||||
const struct intel_community *community;
|
||||
unsigned ngpio = 0;
|
||||
unsigned int ngpio = 0;
|
||||
int i, j;
|
||||
|
||||
for (i = 0; i < pctrl->ncommunities; i++) {
|
||||
@@ -1243,8 +1244,8 @@ static int intel_pinctrl_add_padgroups(struct intel_pinctrl *pctrl,
|
||||
struct intel_community *community)
|
||||
{
|
||||
struct intel_padgroup *gpps;
|
||||
unsigned npins = community->npins;
|
||||
unsigned padown_num = 0;
|
||||
unsigned int npins = community->npins;
|
||||
unsigned int padown_num = 0;
|
||||
size_t ngpps, i;
|
||||
|
||||
if (community->gpps)
|
||||
@@ -1260,7 +1261,7 @@ static int intel_pinctrl_add_padgroups(struct intel_pinctrl *pctrl,
|
||||
if (community->gpps) {
|
||||
gpps[i] = community->gpps[i];
|
||||
} else {
|
||||
unsigned gpp_size = community->gpp_size;
|
||||
unsigned int gpp_size = community->gpp_size;
|
||||
|
||||
gpps[i].reg_num = i;
|
||||
gpps[i].base = community->pin_base + i * gpp_size;
|
||||
@@ -1431,7 +1432,13 @@ int intel_pinctrl_probe(struct platform_device *pdev,
|
||||
EXPORT_SYMBOL_GPL(intel_pinctrl_probe);
|
||||
|
||||
#ifdef CONFIG_PM_SLEEP
|
||||
static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned pin)
|
||||
static bool __intel_gpio_is_direct_irq(u32 value)
|
||||
{
|
||||
return (value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) &&
|
||||
(__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO);
|
||||
}
|
||||
|
||||
static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int pin)
|
||||
{
|
||||
const struct pin_desc *pd = pin_desc_get(pctrl->pctldev, pin);
|
||||
u32 value;
|
||||
@@ -1464,8 +1471,7 @@ static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned pin)
|
||||
* See https://bugzilla.kernel.org/show_bug.cgi?id=214749.
|
||||
*/
|
||||
value = readl(intel_get_padcfg(pctrl, pin, PADCFG0));
|
||||
if ((value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) &&
|
||||
(__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO))
|
||||
if (__intel_gpio_is_direct_irq(value))
|
||||
return true;
|
||||
|
||||
return false;
|
||||
@@ -1502,7 +1508,7 @@ int intel_pinctrl_suspend(struct device *dev)
|
||||
for (i = 0; i < pctrl->ncommunities; i++) {
|
||||
struct intel_community *community = &pctrl->communities[i];
|
||||
void __iomem *base;
|
||||
unsigned gpp;
|
||||
unsigned int gpp;
|
||||
|
||||
base = community->regs + community->ie_offset;
|
||||
for (gpp = 0; gpp < community->ngpps; gpp++)
|
||||
@@ -1520,7 +1526,7 @@ static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
|
||||
for (i = 0; i < pctrl->ncommunities; i++) {
|
||||
const struct intel_community *community;
|
||||
void __iomem *base;
|
||||
unsigned gpp;
|
||||
unsigned int gpp;
|
||||
|
||||
community = &pctrl->communities[i];
|
||||
base = community->regs;
|
||||
@@ -1550,7 +1556,12 @@ int intel_pinctrl_resume(struct device *dev)
|
||||
void __iomem *padcfg;
|
||||
u32 val;
|
||||
|
||||
if (!intel_pinctrl_should_save(pctrl, desc->number))
|
||||
if (!(intel_pinctrl_should_save(pctrl, desc->number) ||
|
||||
/*
|
||||
* If the firmware mangled the register contents too much,
|
||||
* check the saved value for the Direct IRQ mode.
|
||||
*/
|
||||
__intel_gpio_is_direct_irq(pads[i].padcfg0)))
|
||||
continue;
|
||||
|
||||
padcfg = intel_get_padcfg(pctrl, desc->number, PADCFG0);
|
||||
@@ -1584,7 +1595,7 @@ int intel_pinctrl_resume(struct device *dev)
|
||||
for (i = 0; i < pctrl->ncommunities; i++) {
|
||||
struct intel_community *community = &pctrl->communities[i];
|
||||
void __iomem *base;
|
||||
unsigned gpp;
|
||||
unsigned int gpp;
|
||||
|
||||
base = community->regs + community->ie_offset;
|
||||
for (gpp = 0; gpp < community->ngpps; gpp++) {
|
||||
|
||||
@@ -25,10 +25,10 @@ struct device;
  */
 struct intel_pingroup {
 	const char *name;
-	const unsigned *pins;
+	const unsigned int *pins;
 	size_t npins;
 	unsigned short mode;
-	const unsigned *modes;
+	const unsigned int *modes;
 };
 
 /**
@@ -56,11 +56,11 @@ struct intel_function {
  * to specify them.
  */
 struct intel_padgroup {
-	unsigned reg_num;
-	unsigned base;
-	unsigned size;
+	unsigned int reg_num;
+	unsigned int base;
+	unsigned int size;
 	int gpio_base;
-	unsigned padown_num;
+	unsigned int padown_num;
 };
 
 /**
@@ -96,17 +96,17 @@ struct intel_padgroup {
  * pass custom @gpps and @ngpps instead.
  */
 struct intel_community {
-	unsigned barno;
-	unsigned padown_offset;
-	unsigned padcfglock_offset;
-	unsigned hostown_offset;
-	unsigned is_offset;
-	unsigned ie_offset;
-	unsigned pin_base;
-	unsigned gpp_size;
-	unsigned gpp_num_padown_regs;
+	unsigned int barno;
+	unsigned int padown_offset;
+	unsigned int padcfglock_offset;
+	unsigned int hostown_offset;
+	unsigned int is_offset;
+	unsigned int ie_offset;
+	unsigned int pin_base;
+	unsigned int gpp_size;
+	unsigned int gpp_num_padown_regs;
 	size_t npins;
-	unsigned features;
+	unsigned int features;
 	const struct intel_padgroup *gpps;
 	size_t ngpps;
 	/* Reserved for the core driver */
@@ -345,6 +345,8 @@ static int pcs_set_mux(struct pinctrl_dev *pctldev, unsigned fselector,
 	if (!pcs->fmask)
 		return 0;
 	function = pinmux_generic_get_function(pctldev, fselector);
+	if (!function)
+		return -EINVAL;
 	func = function->data;
 	if (!func)
 		return -EINVAL;
@@ -775,7 +775,7 @@ static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost,
 				       enum iscsi_host_param param, char *buf)
 {
 	struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(shost);
-	struct iscsi_session *session = tcp_sw_host->session;
+	struct iscsi_session *session;
 	struct iscsi_conn *conn;
 	struct iscsi_tcp_conn *tcp_conn;
 	struct iscsi_sw_tcp_conn *tcp_sw_conn;
@@ -784,6 +784,7 @@ static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost,
 
 	switch (param) {
 	case ISCSI_HOST_PARAM_IPADDRESS:
+		session = tcp_sw_host->session;
 		if (!session)
 			return -ENOTCONN;
 
@@ -872,12 +873,14 @@ iscsi_sw_tcp_session_create(struct iscsi_endpoint *ep, uint16_t cmds_max,
 	if (!cls_session)
 		goto remove_host;
 	session = cls_session->dd_data;
-	tcp_sw_host = iscsi_host_priv(shost);
-	tcp_sw_host->session = session;
 
 	shost->can_queue = session->scsi_cmds_max;
 	if (iscsi_tcp_r2tpool_alloc(session))
 		goto remove_session;
+
+	/* We are now fully setup so expose the session to sysfs. */
+	tcp_sw_host = iscsi_host_priv(shost);
+	tcp_sw_host->session = session;
 	return cls_session;
 
 remove_session:
@@ -95,8 +95,8 @@ static bool __target_check_io_state(struct se_cmd *se_cmd,
 {
 	struct se_session *sess = se_cmd->se_sess;
 
-	assert_spin_locked(&sess->sess_cmd_lock);
-	WARN_ON_ONCE(!irqs_disabled());
+	lockdep_assert_held(&sess->sess_cmd_lock);
+
 	/*
 	 * If command already reached CMD_T_COMPLETE state within
 	 * target_complete_cmd() or CMD_T_FABRIC_STOP due to shutdown,
@@ -52,11 +52,13 @@ static int int340x_thermal_get_trip_temp(struct thermal_zone_device *zone,
 					 int trip, int *temp)
 {
 	struct int34x_thermal_zone *d = zone->devdata;
-	int i;
+	int i, ret = 0;
 
 	if (d->override_ops && d->override_ops->get_trip_temp)
 		return d->override_ops->get_trip_temp(zone, trip, temp);
 
+	mutex_lock(&d->trip_mutex);
+
 	if (trip < d->aux_trip_nr)
 		*temp = d->aux_trips[trip];
 	else if (trip == d->crt_trip_id)
@@ -74,10 +76,12 @@ static int int340x_thermal_get_trip_temp(struct thermal_zone_device *zone,
 			}
 		}
 		if (i == INT340X_THERMAL_MAX_ACT_TRIP_COUNT)
-			return -EINVAL;
+			ret = -EINVAL;
 	}
 
-	return 0;
+	mutex_unlock(&d->trip_mutex);
+
+	return ret;
 }
 
 static int int340x_thermal_get_trip_type(struct thermal_zone_device *zone,
@@ -85,11 +89,13 @@ static int int340x_thermal_get_trip_type(struct thermal_zone_device *zone,
 					 enum thermal_trip_type *type)
 {
 	struct int34x_thermal_zone *d = zone->devdata;
-	int i;
+	int i, ret = 0;
 
 	if (d->override_ops && d->override_ops->get_trip_type)
 		return d->override_ops->get_trip_type(zone, trip, type);
 
+	mutex_lock(&d->trip_mutex);
+
 	if (trip < d->aux_trip_nr)
 		*type = THERMAL_TRIP_PASSIVE;
 	else if (trip == d->crt_trip_id)
@@ -107,10 +113,12 @@ static int int340x_thermal_get_trip_type(struct thermal_zone_device *zone,
 			}
 		}
 		if (i == INT340X_THERMAL_MAX_ACT_TRIP_COUNT)
-			return -EINVAL;
+			ret = -EINVAL;
 	}
 
-	return 0;
+	mutex_unlock(&d->trip_mutex);
+
+	return ret;
 }
 
 static int int340x_thermal_set_trip_temp(struct thermal_zone_device *zone,
@@ -182,6 +190,8 @@ int int340x_thermal_read_trips(struct int34x_thermal_zone *int34x_zone)
 	int trip_cnt = int34x_zone->aux_trip_nr;
 	int i;
 
+	mutex_lock(&int34x_zone->trip_mutex);
+
 	int34x_zone->crt_trip_id = -1;
 	if (!int340x_thermal_get_trip_config(int34x_zone->adev->handle, "_CRT",
 					     &int34x_zone->crt_temp))
@@ -209,6 +219,8 @@ int int340x_thermal_read_trips(struct int34x_thermal_zone *int34x_zone)
 		int34x_zone->act_trips[i].valid = true;
 	}
 
+	mutex_unlock(&int34x_zone->trip_mutex);
+
 	return trip_cnt;
 }
 EXPORT_SYMBOL_GPL(int340x_thermal_read_trips);
@@ -232,6 +244,8 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev,
 	if (!int34x_thermal_zone)
 		return ERR_PTR(-ENOMEM);
 
+	mutex_init(&int34x_thermal_zone->trip_mutex);
+
 	int34x_thermal_zone->adev = adev;
 	int34x_thermal_zone->override_ops = override_ops;
 
@@ -274,6 +288,7 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev,
 	acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table);
 	kfree(int34x_thermal_zone->aux_trips);
 err_trip_alloc:
+	mutex_destroy(&int34x_thermal_zone->trip_mutex);
 	kfree(int34x_thermal_zone);
 	return ERR_PTR(ret);
 }
@@ -285,6 +300,7 @@ void int340x_thermal_zone_remove(struct int34x_thermal_zone
 	thermal_zone_device_unregister(int34x_thermal_zone->zone);
 	acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table);
 	kfree(int34x_thermal_zone->aux_trips);
+	mutex_destroy(&int34x_thermal_zone->trip_mutex);
 	kfree(int34x_thermal_zone);
 }
 EXPORT_SYMBOL_GPL(int340x_thermal_zone_remove);
 
@@ -41,6 +41,7 @@ struct int34x_thermal_zone {
 	struct thermal_zone_device_ops *override_ops;
 	void *priv_data;
 	struct acpi_lpat_conversion_table *lpat_table;
+	struct mutex trip_mutex;
 };
 
 struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *,
@@ -48,19 +48,39 @@ static void __dma_rx_complete(void *param)
 	struct uart_8250_dma *dma = p->dma;
 	struct tty_port *tty_port = &p->port.state->port;
 	struct dma_tx_state state;
+	enum dma_status dma_status;
 	int count;
 
-	dma->rx_running = 0;
-	dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
+	/*
+	 * New DMA Rx can be started during the completion handler before it
+	 * could acquire port's lock and it might still be ongoing. Don't do
+	 * anything in such case.
+	 */
+	dma_status = dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
+	if (dma_status == DMA_IN_PROGRESS)
+		return;
 
 	count = dma->rx_size - state.residue;
 
 	tty_insert_flip_string(tty_port, dma->rx_buf, count);
 	p->port.icount.rx += count;
+	dma->rx_running = 0;
 
 	tty_flip_buffer_push(tty_port);
 }
 
+static void dma_rx_complete(void *param)
+{
+	struct uart_8250_port *p = param;
+	struct uart_8250_dma *dma = p->dma;
+	unsigned long flags;
+
+	spin_lock_irqsave(&p->port.lock, flags);
+	if (dma->rx_running)
+		__dma_rx_complete(p);
+	spin_unlock_irqrestore(&p->port.lock, flags);
+}
+
 int serial8250_tx_dma(struct uart_8250_port *p)
 {
 	struct uart_8250_dma *dma = p->dma;
@@ -126,7 +146,7 @@ int serial8250_rx_dma(struct uart_8250_port *p)
 		return -EBUSY;
 
 	dma->rx_running = 1;
-	desc->callback = __dma_rx_complete;
+	desc->callback = dma_rx_complete;
 	desc->callback_param = p;
 
 	dma->rx_cookie = dmaengine_submit(desc);
@@ -247,10 +247,6 @@ vcs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 
 	uni_mode = use_unicode(inode);
 	attr = use_attributes(inode);
-	ret = -ENXIO;
-	vc = vcs_vc(inode, &viewed);
-	if (!vc)
-		goto unlock_out;
 
 	ret = -EINVAL;
 	if (pos < 0)
@@ -270,6 +266,12 @@ vcs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 		ssize_t orig_count;
 		long p = pos;
 
+		vc = vcs_vc(inode, &viewed);
+		if (!vc) {
+			ret = -ENXIO;
+			break;
+		}
+
 		/* Check whether we are above size each round,
 		 * as copy_to_user at the end of this loop
 		 * could sleep.
@@ -2357,9 +2357,8 @@ static int usb_enumerate_device_otg(struct usb_device *udev)
  * usb_enumerate_device - Read device configs/intfs/otg (usbcore-internal)
  * @udev: newly addressed device (in ADDRESS state)
  *
- * This is only called by usb_new_device() and usb_authorize_device()
- * and FIXME -- all comments that apply to them apply here wrt to
- * environment.
+ * This is only called by usb_new_device() -- all comments that apply there
+ * apply here wrt to environment.
  *
  * If the device is WUSB and not authorized, we don't attempt to read
 * the string descriptors, as they will be errored out by the device
@@ -527,6 +527,9 @@ static const struct usb_device_id usb_quirk_list[] = {
 	/* DJI CineSSD */
 	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
 
+	/* Alcor Link AK9563 SC Reader used in 2022 Lenovo ThinkPads */
+	{ USB_DEVICE(0x2ce3, 0x9563), .driver_info = USB_QUIRK_NO_LPM },
+
 	/* DELL USB GEN2 */
 	{ USB_DEVICE(0x413c, 0xb062), .driver_info = USB_QUIRK_NO_LPM | USB_QUIRK_RESET_RESUME },
 
@@ -888,11 +888,7 @@ read_descriptors(struct file *filp, struct kobject *kobj,
 	size_t srclen, n;
 	int cfgno;
 	void *src;
-	int retval;
 
-	retval = usb_lock_device_interruptible(udev);
-	if (retval < 0)
-		return -EINTR;
 	/* The binary attribute begins with the device descriptor.
 	 * Following that are the raw descriptor entries for all the
 	 * configurations (config plus subsidiary descriptors).
@@ -917,7 +913,6 @@ read_descriptors(struct file *filp, struct kobject *kobj,
 			off -= srclen;
 		}
 	}
-	usb_unlock_device(udev);
 	return count - nleft;
 }
 
@@ -85,7 +85,7 @@ static inline void dwc3_qcom_clrbits(void __iomem *base, u32 offset, u32 val)
 	readl(base + offset);
 }
 
-static void dwc3_qcom_vbus_overrride_enable(struct dwc3_qcom *qcom, bool enable)
+static void dwc3_qcom_vbus_override_enable(struct dwc3_qcom *qcom, bool enable)
 {
 	if (enable) {
 		dwc3_qcom_setbits(qcom->qscratch_base, QSCRATCH_SS_PHY_CTRL,
@@ -106,7 +106,7 @@ static int dwc3_qcom_vbus_notifier(struct notifier_block *nb,
 	struct dwc3_qcom *qcom = container_of(nb, struct dwc3_qcom, vbus_nb);
 
 	/* enable vbus override for device mode */
-	dwc3_qcom_vbus_overrride_enable(qcom, event);
+	dwc3_qcom_vbus_override_enable(qcom, event);
 	qcom->mode = event ? USB_DR_MODE_PERIPHERAL : USB_DR_MODE_HOST;
 
 	return NOTIFY_DONE;
@@ -118,7 +118,7 @@ static int dwc3_qcom_host_notifier(struct notifier_block *nb,
 	struct dwc3_qcom *qcom = container_of(nb, struct dwc3_qcom, host_nb);
 
 	/* disable vbus override in host mode */
-	dwc3_qcom_vbus_overrride_enable(qcom, !event);
+	dwc3_qcom_vbus_override_enable(qcom, !event);
 	qcom->mode = event ? USB_DR_MODE_HOST : USB_DR_MODE_PERIPHERAL;
 
 	return NOTIFY_DONE;
@@ -512,8 +512,8 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
 	qcom->mode = usb_get_dr_mode(&qcom->dwc3->dev);
 
 	/* enable vbus override for device mode */
-	if (qcom->mode == USB_DR_MODE_PERIPHERAL)
-		dwc3_qcom_vbus_overrride_enable(qcom, true);
+	if (qcom->mode != USB_DR_MODE_HOST)
+		dwc3_qcom_vbus_override_enable(qcom, true);
 
 	/* register extcon to override sw_vbus on Vbus change later */
 	ret = dwc3_qcom_register_extcon(qcom);
@@ -285,8 +285,10 @@ static int __ffs_ep0_queue_wait(struct ffs_data *ffs, char *data, size_t len)
 	struct usb_request *req = ffs->ep0req;
 	int ret;
 
-	if (!req)
+	if (!req) {
+		spin_unlock_irq(&ffs->ev.waitq.lock);
 		return -EINVAL;
+	}
 
 	req->zero = len < le16_to_cpu(ffs->ev.setup.wLength);
 
@@ -119,7 +119,7 @@ config USB_MUSB_MEDIATEK
 	tristate "MediaTek platforms"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
 	depends on NOP_USB_XCEIV
-	depends on GENERIC_PHY
+	select GENERIC_PHY
 	select USB_ROLE_SWITCH
 
 config USB_MUSB_AM335X_CHILD
@@ -513,8 +513,8 @@ static int mtk_musb_probe(struct platform_device *pdev)
 
 	glue->xceiv = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2);
 	if (IS_ERR(glue->xceiv)) {
-		dev_err(dev, "fail to getting usb-phy %d\n", ret);
 		ret = PTR_ERR(glue->xceiv);
+		dev_err(dev, "fail to getting usb-phy %d\n", ret);
 		goto err_unregister_usb_phy;
 	}
 
@@ -402,6 +402,8 @@ static void option_instat_callback(struct urb *urb);
 #define LONGCHEER_VENDOR_ID			0x1c9e
 
 /* 4G Systems products */
+/* This one was sold as the VW and Skoda "Carstick LTE" */
+#define FOUR_G_SYSTEMS_PRODUCT_CARSTICK_LTE	0x7605
 /* This is the 4G XS Stick W14 a.k.a. Mobilcom Debitel Surf-Stick *
  * It seems to contain a Qualcomm QSC6240/6290 chipset            */
 #define FOUR_G_SYSTEMS_PRODUCT_W14		0x9603
@@ -1976,6 +1978,8 @@ static const struct usb_device_id option_ids[] = {
 	  .driver_info = RSVD(2) },
 	{ USB_DEVICE(AIRPLUS_VENDOR_ID, AIRPLUS_PRODUCT_MCD650) },
 	{ USB_DEVICE(TLAYTECH_VENDOR_ID, TLAYTECH_PRODUCT_TEU800) },
+	{ USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_CARSTICK_LTE),
+	  .driver_info = RSVD(0) },
 	{ USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14),
 	  .driver_info = NCTRL(0) | NCTRL(1) },
 	{ USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W100),
@@ -526,10 +526,10 @@ static int dp_altmode_probe(struct typec_altmode *alt)
 	/* FIXME: Port can only be DFP_U. */
 
 	/* Make sure we have compatible pin configurations */
-	if (!(DP_CAP_DFP_D_PIN_ASSIGN(port->vdo) &
-	      DP_CAP_UFP_D_PIN_ASSIGN(alt->vdo)) &&
-	    !(DP_CAP_UFP_D_PIN_ASSIGN(port->vdo) &
-	      DP_CAP_DFP_D_PIN_ASSIGN(alt->vdo)))
+	if (!(DP_CAP_PIN_ASSIGN_DFP_D(port->vdo) &
+	      DP_CAP_PIN_ASSIGN_UFP_D(alt->vdo)) &&
+	    !(DP_CAP_PIN_ASSIGN_UFP_D(port->vdo) &
+	      DP_CAP_PIN_ASSIGN_DFP_D(alt->vdo)))
 		return -ENODEV;
 
 	ret = sysfs_create_group(&alt->dev.kobj, &dp_altmode_group);
@@ -189,6 +189,7 @@ static void *typec_mux_match(struct device_connection *con, int ep, void *data)
 	bool match;
 	int nval;
 	u16 *val;
+	int ret;
 	int i;
 
 	if (!con->fwnode) {
@@ -223,10 +224,10 @@ static void *typec_mux_match(struct device_connection *con, int ep, void *data)
 	if (!val)
 		return ERR_PTR(-ENOMEM);
 
-	nval = fwnode_property_read_u16_array(con->fwnode, "svid", val, nval);
-	if (nval < 0) {
+	ret = fwnode_property_read_u16_array(con->fwnode, "svid", val, nval);
+	if (ret < 0) {
 		kfree(val);
-		return ERR_PTR(nval);
+		return ERR_PTR(ret);
 	}
 
 	for (i = 0; i < nval; i++) {
@@ -243,7 +244,7 @@ static void *typec_mux_match(struct device_connection *con, int ep, void *data)
 	dev = class_find_device(&typec_mux_class, NULL, con->fwnode,
 				mux_fwnode_match);
 
-	return dev ? to_typec_switch(dev) : ERR_PTR(-EPROBE_DEFER);
+	return dev ? to_typec_mux(dev) : ERR_PTR(-EPROBE_DEFER);
 }
 
 /**
@@ -2475,9 +2475,12 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font,
 	    h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres))
 		return -EINVAL;
 
+	if (font->width > 32 || font->height > 32)
+		return -EINVAL;
+
 	/* Make sure drawing engine can handle the font */
-	if (!(info->pixmap.blit_x & (1 << (font->width - 1))) ||
-	    !(info->pixmap.blit_y & (1 << (font->height - 1))))
+	if (!(info->pixmap.blit_x & BIT(font->width - 1)) ||
+	    !(info->pixmap.blit_y & BIT(font->height - 1)))
 		return -EINVAL;
 
 	/* Make sure driver can handle the font length */
@@ -88,7 +88,7 @@ static int __diag288(unsigned int func, unsigned int timeout,
 		"1:\n"
 		EX_TABLE(0b, 1b)
 		: "+d" (err) : "d"(__func), "d"(__timeout),
-		  "d"(__action), "d"(__len) : "1", "cc");
+		  "d"(__action), "d"(__len) : "1", "cc", "memory");
 	return err;
 }
 
@@ -274,12 +274,21 @@ static int __init diag288_init(void)
 	char ebc_begin[] = {
 		194, 197, 199, 201, 213
 	};
+	char *ebc_cmd;
 
 	watchdog_set_nowayout(&wdt_dev, nowayout_info);
 
 	if (MACHINE_IS_VM) {
-		if (__diag288_vm(WDT_FUNC_INIT, 15,
-				 ebc_begin, sizeof(ebc_begin)) != 0) {
+		ebc_cmd = kmalloc(sizeof(ebc_begin), GFP_KERNEL);
+		if (!ebc_cmd) {
+			pr_err("The watchdog cannot be initialized\n");
+			return -ENOMEM;
+		}
+		memcpy(ebc_cmd, ebc_begin, sizeof(ebc_begin));
+		ret = __diag288_vm(WDT_FUNC_INIT, 15,
+				   ebc_cmd, sizeof(ebc_begin));
+		kfree(ebc_cmd);
+		if (ret != 0) {
 			pr_err("The watchdog cannot be initialized\n");
 			return -EINVAL;
 		}
fs/aio.c
@@ -332,6 +332,9 @@ static int aio_ring_mremap(struct vm_area_struct *vma)
 	spin_lock(&mm->ioctx_lock);
 	rcu_read_lock();
 	table = rcu_dereference(mm->ioctx_table);
+	if (!table)
+		goto out_unlock;
+
 	for (i = 0; i < table->nr; i++) {
 		struct kioctx *ctx;
 
@@ -345,6 +348,7 @@ static int aio_ring_mremap(struct vm_area_struct *vma)
 		}
 	}
 
+out_unlock:
 	rcu_read_unlock();
 	spin_unlock(&mm->ioctx_lock);
 	return res;
@@ -6826,10 +6826,10 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
 	/*
 	 * Check that we don't overflow at later allocations, we request
 	 * clone_sources_count + 1 items, and compare to unsigned long inside
-	 * access_ok.
+	 * access_ok. Also set an upper limit for allocation size so this can't
+	 * easily exhaust memory. Max number of clone sources is about 200K.
 	 */
-	if (arg->clone_sources_count >
-	    ULONG_MAX / sizeof(struct clone_root) - 1) {
+	if (arg->clone_sources_count > SZ_8M / sizeof(struct clone_root)) {
 		ret = -EINVAL;
 		goto out;
 	}