Merge 4.19.41 into android-4.19
Changes in 4.19.41
iwlwifi: fix driver operation for 5350
mwifiex: Make resume actually do something useful again on SDIO cards
mac80211: don't attempt to rename ERR_PTR() debugfs dirs
i2c: synquacer: fix enumeration of slave devices
i2c: imx: correct the method of getting private data in notifier_call
i2c: Remove unnecessary call to irq_find_mapping
i2c: Clear client->irq in i2c_device_remove
i2c: Allow recovery of the initial IRQ by an I2C client device.
i2c: Prevent runtime suspend of adapter when Host Notify is required
ALSA: hda/realtek - Add new Dell platform for headset mode
ALSA: hda/realtek - Fixed Dell AIO speaker noise
ALSA: hda/realtek - Apply the fixup for ASUS Q325UAR
USB: yurex: Fix protection fault after device removal
USB: w1 ds2490: Fix bug caused by improper use of altsetting array
USB: dummy-hcd: Fix failure to give back unlinked URBs
usb: usbip: fix isoc packet num validation in get_pipe
USB: core: Fix unterminated string returned by usb_string()
USB: core: Fix bug caused by duplicate interface PM usage counter
nvme-loop: init nvmet_ctrl fatal_err_work when allocate
efi: Fix debugobjects warning on 'efi_rts_work'
arm64: dts: rockchip: fix rk3328-roc-cc gmac2io tx/rx_delay
HID: logitech: check the return value of create_singlethread_workqueue
HID: debug: fix race condition with between rdesc_show() and device removal
rtc: cros-ec: Fail suspend/resume if wake IRQ can't be configured
rtc: sh: Fix invalid alarm warning for non-enabled alarm
batman-adv: Reduce claim hash refcnt only for removed entry
batman-adv: Reduce tt_local hash refcnt only for removed entry
batman-adv: Reduce tt_global hash refcnt only for removed entry
batman-adv: fix warning in function batadv_v_elp_get_throughput
ARM: dts: rockchip: Fix gpu opp node names for rk3288
reset: meson-audio-arb: Fix missing .owner setting of reset_controller_dev
igb: Fix WARN_ONCE on runtime suspend
riscv: fix accessing 8-byte variable from RV32
HID: quirks: Fix keyboard + touchpad on Lenovo Miix 630
net: hns3: fix compile error
net/mlx5: E-Switch, Fix esw manager vport indication for more vport commands
bonding: show full hw address in sysfs for slave entries
net: stmmac: use correct DMA buffer size in the RX descriptor
net: stmmac: ratelimit RX error logs
net: stmmac: don't stop NAPI processing when dropping a packet
net: stmmac: don't overwrite discard_frame status
net: stmmac: fix dropping of multi-descriptor RX frames
net: stmmac: don't log oversized frames
jffs2: fix use-after-free on symlink traversal
debugfs: fix use-after-free on symlink traversal
mfd: twl-core: Disable IRQ while suspended
block: use blk_free_flush_queue() to free hctx->fq in blk_mq_init_hctx
rtc: da9063: set uie_unsupported when relevant
HID: input: add mapping for Assistant key
vfio/pci: use correct format characters
scsi: core: add new RDAC LENOVO/DE_Series device
scsi: storvsc: Fix calculation of sub-channel count
arm/mach-at91/pm : fix possible object reference leak
arm64: fix wrong check of on_sdei_stack in nmi context
net: hns: fix KASAN: use-after-free in hns_nic_net_xmit_hw()
net: hns: Use NAPI_POLL_WEIGHT for hns driver
net: hns: Fix probabilistic memory overwrite when HNS driver initialized
net: hns: fix ICMP6 neighbor solicitation messages discard problem
net: hns: Fix WARNING when remove HNS driver with SMMU enabled
libcxgb: fix incorrect ppmax calculation
KVM: SVM: prevent DBG_DECRYPT and DBG_ENCRYPT overflow
kmemleak: powerpc: skip scanning holes in the .bss section
hugetlbfs: fix memory leak for resv_map
sh: fix multiple function definition build errors
xsysace: Fix error handling in ace_setup
fs: stream_open - opener for stream-like files so that read and write can run simultaneously without deadlock
ARM: orion: don't use using 64-bit DMA masks
ARM: iop: don't use using 64-bit DMA masks
block: pass no-op callback to INIT_WORK().
perf/x86/amd: Update generic hardware cache events for Family 17h
Bluetooth: btusb: request wake pin with NOAUTOEN
Bluetooth: mediatek: fix up an error path to restore bdev->tx_state
clk: qcom: Add missing freq for usb30_master_clk on 8998
staging: iio: adt7316: allow adt751x to use internal vref for all dacs
staging: iio: adt7316: fix the dac read calculation
staging: iio: adt7316: fix the dac write calculation
scsi: RDMA/srpt: Fix a credit leak for aborted commands
ASoC: Intel: bytcr_rt5651: Revert "Fix DMIC map headsetmic mapping"
ASoC: wm_adsp: Correct handling of compressed streams that restart
ASoC: stm32: fix sai driver name initialisation
platform/x86: intel_pmc_core: Fix PCH IP name
platform/x86: intel_pmc_core: Handle CFL regmap properly
IB/core: Unregister notifier before freeing MAD security
IB/core: Fix potential memory leak while creating MAD agents
IB/core: Destroy QP if XRC QP fails
Input: snvs_pwrkey - initialize necessary driver data before enabling IRQ
Input: stmfts - acknowledge that setting brightness is a blocking call
gpio: mxc: add check to return defer probe if clock tree NOT ready
selinux: avoid silent denials in permissive mode under RCU walk
selinux: never allow relabeling on context mounts
mac80211: Honor SW_CRYPTO_CONTROL for unicast keys in AP VLAN mode
powerpc/mm/hash: Handle mmap_min_addr correctly in get_unmapped_area topdown search
x86/mce: Improve error message when kernel cannot recover, p2
clk: x86: Add system specific quirk to mark clocks as critical
x86/mm/KASLR: Fix the size of the direct mapping section
x86/mm: Fix a crash with kmemleak_scan()
x86/mm/tlb: Revert "x86/mm: Align TLB invalidation info"
i2c: i2c-stm32f7: Fix SDADEL minimum formula
media: v4l2: i2c: ov7670: Fix PLL bypass register values
ASoC: wm_adsp: Check for buffer in trigger stop
mm/kmemleak.c: fix unused-function warning
Linux 4.19.41

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -370,11 +370,15 @@ autosuspend the interface's device. When the usage counter is = 0
then the interface is considered to be idle, and the kernel may
autosuspend the device.

Drivers need not be concerned about balancing changes to the usage
counter; the USB core will undo any remaining "get"s when a driver
is unbound from its interface. As a corollary, drivers must not call
any of the ``usb_autopm_*`` functions after their ``disconnect``
routine has returned.
Drivers must be careful to balance their overall changes to the usage
counter. Unbalanced "get"s will remain in effect when a driver is
unbound from its interface, preventing the device from going into
runtime suspend should the interface be bound to a driver again. On
the other hand, drivers are allowed to achieve this balance by calling
the ``usb_autopm_*`` functions even after their ``disconnect`` routine
has returned -- say from within a work-queue routine -- provided they
retain an active reference to the interface (via ``usb_get_intf`` and
``usb_put_intf``).

Drivers using the async routines are responsible for their own
synchronization and mutual exclusion.
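A minimal sketch of the balancing rule described above (the driver and its
``mydrv_submit_io()`` helper are hypothetical; only the ``usb_autopm_*``
calls are the real API)::

	/* Keep the device awake only for the duration of the I/O;
	 * every successful "get" is matched by exactly one "put". */
	static int mydrv_do_transfer(struct usb_interface *intf)
	{
		int rc;

		rc = usb_autopm_get_interface(intf);	/* usage counter +1 */
		if (rc)
			return rc;

		rc = mydrv_submit_io(intf);		/* hypothetical helper */

		usb_autopm_put_interface(intf);		/* usage counter -1 */
		return rc;
	}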
Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 19
SUBLEVEL = 40
SUBLEVEL = 41
EXTRAVERSION =
NAME = "People's Front"
@@ -1261,27 +1261,27 @@
gpu_opp_table: gpu-opp-table {
compatible = "operating-points-v2";

opp@100000000 {
opp-100000000 {
opp-hz = /bits/ 64 <100000000>;
opp-microvolt = <950000>;
};
opp@200000000 {
opp-200000000 {
opp-hz = /bits/ 64 <200000000>;
opp-microvolt = <950000>;
};
opp@300000000 {
opp-300000000 {
opp-hz = /bits/ 64 <300000000>;
opp-microvolt = <1000000>;
};
opp@400000000 {
opp-400000000 {
opp-hz = /bits/ 64 <400000000>;
opp-microvolt = <1100000>;
};
opp@500000000 {
opp-500000000 {
opp-hz = /bits/ 64 <500000000>;
opp-microvolt = <1200000>;
};
opp@600000000 {
opp-600000000 {
opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <1250000>;
};
||||
@@ -594,13 +594,13 @@ static int __init at91_pm_backup_init(void)
|
||||
|
||||
np = of_find_compatible_node(NULL, NULL, "atmel,sama5d2-securam");
|
||||
if (!np)
|
||||
goto securam_fail;
|
||||
goto securam_fail_no_ref_dev;
|
||||
|
||||
pdev = of_find_device_by_node(np);
|
||||
of_node_put(np);
|
||||
if (!pdev) {
|
||||
pr_warn("%s: failed to find securam device!\n", __func__);
|
||||
goto securam_fail;
|
||||
goto securam_fail_no_ref_dev;
|
||||
}
|
||||
|
||||
sram_pool = gen_pool_get(&pdev->dev, NULL);
|
||||
@@ -623,6 +623,8 @@ static int __init at91_pm_backup_init(void)
|
||||
return 0;
|
||||
|
||||
securam_fail:
|
||||
put_device(&pdev->dev);
|
||||
securam_fail_no_ref_dev:
|
||||
iounmap(pm_data.sfrbu);
|
||||
pm_data.sfrbu = NULL;
|
||||
return ret;
|
||||
|
||||
@@ -300,7 +300,7 @@ static struct resource iop13xx_adma_2_resources[] = {
|
||||
}
|
||||
};
|
||||
|
||||
static u64 iop13xx_adma_dmamask = DMA_BIT_MASK(64);
|
||||
static u64 iop13xx_adma_dmamask = DMA_BIT_MASK(32);
|
||||
static struct iop_adma_platform_data iop13xx_adma_0_data = {
|
||||
.hw_id = 0,
|
||||
.pool_size = PAGE_SIZE,
|
||||
@@ -324,7 +324,7 @@ static struct platform_device iop13xx_adma_0_channel = {
|
||||
.resource = iop13xx_adma_0_resources,
|
||||
.dev = {
|
||||
.dma_mask = &iop13xx_adma_dmamask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
.platform_data = (void *) &iop13xx_adma_0_data,
|
||||
},
|
||||
};
|
||||
@@ -336,7 +336,7 @@ static struct platform_device iop13xx_adma_1_channel = {
|
||||
.resource = iop13xx_adma_1_resources,
|
||||
.dev = {
|
||||
.dma_mask = &iop13xx_adma_dmamask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
.platform_data = (void *) &iop13xx_adma_1_data,
|
||||
},
|
||||
};
|
||||
@@ -348,7 +348,7 @@ static struct platform_device iop13xx_adma_2_channel = {
|
||||
.resource = iop13xx_adma_2_resources,
|
||||
.dev = {
|
||||
.dma_mask = &iop13xx_adma_dmamask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
.platform_data = (void *) &iop13xx_adma_2_data,
|
||||
},
|
||||
};
|
||||
|
||||
@@ -152,7 +152,7 @@ static struct resource iop13xx_tpmi_3_resources[] = {
|
||||
}
|
||||
};
|
||||
|
||||
u64 iop13xx_tpmi_mask = DMA_BIT_MASK(64);
|
||||
u64 iop13xx_tpmi_mask = DMA_BIT_MASK(32);
|
||||
static struct platform_device iop13xx_tpmi_0_device = {
|
||||
.name = "iop-tpmi",
|
||||
.id = 0,
|
||||
@@ -160,7 +160,7 @@ static struct platform_device iop13xx_tpmi_0_device = {
|
||||
.resource = iop13xx_tpmi_0_resources,
|
||||
.dev = {
|
||||
.dma_mask = &iop13xx_tpmi_mask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
},
|
||||
};
|
||||
|
||||
@@ -171,7 +171,7 @@ static struct platform_device iop13xx_tpmi_1_device = {
|
||||
.resource = iop13xx_tpmi_1_resources,
|
||||
.dev = {
|
||||
.dma_mask = &iop13xx_tpmi_mask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
},
|
||||
};
|
||||
|
||||
@@ -182,7 +182,7 @@ static struct platform_device iop13xx_tpmi_2_device = {
|
||||
.resource = iop13xx_tpmi_2_resources,
|
||||
.dev = {
|
||||
.dma_mask = &iop13xx_tpmi_mask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
},
|
||||
};
|
||||
|
||||
@@ -193,7 +193,7 @@ static struct platform_device iop13xx_tpmi_3_device = {
|
||||
.resource = iop13xx_tpmi_3_resources,
|
||||
.dev = {
|
||||
.dma_mask = &iop13xx_tpmi_mask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
},
|
||||
};
|
||||
|
||||
|
||||
@@ -143,7 +143,7 @@ struct platform_device iop3xx_dma_0_channel = {
|
||||
.resource = iop3xx_dma_0_resources,
|
||||
.dev = {
|
||||
.dma_mask = &iop3xx_adma_dmamask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
.platform_data = (void *) &iop3xx_dma_0_data,
|
||||
},
|
||||
};
|
||||
@@ -155,7 +155,7 @@ struct platform_device iop3xx_dma_1_channel = {
|
||||
.resource = iop3xx_dma_1_resources,
|
||||
.dev = {
|
||||
.dma_mask = &iop3xx_adma_dmamask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
.platform_data = (void *) &iop3xx_dma_1_data,
|
||||
},
|
||||
};
|
||||
@@ -167,7 +167,7 @@ struct platform_device iop3xx_aau_channel = {
|
||||
.resource = iop3xx_aau_resources,
|
||||
.dev = {
|
||||
.dma_mask = &iop3xx_adma_dmamask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
.platform_data = (void *) &iop3xx_aau_data,
|
||||
},
|
||||
};
|
||||
|
||||
@@ -622,7 +622,7 @@ static struct platform_device orion_xor0_shared = {
|
||||
.resource = orion_xor0_shared_resources,
|
||||
.dev = {
|
||||
.dma_mask = &orion_xor_dmamask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
.platform_data = &orion_xor0_pdata,
|
||||
},
|
||||
};
|
||||
@@ -683,7 +683,7 @@ static struct platform_device orion_xor1_shared = {
|
||||
.resource = orion_xor1_shared_resources,
|
||||
.dev = {
|
||||
.dma_mask = &orion_xor_dmamask,
|
||||
.coherent_dma_mask = DMA_BIT_MASK(64),
|
||||
.coherent_dma_mask = DMA_BIT_MASK(32),
|
||||
.platform_data = &orion_xor1_pdata,
|
||||
},
|
||||
};
|
||||
|
||||
@@ -94,8 +94,8 @@
|
||||
snps,reset-gpio = <&gpio1 RK_PC2 GPIO_ACTIVE_LOW>;
|
||||
snps,reset-active-low;
|
||||
snps,reset-delays-us = <0 10000 50000>;
|
||||
tx_delay = <0x25>;
|
||||
rx_delay = <0x11>;
|
||||
tx_delay = <0x24>;
|
||||
rx_delay = <0x18>;
|
||||
status = "okay";
|
||||
};
|
||||
|
||||
|
||||
@@ -94,6 +94,9 @@ static bool on_sdei_normal_stack(unsigned long sp, struct stack_info *info)
|
||||
unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr);
|
||||
unsigned long high = low + SDEI_STACK_SIZE;
|
||||
|
||||
if (!low)
|
||||
return false;
|
||||
|
||||
if (sp < low || sp >= high)
|
||||
return false;
|
||||
|
||||
@@ -111,6 +114,9 @@ static bool on_sdei_critical_stack(unsigned long sp, struct stack_info *info)
|
||||
unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_critical_ptr);
|
||||
unsigned long high = low + SDEI_STACK_SIZE;
|
||||
|
||||
if (!low)
|
||||
return false;
|
||||
|
||||
if (sp < low || sp >= high)
|
||||
return false;
|
||||
|
||||
|
||||
@@ -22,6 +22,7 @@
|
||||
#include <linux/kvm_host.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/kmemleak.h>
|
||||
#include <linux/kvm_para.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/of.h>
|
||||
@@ -712,6 +713,12 @@ static void kvm_use_magic_page(void)
|
||||
|
||||
static __init void kvm_free_tmp(void)
|
||||
{
|
||||
/*
|
||||
* Inform kmemleak about the hole in the .bss section since the
|
||||
* corresponding pages will be unmapped with DEBUG_PAGEALLOC=y.
|
||||
*/
|
||||
kmemleak_free_part(&kvm_tmp[kvm_tmp_index],
|
||||
ARRAY_SIZE(kvm_tmp) - kvm_tmp_index);
|
||||
free_reserved_area(&kvm_tmp[kvm_tmp_index],
|
||||
&kvm_tmp[ARRAY_SIZE(kvm_tmp)], -1, NULL);
|
||||
}
|
||||
|
||||
@@ -31,6 +31,7 @@
|
||||
#include <linux/spinlock.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/hugetlb.h>
|
||||
#include <linux/security.h>
|
||||
#include <asm/mman.h>
|
||||
#include <asm/mmu.h>
|
||||
#include <asm/copro.h>
|
||||
@@ -376,6 +377,7 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
|
||||
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
|
||||
unsigned long addr, found, prev;
|
||||
struct vm_unmapped_area_info info;
|
||||
unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);
|
||||
|
||||
info.flags = VM_UNMAPPED_AREA_TOPDOWN;
|
||||
info.length = len;
|
||||
@@ -392,7 +394,7 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
|
||||
if (high_limit > DEFAULT_MAP_WINDOW)
|
||||
addr += mm->context.slb_addr_limit - DEFAULT_MAP_WINDOW;
|
||||
|
||||
while (addr > PAGE_SIZE) {
|
||||
while (addr > min_addr) {
|
||||
info.high_limit = addr;
|
||||
if (!slice_scan_available(addr - 1, available, 0, &addr))
|
||||
continue;
|
||||
@@ -404,8 +406,8 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
|
||||
* Check if we need to reduce the range, or if we can
|
||||
* extend it to cover the previous available slice.
|
||||
*/
|
||||
if (addr < PAGE_SIZE)
|
||||
addr = PAGE_SIZE;
|
||||
if (addr < min_addr)
|
||||
addr = min_addr;
|
||||
else if (slice_scan_available(addr - 1, available, 0, &prev)) {
|
||||
addr = prev;
|
||||
goto prev_slice;
|
||||
@@ -527,7 +529,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
|
||||
addr = _ALIGN_UP(addr, page_size);
|
||||
slice_dbg(" aligned addr=%lx\n", addr);
|
||||
/* Ignore hint if it's too large or overlaps a VMA */
|
||||
if (addr > high_limit - len ||
|
||||
if (addr > high_limit - len || addr < mmap_min_addr ||
|
||||
!slice_area_is_free(mm, addr, len))
|
||||
addr = 0;
|
||||
}
|
||||
|
||||
@@ -307,7 +307,7 @@ do { \
|
||||
" .balign 4\n" \
|
||||
"4:\n" \
|
||||
" li %0, %6\n" \
|
||||
" jump 2b, %1\n" \
|
||||
" jump 3b, %1\n" \
|
||||
" .previous\n" \
|
||||
" .section __ex_table,\"a\"\n" \
|
||||
" .balign " RISCV_SZPTR "\n" \
|
||||
|
||||
@@ -175,10 +175,10 @@ static struct sh_machine_vector __initmv sh_of_generic_mv = {
|
||||
|
||||
struct sh_clk_ops;
|
||||
|
||||
void __init arch_init_clk_ops(struct sh_clk_ops **ops, int idx)
|
||||
void __init __weak arch_init_clk_ops(struct sh_clk_ops **ops, int idx)
|
||||
{
|
||||
}
|
||||
|
||||
void __init plat_irq_setup(void)
|
||||
void __init __weak plat_irq_setup(void)
|
||||
{
|
||||
}
|
||||
|
||||
@@ -116,6 +116,110 @@ static __initconst const u64 amd_hw_cache_event_ids
|
||||
},
|
||||
};
|
||||
|
||||
static __initconst const u64 amd_hw_cache_event_ids_f17h
|
||||
[PERF_COUNT_HW_CACHE_MAX]
|
||||
[PERF_COUNT_HW_CACHE_OP_MAX]
|
||||
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
|
||||
[C(L1D)] = {
|
||||
[C(OP_READ)] = {
|
||||
[C(RESULT_ACCESS)] = 0x0040, /* Data Cache Accesses */
|
||||
[C(RESULT_MISS)] = 0xc860, /* L2$ access from DC Miss */
|
||||
},
|
||||
[C(OP_WRITE)] = {
|
||||
[C(RESULT_ACCESS)] = 0,
|
||||
[C(RESULT_MISS)] = 0,
|
||||
},
|
||||
[C(OP_PREFETCH)] = {
|
||||
[C(RESULT_ACCESS)] = 0xff5a, /* h/w prefetch DC Fills */
|
||||
[C(RESULT_MISS)] = 0,
|
||||
},
|
||||
},
|
||||
[C(L1I)] = {
|
||||
[C(OP_READ)] = {
|
||||
[C(RESULT_ACCESS)] = 0x0080, /* Instruction cache fetches */
|
||||
[C(RESULT_MISS)] = 0x0081, /* Instruction cache misses */
|
||||
},
|
||||
[C(OP_WRITE)] = {
|
||||
[C(RESULT_ACCESS)] = -1,
|
||||
[C(RESULT_MISS)] = -1,
|
||||
},
|
||||
[C(OP_PREFETCH)] = {
|
||||
[C(RESULT_ACCESS)] = 0,
|
||||
[C(RESULT_MISS)] = 0,
|
||||
},
|
||||
},
|
||||
[C(LL)] = {
|
||||
[C(OP_READ)] = {
|
||||
[C(RESULT_ACCESS)] = 0,
|
||||
[C(RESULT_MISS)] = 0,
|
||||
},
|
||||
[C(OP_WRITE)] = {
|
||||
[C(RESULT_ACCESS)] = 0,
|
||||
[C(RESULT_MISS)] = 0,
|
||||
},
|
||||
[C(OP_PREFETCH)] = {
|
||||
[C(RESULT_ACCESS)] = 0,
|
||||
[C(RESULT_MISS)] = 0,
|
||||
},
|
||||
},
|
||||
[C(DTLB)] = {
|
||||
[C(OP_READ)] = {
|
||||
[C(RESULT_ACCESS)] = 0xff45, /* All L2 DTLB accesses */
|
||||
[C(RESULT_MISS)] = 0xf045, /* L2 DTLB misses (PT walks) */
|
||||
},
|
||||
[C(OP_WRITE)] = {
|
||||
[C(RESULT_ACCESS)] = 0,
|
||||
[C(RESULT_MISS)] = 0,
|
||||
},
|
||||
[C(OP_PREFETCH)] = {
|
||||
[C(RESULT_ACCESS)] = 0,
|
||||
[C(RESULT_MISS)] = 0,
|
||||
},
|
||||
},
|
||||
[C(ITLB)] = {
|
||||
[C(OP_READ)] = {
|
||||
[C(RESULT_ACCESS)] = 0x0084, /* L1 ITLB misses, L2 ITLB hits */
|
||||
[C(RESULT_MISS)] = 0xff85, /* L1 ITLB misses, L2 misses */
|
||||
},
|
||||
[C(OP_WRITE)] = {
|
||||
[C(RESULT_ACCESS)] = -1,
|
||||
[C(RESULT_MISS)] = -1,
|
||||
},
|
||||
[C(OP_PREFETCH)] = {
|
||||
[C(RESULT_ACCESS)] = -1,
|
||||
[C(RESULT_MISS)] = -1,
|
||||
},
|
||||
},
|
||||
[C(BPU)] = {
|
||||
[C(OP_READ)] = {
|
||||
[C(RESULT_ACCESS)] = 0x00c2, /* Retired Branch Instr. */
|
||||
[C(RESULT_MISS)] = 0x00c3, /* Retired Mispredicted BI */
|
||||
},
|
||||
[C(OP_WRITE)] = {
|
||||
[C(RESULT_ACCESS)] = -1,
|
||||
[C(RESULT_MISS)] = -1,
|
||||
},
|
||||
[C(OP_PREFETCH)] = {
|
||||
[C(RESULT_ACCESS)] = -1,
|
||||
[C(RESULT_MISS)] = -1,
|
||||
},
|
||||
},
|
||||
[C(NODE)] = {
|
||||
[C(OP_READ)] = {
|
||||
[C(RESULT_ACCESS)] = 0,
|
||||
[C(RESULT_MISS)] = 0,
|
||||
},
|
||||
[C(OP_WRITE)] = {
|
||||
[C(RESULT_ACCESS)] = -1,
|
||||
[C(RESULT_MISS)] = -1,
|
||||
},
|
||||
[C(OP_PREFETCH)] = {
|
||||
[C(RESULT_ACCESS)] = -1,
|
||||
[C(RESULT_MISS)] = -1,
|
||||
},
|
||||
},
|
||||
};
|
||||
|
||||
/*
|
||||
* AMD Performance Monitor K7 and later, up to and including Family 16h:
|
||||
*/
|
||||
@@ -861,9 +965,10 @@ __init int amd_pmu_init(void)
|
||||
x86_pmu.amd_nb_constraints = 0;
|
||||
}
|
||||
|
||||
/* Events are common for all AMDs */
|
||||
memcpy(hw_cache_event_ids, amd_hw_cache_event_ids,
|
||||
sizeof(hw_cache_event_ids));
|
||||
if (boot_cpu_data.x86 >= 0x17)
|
||||
memcpy(hw_cache_event_ids, amd_hw_cache_event_ids_f17h, sizeof(hw_cache_event_ids));
|
||||
else
|
||||
memcpy(hw_cache_event_ids, amd_hw_cache_event_ids, sizeof(hw_cache_event_ids));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -165,6 +165,11 @@ static struct severity {
|
||||
SER, MASK(MCI_STATUS_OVER|MCI_UC_SAR|MCI_ADDR|MCACOD, MCI_UC_SAR|MCI_ADDR|MCACOD_DATA),
|
||||
KERNEL
|
||||
),
|
||||
MCESEV(
|
||||
PANIC, "Instruction fetch error in kernel",
|
||||
SER, MASK(MCI_STATUS_OVER|MCI_UC_SAR|MCI_ADDR|MCACOD, MCI_UC_SAR|MCI_ADDR|MCACOD_INSTR),
|
||||
KERNEL
|
||||
),
|
||||
#endif
|
||||
MCESEV(
|
||||
PANIC, "Action required: unknown MCACOD",
|
||||
|
||||
@@ -6789,7 +6789,8 @@ static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
|
||||
struct page **src_p, **dst_p;
|
||||
struct kvm_sev_dbg debug;
|
||||
unsigned long n;
|
||||
int ret, size;
|
||||
unsigned int size;
|
||||
int ret;
|
||||
|
||||
if (!sev_guest(kvm))
|
||||
return -ENOTTY;
|
||||
@@ -6797,6 +6798,11 @@ static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
|
||||
if (copy_from_user(&debug, (void __user *)(uintptr_t)argp->data, sizeof(debug)))
|
||||
return -EFAULT;
|
||||
|
||||
if (!debug.len || debug.src_uaddr + debug.len < debug.src_uaddr)
|
||||
return -EINVAL;
|
||||
if (!debug.dst_uaddr)
|
||||
return -EINVAL;
|
||||
|
||||
vaddr = debug.src_uaddr;
|
||||
size = debug.len;
|
||||
vaddr_end = vaddr + size;
|
||||
@@ -6847,8 +6853,8 @@ static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
|
||||
dst_vaddr,
|
||||
len, &argp->error);
|
||||
|
||||
sev_unpin_memory(kvm, src_p, 1);
|
||||
sev_unpin_memory(kvm, dst_p, 1);
|
||||
sev_unpin_memory(kvm, src_p, n);
|
||||
sev_unpin_memory(kvm, dst_p, n);
|
||||
|
||||
if (ret)
|
||||
goto err;
|
||||
|
||||
@@ -6,6 +6,7 @@
|
||||
#include <linux/bootmem.h> /* for max_low_pfn */
|
||||
#include <linux/swapfile.h>
|
||||
#include <linux/swapops.h>
|
||||
#include <linux/kmemleak.h>
|
||||
|
||||
#include <asm/set_memory.h>
|
||||
#include <asm/e820/api.h>
|
||||
@@ -767,6 +768,11 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
|
||||
if (debug_pagealloc_enabled()) {
|
||||
pr_info("debug: unmapping init [mem %#010lx-%#010lx]\n",
|
||||
begin, end - 1);
|
||||
/*
|
||||
* Inform kmemleak about the hole in the memory since the
|
||||
* corresponding pages will be unmapped.
|
||||
*/
|
||||
kmemleak_free_part((void *)begin, end - begin);
|
||||
set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
|
||||
} else {
|
||||
/*
|
||||
|
||||
@@ -93,7 +93,7 @@ void __init kernel_randomize_memory(void)
|
||||
if (!kaslr_memory_enabled())
|
||||
return;
|
||||
|
||||
kaslr_regions[0].size_tb = 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
|
||||
kaslr_regions[0].size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);
|
||||
kaslr_regions[1].size_tb = VMALLOC_SIZE_TB;
|
||||
|
||||
/*
|
||||
|
||||
@@ -694,7 +694,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
|
||||
{
|
||||
int cpu;
|
||||
|
||||
struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
|
||||
struct flush_tlb_info info = {
|
||||
.mm = mm,
|
||||
};
|
||||
|
||||
|
||||
@@ -980,6 +980,10 @@ static void blk_rq_timed_out_timer(struct timer_list *t)
|
||||
kblockd_schedule_work(&q->timeout_work);
|
||||
}
|
||||
|
||||
static void blk_timeout_work_dummy(struct work_struct *work)
|
||||
{
|
||||
}
|
||||
|
||||
/**
|
||||
* blk_alloc_queue_node - allocate a request queue
|
||||
* @gfp_mask: memory allocation flags
|
||||
@@ -1034,7 +1038,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id,
|
||||
timer_setup(&q->backing_dev_info->laptop_mode_wb_timer,
|
||||
laptop_mode_timer_fn, 0);
|
||||
timer_setup(&q->timeout, blk_rq_timed_out_timer, 0);
|
||||
INIT_WORK(&q->timeout_work, NULL);
|
||||
INIT_WORK(&q->timeout_work, blk_timeout_work_dummy);
|
||||
INIT_LIST_HEAD(&q->timeout_list);
|
||||
INIT_LIST_HEAD(&q->icq_list);
|
||||
#ifdef CONFIG_BLK_CGROUP
|
||||
|
||||
@@ -2236,7 +2236,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
|
||||
return 0;
|
||||
|
||||
free_fq:
|
||||
kfree(hctx->fq);
|
||||
blk_free_flush_queue(hctx->fq);
|
||||
exit_hctx:
|
||||
if (set->ops->exit_hctx)
|
||||
set->ops->exit_hctx(hctx, hctx_idx);
|
||||
|
||||
@@ -1063,6 +1063,8 @@ static int ace_setup(struct ace_device *ace)
|
||||
return 0;
|
||||
|
||||
err_read:
|
||||
/* prevent double queue cleanup */
|
||||
ace->gd->queue = NULL;
|
||||
put_disk(ace->gd);
|
||||
err_alloc_disk:
|
||||
blk_cleanup_queue(ace->queue);
|
||||
|
||||
@@ -115,11 +115,13 @@ static int mtk_hci_wmt_sync(struct hci_dev *hdev, u8 op, u8 flag, u16 plen,
|
||||
TASK_INTERRUPTIBLE, HCI_INIT_TIMEOUT);
|
||||
if (err == -EINTR) {
|
||||
bt_dev_err(hdev, "Execution of wmt command interrupted");
|
||||
clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
|
||||
return err;
|
||||
}
|
||||
|
||||
if (err) {
|
||||
bt_dev_err(hdev, "Execution of wmt command timed out");
|
||||
clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
|
||||
@@ -2888,6 +2888,7 @@ static int btusb_config_oob_wake(struct hci_dev *hdev)
|
||||
return 0;
|
||||
}
|
||||
|
||||
irq_set_status_flags(irq, IRQ_NOAUTOEN);
|
||||
ret = devm_request_irq(&hdev->dev, irq, btusb_oob_wake_handler,
|
||||
0, "OOB Wake-on-BT", data);
|
||||
if (ret) {
|
||||
@@ -2902,7 +2903,6 @@ static int btusb_config_oob_wake(struct hci_dev *hdev)
|
||||
}
|
||||
|
||||
data->oob_wake_irq = irq;
|
||||
disable_irq(irq);
|
||||
bt_dev_info(hdev, "OOB Wake-on-BT configured at IRQ %u", irq);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -1101,6 +1101,7 @@ static struct clk_rcg2 ufs_axi_clk_src = {
|
||||
|
||||
static const struct freq_tbl ftbl_usb30_master_clk_src[] = {
|
||||
F(19200000, P_XO, 1, 0, 0),
|
||||
F(60000000, P_GPLL0_OUT_MAIN, 10, 0, 0),
|
||||
F(120000000, P_GPLL0_OUT_MAIN, 5, 0, 0),
|
||||
F(150000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
|
||||
{ }
|
||||
|
||||
@@ -165,7 +165,7 @@ static const struct clk_ops plt_clk_ops = {
|
||||
};
|
||||
|
||||
static struct clk_plt *plt_clk_register(struct platform_device *pdev, int id,
|
||||
void __iomem *base,
|
||||
const struct pmc_clk_data *pmc_data,
|
||||
const char **parent_names,
|
||||
int num_parents)
|
||||
{
|
||||
@@ -184,9 +184,17 @@ static struct clk_plt *plt_clk_register(struct platform_device *pdev, int id,
|
||||
init.num_parents = num_parents;
|
||||
|
||||
pclk->hw.init = &init;
|
||||
pclk->reg = base + PMC_CLK_CTL_OFFSET + id * PMC_CLK_CTL_SIZE;
|
||||
pclk->reg = pmc_data->base + PMC_CLK_CTL_OFFSET + id * PMC_CLK_CTL_SIZE;
|
||||
spin_lock_init(&pclk->lock);
|
||||
|
||||
/*
|
||||
* On some systems, the pmc_plt_clocks already enabled by the
|
||||
* firmware are being marked as critical to avoid them being
|
||||
* gated by the clock framework.
|
||||
*/
|
||||
if (pmc_data->critical && plt_clk_is_enabled(&pclk->hw))
|
||||
init.flags |= CLK_IS_CRITICAL;
|
||||
|
||||
ret = devm_clk_hw_register(&pdev->dev, &pclk->hw);
|
||||
if (ret) {
|
||||
pclk = ERR_PTR(ret);
|
||||
@@ -332,7 +340,7 @@ static int plt_clk_probe(struct platform_device *pdev)
|
||||
return PTR_ERR(parent_names);
|
||||
|
||||
for (i = 0; i < PMC_CLK_NUM; i++) {
|
||||
data->clks[i] = plt_clk_register(pdev, i, pmc_data->base,
|
||||
data->clks[i] = plt_clk_register(pdev, i, pmc_data,
|
||||
parent_names, data->nparents);
|
||||
if (IS_ERR(data->clks[i])) {
|
||||
err = PTR_ERR(data->clks[i]);
|
||||
|
||||
@@ -95,7 +95,7 @@ struct efi_runtime_work {
|
||||
efi_rts_work.status = EFI_ABORTED; \
|
||||
\
|
||||
init_completion(&efi_rts_work.efi_rts_comp); \
|
||||
INIT_WORK_ONSTACK(&efi_rts_work.work, efi_call_rts); \
|
||||
INIT_WORK(&efi_rts_work.work, efi_call_rts); \
|
||||
efi_rts_work.arg1 = _arg1; \
|
||||
efi_rts_work.arg2 = _arg2; \
|
||||
efi_rts_work.arg3 = _arg3; \
|
||||
|
||||
@@ -438,8 +438,11 @@ static int mxc_gpio_probe(struct platform_device *pdev)
|
||||
|
||||
/* the controller clock is optional */
|
||||
port->clk = devm_clk_get(&pdev->dev, NULL);
|
||||
if (IS_ERR(port->clk))
|
||||
if (IS_ERR(port->clk)) {
|
||||
if (PTR_ERR(port->clk) == -EPROBE_DEFER)
|
||||
return -EPROBE_DEFER;
|
||||
port->clk = NULL;
|
||||
}
|
||||
|
||||
err = clk_prepare_enable(port->clk);
|
||||
if (err) {
|
||||
|
||||
@@ -1060,10 +1060,15 @@ static int hid_debug_rdesc_show(struct seq_file *f, void *p)
|
||||
seq_printf(f, "\n\n");
|
||||
|
||||
/* dump parsed data and input mappings */
|
||||
if (down_interruptible(&hdev->driver_input_lock))
|
||||
return 0;
|
||||
|
||||
hid_dump_device(hdev, f);
|
||||
seq_printf(f, "\n");
|
||||
hid_dump_input_mapping(hdev, f);
|
||||
|
||||
up(&hdev->driver_input_lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
||||
@@ -982,6 +982,7 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
|
||||
case 0x1b8: map_key_clear(KEY_VIDEO); break;
|
||||
case 0x1bc: map_key_clear(KEY_MESSENGER); break;
|
||||
case 0x1bd: map_key_clear(KEY_INFO); break;
|
||||
case 0x1cb: map_key_clear(KEY_ASSISTANT); break;
|
||||
case 0x201: map_key_clear(KEY_NEW); break;
|
||||
case 0x202: map_key_clear(KEY_OPEN); break;
|
||||
case 0x203: map_key_clear(KEY_CLOSE); break;
|
||||
|
||||
@@ -1907,6 +1907,13 @@ static int hidpp_ff_init(struct hidpp_device *hidpp, u8 feature_index)
|
||||
kfree(data);
|
||||
return -ENOMEM;
|
||||
}
|
||||
data->wq = create_singlethread_workqueue("hidpp-ff-sendqueue");
|
||||
if (!data->wq) {
|
||||
kfree(data->effect_ids);
|
||||
kfree(data);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
data->hidpp = hidpp;
|
||||
data->feature_index = feature_index;
|
||||
data->version = version;
|
||||
@@ -1951,7 +1958,6 @@ static int hidpp_ff_init(struct hidpp_device *hidpp, u8 feature_index)
|
||||
/* ignore boost value at response.fap.params[2] */
|
||||
|
||||
/* init the hardware command queue */
|
||||
data->wq = create_singlethread_workqueue("hidpp-ff-sendqueue");
|
||||
atomic_set(&data->workqueue_size, 0);
|
||||
|
||||
/* initialize with zero autocenter to get wheel in usable state */
|
||||
|
||||
@@ -744,7 +744,6 @@ static const struct hid_device_id hid_ignore_list[] = {
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_DEALEXTREAME, USB_DEVICE_ID_DEALEXTREAME_RADIO_SI4701) },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_DELORME, USB_DEVICE_ID_DELORME_EARTHMATE) },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_DELORME, USB_DEVICE_ID_DELORME_EM_LT20) },
|
||||
{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, 0x0400) },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_ESSENTIAL_REALITY, USB_DEVICE_ID_ESSENTIAL_REALITY_P5) },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC5UH) },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC4UM) },
|
||||
@@ -1025,6 +1024,10 @@ bool hid_ignore(struct hid_device *hdev)
|
||||
if (hdev->product == 0x0401 &&
|
||||
strncmp(hdev->name, "ELAN0800", 8) != 0)
|
||||
return true;
|
||||
/* Same with product id 0x0400 */
|
||||
if (hdev->product == 0x0400 &&
|
||||
strncmp(hdev->name, "QTEC0001", 8) != 0)
|
||||
return true;
|
||||
break;
|
||||
}
|
||||
|
||||
|
||||
@@ -510,9 +510,9 @@ static int i2c_imx_clk_notifier_call(struct notifier_block *nb,
|
||||
unsigned long action, void *data)
|
||||
{
|
||||
struct clk_notifier_data *ndata = data;
|
||||
struct imx_i2c_struct *i2c_imx = container_of(&ndata->clk,
|
||||
struct imx_i2c_struct *i2c_imx = container_of(nb,
|
||||
struct imx_i2c_struct,
|
||||
clk);
|
||||
clk_change_nb);
|
||||
|
||||
if (action & POST_RATE_CHANGE)
|
||||
i2c_imx_set_clk(i2c_imx, ndata->new_rate);
|
||||
|
||||
@@ -424,7 +424,7 @@ static int stm32f7_i2c_compute_timing(struct stm32f7_i2c_dev *i2c_dev,
|
||||
STM32F7_I2C_ANALOG_FILTER_DELAY_MAX : 0);
|
||||
dnf_delay = setup->dnf * i2cclk;
|
||||
|
||||
sdadel_min = setup->fall_time - i2c_specs[setup->speed].hddat_min -
|
||||
sdadel_min = i2c_specs[setup->speed].hddat_min + setup->fall_time -
|
||||
af_delay_min - (setup->dnf + 3) * i2cclk;
|
||||
|
||||
sdadel_max = i2c_specs[setup->speed].vddat_max - setup->rise_time -
|
||||
|
||||
@@ -602,6 +602,8 @@ static int synquacer_i2c_probe(struct platform_device *pdev)
|
||||
i2c->adapter = synquacer_i2c_ops;
|
||||
i2c_set_adapdata(&i2c->adapter, i2c);
|
||||
i2c->adapter.dev.parent = &pdev->dev;
|
||||
i2c->adapter.dev.of_node = pdev->dev.of_node;
|
||||
ACPI_COMPANION_SET(&i2c->adapter.dev, ACPI_COMPANION(&pdev->dev));
|
||||
i2c->adapter.nr = pdev->id;
|
||||
init_completion(&i2c->completion);
|
||||
|
||||
|
||||
@@ -306,10 +306,7 @@ static int i2c_smbus_host_notify_to_irq(const struct i2c_client *client)
|
||||
if (client->flags & I2C_CLIENT_TEN)
|
||||
return -EINVAL;
|
||||
|
||||
irq = irq_find_mapping(adap->host_notify_domain, client->addr);
|
||||
if (!irq)
|
||||
irq = irq_create_mapping(adap->host_notify_domain,
|
||||
client->addr);
|
||||
irq = irq_create_mapping(adap->host_notify_domain, client->addr);
|
||||
|
||||
return irq > 0 ? irq : -ENXIO;
|
||||
}
|
||||
@@ -330,6 +327,8 @@ static int i2c_device_probe(struct device *dev)
|
||||
|
||||
if (client->flags & I2C_CLIENT_HOST_NOTIFY) {
|
||||
dev_dbg(dev, "Using Host Notify IRQ\n");
|
||||
/* Keep adapter active when Host Notify is required */
|
||||
pm_runtime_get_sync(&client->adapter->dev);
|
||||
irq = i2c_smbus_host_notify_to_irq(client);
|
||||
} else if (dev->of_node) {
|
||||
irq = of_irq_get_byname(dev->of_node, "irq");
|
||||
@@ -433,6 +432,10 @@ static int i2c_device_remove(struct device *dev)
|
||||
dev_pm_clear_wake_irq(&client->dev);
|
||||
device_init_wakeup(&client->dev, false);
|
||||
|
||||
client->irq = client->init_irq;
|
||||
if (client->flags & I2C_CLIENT_HOST_NOTIFY)
|
||||
pm_runtime_put(&client->adapter->dev);
|
||||
|
||||
return status;
|
||||
}
|
||||
|
||||
@@ -742,10 +745,11 @@ i2c_new_device(struct i2c_adapter *adap, struct i2c_board_info const *info)
|
||||
client->flags = info->flags;
|
||||
client->addr = info->addr;
|
||||
|
||||
client->irq = info->irq;
|
||||
if (!client->irq)
|
||||
client->irq = i2c_dev_irq_from_resources(info->resources,
|
||||
client->init_irq = info->irq;
|
||||
if (!client->init_irq)
|
||||
client->init_irq = i2c_dev_irq_from_resources(info->resources,
|
||||
info->num_resources);
|
||||
client->irq = client->init_irq;
|
||||
|
||||
strlcpy(client->name, info->type, sizeof(client->name));
|
||||
|
||||
|
||||
@@ -711,16 +711,20 @@ int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
|
||||
agent->device->name,
|
||||
agent->port_num);
|
||||
if (ret)
|
||||
return ret;
|
||||
goto free_security;
|
||||
|
||||
agent->lsm_nb.notifier_call = ib_mad_agent_security_change;
|
||||
ret = register_lsm_notifier(&agent->lsm_nb);
|
||||
if (ret)
|
||||
return ret;
|
||||
goto free_security;
|
||||
|
||||
agent->smp_allowed = true;
|
||||
agent->lsm_nb_reg = true;
|
||||
return 0;
|
||||
|
||||
free_security:
|
||||
security_ib_free_security(agent->security);
|
||||
return ret;
|
||||
}
|
||||
|
||||
void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent)
|
||||
@@ -728,9 +732,10 @@ void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent)
|
||||
if (!rdma_protocol_ib(agent->device, agent->port_num))
|
||||
return;
|
||||
|
||||
security_ib_free_security(agent->security);
|
||||
if (agent->lsm_nb_reg)
|
||||
unregister_lsm_notifier(&agent->lsm_nb);
|
||||
|
||||
security_ib_free_security(agent->security);
|
||||
}
|
||||
|
||||
int ib_mad_enforce_security(struct ib_mad_agent_private *map, u16 pkey_index)
|
||||
|
||||
@@ -1087,7 +1087,7 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
|
||||
}
|
||||
EXPORT_SYMBOL(ib_open_qp);
|
||||
|
||||
static struct ib_qp *ib_create_xrc_qp(struct ib_qp *qp,
|
||||
static struct ib_qp *create_xrc_qp(struct ib_qp *qp,
|
||||
struct ib_qp_init_attr *qp_init_attr)
|
||||
{
|
||||
struct ib_qp *real_qp = qp;
|
||||
@@ -1103,10 +1103,10 @@ static struct ib_qp *ib_create_xrc_qp(struct ib_qp *qp,
|
||||
|
||||
qp = __ib_open_qp(real_qp, qp_init_attr->event_handler,
|
||||
qp_init_attr->qp_context);
|
||||
if (!IS_ERR(qp))
|
||||
if (IS_ERR(qp))
|
||||
return qp;
|
||||
|
||||
__ib_insert_xrcd_qp(qp_init_attr->xrcd, real_qp);
|
||||
else
|
||||
real_qp->device->destroy_qp(real_qp);
|
||||
return qp;
|
||||
}
|
||||
|
||||
@@ -1137,10 +1137,8 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
|
||||
return qp;
|
||||
|
||||
ret = ib_create_qp_security(qp, device);
|
||||
if (ret) {
|
||||
ib_destroy_qp(qp);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
if (ret)
|
||||
goto err;
|
||||
|
||||
qp->real_qp = qp;
|
||||
qp->qp_type = qp_init_attr->qp_type;
|
||||
@@ -1153,8 +1151,15 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
|
||||
INIT_LIST_HEAD(&qp->sig_mrs);
|
||||
qp->port = 0;
|
||||
|
||||
if (qp_init_attr->qp_type == IB_QPT_XRC_TGT)
|
||||
return ib_create_xrc_qp(qp, qp_init_attr);
|
||||
if (qp_init_attr->qp_type == IB_QPT_XRC_TGT) {
|
||||
struct ib_qp *xrc_qp = create_xrc_qp(qp, qp_init_attr);
|
||||
|
||||
if (IS_ERR(xrc_qp)) {
|
||||
ret = PTR_ERR(xrc_qp);
|
||||
goto err;
|
||||
}
|
||||
return xrc_qp;
|
||||
}
|
||||
|
||||
qp->event_handler = qp_init_attr->event_handler;
|
||||
qp->qp_context = qp_init_attr->qp_context;
|
||||
@@ -1181,11 +1186,8 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
|
||||
|
||||
if (qp_init_attr->cap.max_rdma_ctxs) {
|
||||
ret = rdma_rw_init_mrs(qp, qp_init_attr);
|
||||
if (ret) {
|
||||
pr_err("failed to init MR pool ret= %d\n", ret);
|
||||
ib_destroy_qp(qp);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
if (ret)
|
||||
goto err;
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -1198,6 +1200,11 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
|
||||
device->attrs.max_sge_rd);
|
||||
|
||||
return qp;
|
||||
|
||||
err:
|
||||
ib_destroy_qp(qp);
|
||||
return ERR_PTR(ret);
|
||||
|
||||
}
|
||||
EXPORT_SYMBOL(ib_create_qp);
|
||||
|
||||
|
||||
@@ -2793,8 +2793,19 @@ static void srpt_queue_tm_rsp(struct se_cmd *cmd)
|
||||
srpt_queue_response(cmd);
|
||||
}
|
||||
|
||||
/*
|
||||
* This function is called for aborted commands if no response is sent to the
|
||||
* initiator. Make sure that the credits freed by aborting a command are
|
||||
* returned to the initiator the next time a response is sent by incrementing
|
||||
* ch->req_lim_delta.
|
||||
*/
|
||||
static void srpt_aborted_task(struct se_cmd *cmd)
|
||||
{
|
||||
struct srpt_send_ioctx *ioctx = container_of(cmd,
|
||||
struct srpt_send_ioctx, cmd);
|
||||
struct srpt_rdma_ch *ch = ioctx->ch;
|
||||
|
||||
atomic_inc(&ch->req_lim_delta);
|
||||
}
|
||||
|
||||
static int srpt_queue_status(struct se_cmd *cmd)
|
||||
|
||||
@@ -148,6 +148,9 @@ static int imx_snvs_pwrkey_probe(struct platform_device *pdev)
|
||||
return error;
|
||||
}
|
||||
|
||||
pdata->input = input;
|
||||
platform_set_drvdata(pdev, pdata);
|
||||
|
||||
error = devm_request_irq(&pdev->dev, pdata->irq,
|
||||
imx_snvs_pwrkey_interrupt,
|
||||
0, pdev->name, pdev);
|
||||
@@ -163,9 +166,6 @@ static int imx_snvs_pwrkey_probe(struct platform_device *pdev)
|
||||
return error;
|
||||
}
|
||||
|
||||
pdata->input = input;
|
||||
platform_set_drvdata(pdev, pdata);
|
||||
|
||||
device_init_wakeup(&pdev->dev, pdata->wakeup);
|
||||
|
||||
return 0;
|
||||
|
||||
@@ -106,27 +106,29 @@ struct stmfts_data {
|
||||
bool running;
|
||||
};
|
||||
|
||||
static void stmfts_brightness_set(struct led_classdev *led_cdev,
|
||||
static int stmfts_brightness_set(struct led_classdev *led_cdev,
|
||||
enum led_brightness value)
|
||||
{
|
||||
struct stmfts_data *sdata = container_of(led_cdev,
|
||||
struct stmfts_data, led_cdev);
|
||||
int err;
|
||||
|
||||
if (value == sdata->led_status || !sdata->ledvdd)
|
||||
return;
|
||||
|
||||
if (value != sdata->led_status && sdata->ledvdd) {
|
||||
if (!value) {
|
||||
regulator_disable(sdata->ledvdd);
|
||||
} else {
|
||||
err = regulator_enable(sdata->ledvdd);
|
||||
if (err)
|
||||
if (err) {
|
||||
dev_warn(&sdata->client->dev,
|
||||
"failed to disable ledvdd regulator: %d\n",
|
||||
err);
|
||||
return err;
|
||||
}
|
||||
}
|
||||
sdata->led_status = value;
|
||||
}
|
||||
|
||||
sdata->led_status = value;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static enum led_brightness stmfts_brightness_get(struct led_classdev *led_cdev)
|
||||
@@ -608,7 +610,7 @@ static int stmfts_enable_led(struct stmfts_data *sdata)
|
||||
sdata->led_cdev.name = STMFTS_DEV_NAME;
|
||||
sdata->led_cdev.max_brightness = LED_ON;
|
||||
sdata->led_cdev.brightness = LED_OFF;
|
||||
sdata->led_cdev.brightness_set = stmfts_brightness_set;
|
||||
sdata->led_cdev.brightness_set_blocking = stmfts_brightness_set;
|
||||
sdata->led_cdev.brightness_get = stmfts_brightness_get;
|
||||
|
||||
err = devm_led_classdev_register(&sdata->client->dev, &sdata->led_cdev);
|
||||
|
||||
@@ -159,10 +159,10 @@ MODULE_PARM_DESC(debug, "Debug level (0-1)");
|
||||
#define REG_GFIX 0x69 /* Fix gain control */
|
||||
|
||||
#define REG_DBLV 0x6b /* PLL control an debugging */
|
||||
#define DBLV_BYPASS 0x00 /* Bypass PLL */
|
||||
#define DBLV_X4 0x01 /* clock x4 */
|
||||
#define DBLV_X6 0x10 /* clock x6 */
|
||||
#define DBLV_X8 0x11 /* clock x8 */
|
||||
#define DBLV_BYPASS 0x0a /* Bypass PLL */
|
||||
#define DBLV_X4 0x4a /* clock x4 */
|
||||
#define DBLV_X6 0x8a /* clock x6 */
|
||||
#define DBLV_X8 0xca /* clock x8 */
|
||||
|
||||
#define REG_SCALING_XSC 0x70 /* Test pattern and horizontal scale factor */
|
||||
#define TEST_PATTTERN_0 0x80
|
||||
@@ -862,7 +862,7 @@ static int ov7675_set_framerate(struct v4l2_subdev *sd,
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
return ov7670_write(sd, REG_DBLV, DBLV_X4);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void ov7670_get_framerate_legacy(struct v4l2_subdev *sd,
|
||||
@@ -1797,11 +1797,7 @@ static int ov7670_probe(struct i2c_client *client,
|
||||
if (config->clock_speed)
|
||||
info->clock_speed = config->clock_speed;
|
||||
|
||||
/*
|
||||
* It should be allowed for ov7670 too when it is migrated to
|
||||
* the new frame rate formula.
|
||||
*/
|
||||
if (config->pll_bypass && id->driver_data != MODEL_OV7670)
|
||||
if (config->pll_bypass)
|
||||
info->pll_bypass = true;
|
||||
|
||||
if (config->pclk_hb_disable)
|
||||
|
||||
@@ -1245,6 +1245,28 @@ twl_probe(struct i2c_client *client, const struct i2c_device_id *id)
|
||||
return status;
|
||||
}
|
||||
|
||||
static int __maybe_unused twl_suspend(struct device *dev)
|
||||
{
|
||||
struct i2c_client *client = to_i2c_client(dev);
|
||||
|
||||
if (client->irq)
|
||||
disable_irq(client->irq);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int __maybe_unused twl_resume(struct device *dev)
|
||||
{
|
||||
struct i2c_client *client = to_i2c_client(dev);
|
||||
|
||||
if (client->irq)
|
||||
enable_irq(client->irq);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static SIMPLE_DEV_PM_OPS(twl_dev_pm_ops, twl_suspend, twl_resume);
|
||||
|
||||
static const struct i2c_device_id twl_ids[] = {
|
||||
{ "twl4030", TWL4030_VAUX2 }, /* "Triton 2" */
|
||||
{ "twl5030", 0 }, /* T2 updated */
|
||||
@@ -1262,6 +1284,7 @@ static const struct i2c_device_id twl_ids[] = {
|
||||
/* One Client Driver , 4 Clients */
|
||||
static struct i2c_driver twl_driver = {
|
||||
.driver.name = DRIVER_NAME,
|
||||
.driver.pm = &twl_dev_pm_ops,
|
||||
.id_table = twl_ids,
|
||||
.probe = twl_probe,
|
||||
.remove = twl_remove,
|
||||
|
||||
@@ -55,7 +55,9 @@ static SLAVE_ATTR_RO(link_failure_count);
|
||||
|
||||
static ssize_t perm_hwaddr_show(struct slave *slave, char *buf)
|
||||
{
|
||||
return sprintf(buf, "%pM\n", slave->perm_hwaddr);
|
||||
return sprintf(buf, "%*phC\n",
|
||||
slave->dev->addr_len,
|
||||
slave->perm_hwaddr);
|
||||
}
|
||||
static SLAVE_ATTR_RO(perm_hwaddr);
|
||||
|
||||
|
||||
@@ -354,7 +354,10 @@ static struct cxgbi_ppm_pool *ppm_alloc_cpu_pool(unsigned int *total,
|
||||
ppmax = max;
|
||||
|
||||
/* pool size must be multiple of unsigned long */
|
||||
bmap = BITS_TO_LONGS(ppmax);
|
||||
bmap = ppmax / BITS_PER_TYPE(unsigned long);
|
||||
if (!bmap)
|
||||
return NULL;
|
||||
|
||||
ppmax = (bmap * sizeof(unsigned long)) << 3;
|
||||
|
||||
alloc_sz = sizeof(*pools) + sizeof(unsigned long) * bmap;
|
||||
@@ -402,6 +405,10 @@ int cxgbi_ppm_init(void **ppm_pp, struct net_device *ndev,
|
||||
if (reserve_factor) {
|
||||
ppmax_pool = ppmax / reserve_factor;
|
||||
pool = ppm_alloc_cpu_pool(&ppmax_pool, &pool_index_max);
|
||||
if (!pool) {
|
||||
ppmax_pool = 0;
|
||||
reserve_factor = 0;
|
||||
}
|
||||
|
||||
pr_debug("%s: ppmax %u, cpu total %u, per cpu %u.\n",
|
||||
ndev->name, ppmax, ppmax_pool, pool_index_max);
|
||||
|
||||
@@ -150,7 +150,6 @@ static int hnae_alloc_buffers(struct hnae_ring *ring)
|
||||
/* free desc along with its attached buffer */
|
||||
static void hnae_free_desc(struct hnae_ring *ring)
|
||||
{
|
||||
hnae_free_buffers(ring);
|
||||
dma_unmap_single(ring_to_dev(ring), ring->desc_dma_addr,
|
||||
ring->desc_num * sizeof(ring->desc[0]),
|
||||
ring_to_dma_dir(ring));
|
||||
@@ -183,6 +182,9 @@ static int hnae_alloc_desc(struct hnae_ring *ring)
|
||||
/* fini ring, also free the buffer for the ring */
|
||||
static void hnae_fini_ring(struct hnae_ring *ring)
|
||||
{
|
||||
if (is_rx_ring(ring))
|
||||
hnae_free_buffers(ring);
|
||||
|
||||
hnae_free_desc(ring);
|
||||
kfree(ring->desc_cb);
|
||||
ring->desc_cb = NULL;
|
||||
|
||||
@@ -2750,6 +2750,17 @@ int hns_dsaf_get_regs_count(void)
|
||||
return DSAF_DUMP_REGS_NUM;
|
||||
}
|
||||
|
||||
static int hns_dsaf_get_port_id(u8 port)
|
||||
{
|
||||
if (port < DSAF_SERVICE_NW_NUM)
|
||||
return port;
|
||||
|
||||
if (port >= DSAF_BASE_INNER_PORT_NUM)
|
||||
return port - DSAF_BASE_INNER_PORT_NUM + DSAF_SERVICE_NW_NUM;
|
||||
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
static void set_promisc_tcam_enable(struct dsaf_device *dsaf_dev, u32 port)
|
||||
{
|
||||
struct dsaf_tbl_tcam_ucast_cfg tbl_tcam_ucast = {0, 1, 0, 0, 0x80};
|
||||
@@ -2815,23 +2826,33 @@ static void set_promisc_tcam_enable(struct dsaf_device *dsaf_dev, u32 port)
|
||||
memset(&temp_key, 0x0, sizeof(temp_key));
|
||||
mask_entry.addr[0] = 0x01;
|
||||
hns_dsaf_set_mac_key(dsaf_dev, &mask_key, mask_entry.in_vlan_id,
|
||||
port, mask_entry.addr);
|
||||
0xf, mask_entry.addr);
|
||||
tbl_tcam_mcast.tbl_mcast_item_vld = 1;
|
||||
tbl_tcam_mcast.tbl_mcast_old_en = 0;
|
||||
|
||||
if (port < DSAF_SERVICE_NW_NUM) {
|
||||
mskid = port;
|
||||
} else if (port >= DSAF_BASE_INNER_PORT_NUM) {
|
||||
mskid = port - DSAF_BASE_INNER_PORT_NUM + DSAF_SERVICE_NW_NUM;
|
||||
} else {
|
||||
/* set MAC port to handle multicast */
|
||||
mskid = hns_dsaf_get_port_id(port);
|
||||
if (mskid == -EINVAL) {
|
||||
dev_err(dsaf_dev->dev, "%s,pnum(%d)error,key(%#x:%#x)\n",
|
||||
dsaf_dev->ae_dev.name, port,
|
||||
mask_key.high.val, mask_key.low.val);
|
||||
return;
|
||||
}
|
||||
|
||||
dsaf_set_bit(tbl_tcam_mcast.tbl_mcast_port_msk[mskid / 32],
|
||||
mskid % 32, 1);
|
||||
|
||||
/* set pool bit map to handle multicast */
|
||||
mskid = hns_dsaf_get_port_id(port_num);
|
||||
if (mskid == -EINVAL) {
|
||||
dev_err(dsaf_dev->dev,
|
||||
"%s, pool bit map pnum(%d)error,key(%#x:%#x)\n",
|
||||
dsaf_dev->ae_dev.name, port_num,
|
||||
mask_key.high.val, mask_key.low.val);
|
||||
return;
|
||||
}
|
||||
dsaf_set_bit(tbl_tcam_mcast.tbl_mcast_port_msk[mskid / 32],
|
||||
mskid % 32, 1);
|
||||
|
||||
memcpy(&temp_key, &mask_key, sizeof(mask_key));
|
||||
hns_dsaf_tcam_mc_cfg_vague(dsaf_dev, entry_index, &tbl_tcam_data_mc,
|
||||
(struct dsaf_tbl_tcam_data *)(&mask_key),
|
||||
|
||||
@@ -129,7 +129,7 @@ static void hns_xgmac_lf_rf_control_init(struct mac_driver *mac_drv)
|
||||
dsaf_set_bit(val, XGMAC_UNIDIR_EN_B, 0);
|
||||
dsaf_set_bit(val, XGMAC_RF_TX_EN_B, 1);
|
||||
dsaf_set_field(val, XGMAC_LF_RF_INSERT_M, XGMAC_LF_RF_INSERT_S, 0);
|
||||
dsaf_write_reg(mac_drv, XGMAC_MAC_TX_LF_RF_CONTROL_REG, val);
|
||||
dsaf_write_dev(mac_drv, XGMAC_MAC_TX_LF_RF_CONTROL_REG, val);
|
||||
}
|
||||
|
||||
/**
|
||||
|
||||
@@ -29,9 +29,6 @@
|
||||
|
||||
#define SERVICE_TIMER_HZ (1 * HZ)
|
||||
|
||||
#define NIC_TX_CLEAN_MAX_NUM 256
|
||||
#define NIC_RX_CLEAN_MAX_NUM 64
|
||||
|
||||
#define RCB_IRQ_NOT_INITED 0
|
||||
#define RCB_IRQ_INITED 1
|
||||
#define HNS_BUFFER_SIZE_2048 2048
|
||||
@@ -376,8 +373,6 @@ netdev_tx_t hns_nic_net_xmit_hw(struct net_device *ndev,
|
||||
wmb(); /* commit all data before submit */
|
||||
assert(skb->queue_mapping < priv->ae_handle->q_num);
|
||||
hnae_queue_xmit(priv->ae_handle->qs[skb->queue_mapping], buf_num);
|
||||
ring->stats.tx_pkts++;
|
||||
ring->stats.tx_bytes += skb->len;
|
||||
|
||||
return NETDEV_TX_OK;
|
||||
|
||||
@@ -999,6 +994,9 @@ static int hns_nic_tx_poll_one(struct hns_nic_ring_data *ring_data,
|
||||
/* issue prefetch for next Tx descriptor */
|
||||
prefetch(&ring->desc_cb[ring->next_to_clean]);
|
||||
}
|
||||
/* update tx ring statistics. */
|
||||
ring->stats.tx_pkts += pkts;
|
||||
ring->stats.tx_bytes += bytes;
|
||||
|
||||
NETIF_TX_UNLOCK(ring);
|
||||
|
||||
@@ -2150,7 +2148,7 @@ static int hns_nic_init_ring_data(struct hns_nic_priv *priv)
|
||||
hns_nic_tx_fini_pro_v2;
|
||||
|
||||
netif_napi_add(priv->netdev, &rd->napi,
|
||||
hns_nic_common_poll, NIC_TX_CLEAN_MAX_NUM);
|
||||
hns_nic_common_poll, NAPI_POLL_WEIGHT);
|
||||
rd->ring->irq_init_flag = RCB_IRQ_NOT_INITED;
|
||||
}
|
||||
for (i = h->q_num; i < h->q_num * 2; i++) {
|
||||
@@ -2163,7 +2161,7 @@ static int hns_nic_init_ring_data(struct hns_nic_priv *priv)
|
||||
hns_nic_rx_fini_pro_v2;
|
||||
|
||||
netif_napi_add(priv->netdev, &rd->napi,
|
||||
hns_nic_common_poll, NIC_RX_CLEAN_MAX_NUM);
|
||||
hns_nic_common_poll, NAPI_POLL_WEIGHT);
|
||||
rd->ring->irq_init_flag = RCB_IRQ_NOT_INITED;
|
||||
}
|
||||
|
||||
|
||||
@@ -3,7 +3,7 @@
|
||||
# Makefile for the HISILICON network device drivers.
|
||||
#
|
||||
|
||||
ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
|
||||
ccflags-y := -I $(srctree)/drivers/net/ethernet/hisilicon/hns3
|
||||
|
||||
obj-$(CONFIG_HNS3_HCLGE) += hclge.o
|
||||
hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o hclge_mbx.o
|
||||
|
||||
@@ -3,7 +3,7 @@
|
||||
# Makefile for the HISILICON network device drivers.
|
||||
#
|
||||
|
||||
ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
|
||||
ccflags-y := -I $(srctree)/drivers/net/ethernet/hisilicon/hns3
|
||||
|
||||
obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o
|
||||
hclgevf-objs = hclgevf_main.o hclgevf_cmd.o hclgevf_mbx.o
|
||||
@@ -194,6 +194,8 @@
|
||||
/* enable link status from external LINK_0 and LINK_1 pins */
|
||||
#define E1000_CTRL_SWDPIN0 0x00040000 /* SWDPIN 0 value */
|
||||
#define E1000_CTRL_SWDPIN1 0x00080000 /* SWDPIN 1 value */
|
||||
#define E1000_CTRL_ADVD3WUC 0x00100000 /* D3 WUC */
|
||||
#define E1000_CTRL_EN_PHY_PWR_MGMT 0x00200000 /* PHY PM enable */
|
||||
#define E1000_CTRL_SDP0_DIR 0x00400000 /* SDP0 Data direction */
|
||||
#define E1000_CTRL_SDP1_DIR 0x00800000 /* SDP1 Data direction */
|
||||
#define E1000_CTRL_RST 0x04000000 /* Global reset */
|
||||
|
||||
@@ -8754,9 +8754,7 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake,
|
||||
struct e1000_hw *hw = &adapter->hw;
|
||||
u32 ctrl, rctl, status;
|
||||
u32 wufc = runtime ? E1000_WUFC_LNKC : adapter->wol;
|
||||
#ifdef CONFIG_PM
|
||||
int retval = 0;
|
||||
#endif
|
||||
bool wake;
|
||||
|
||||
rtnl_lock();
|
||||
netif_device_detach(netdev);
|
||||
@@ -8769,14 +8767,6 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake,
|
||||
igb_clear_interrupt_scheme(adapter);
|
||||
rtnl_unlock();
|
||||
|
||||
#ifdef CONFIG_PM
|
||||
if (!runtime) {
|
||||
retval = pci_save_state(pdev);
|
||||
if (retval)
|
||||
return retval;
|
||||
}
|
||||
#endif
|
||||
|
||||
status = rd32(E1000_STATUS);
|
||||
if (status & E1000_STATUS_LU)
|
||||
wufc &= ~E1000_WUFC_LNKC;
|
||||
@@ -8793,10 +8783,6 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake,
|
||||
}
|
||||
|
||||
ctrl = rd32(E1000_CTRL);
|
||||
/* advertise wake from D3Cold */
|
||||
#define E1000_CTRL_ADVD3WUC 0x00100000
|
||||
/* phy power management enable */
|
||||
#define E1000_CTRL_EN_PHY_PWR_MGMT 0x00200000
|
||||
ctrl |= E1000_CTRL_ADVD3WUC;
|
||||
wr32(E1000_CTRL, ctrl);
|
||||
|
||||
@@ -8810,12 +8796,15 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake,
|
||||
wr32(E1000_WUFC, 0);
|
||||
}
|
||||
|
||||
*enable_wake = wufc || adapter->en_mng_pt;
|
||||
if (!*enable_wake)
|
||||
wake = wufc || adapter->en_mng_pt;
|
||||
if (!wake)
|
||||
igb_power_down_link(adapter);
|
||||
else
|
||||
igb_power_up_link(adapter);
|
||||
|
||||
if (enable_wake)
|
||||
*enable_wake = wake;
|
||||
|
||||
/* Release control of h/w to f/w. If f/w is AMT enabled, this
|
||||
* would have already happened in close and is redundant.
|
||||
*/
|
||||
@@ -8858,22 +8847,7 @@ static void igb_deliver_wake_packet(struct net_device *netdev)
|
||||
|
||||
static int __maybe_unused igb_suspend(struct device *dev)
|
||||
{
|
||||
int retval;
|
||||
bool wake;
|
||||
struct pci_dev *pdev = to_pci_dev(dev);
|
||||
|
||||
retval = __igb_shutdown(pdev, &wake, 0);
|
||||
if (retval)
|
||||
return retval;
|
||||
|
||||
if (wake) {
|
||||
pci_prepare_to_sleep(pdev);
|
||||
} else {
|
||||
pci_wake_from_d3(pdev, false);
|
||||
pci_set_power_state(pdev, PCI_D3hot);
|
||||
}
|
||||
|
||||
return 0;
|
||||
return __igb_shutdown(to_pci_dev(dev), NULL, 0);
|
||||
}
|
||||
|
||||
static int __maybe_unused igb_resume(struct device *dev)
|
||||
@@ -8944,22 +8918,7 @@ static int __maybe_unused igb_runtime_idle(struct device *dev)
|
||||
|
||||
static int __maybe_unused igb_runtime_suspend(struct device *dev)
|
||||
{
|
||||
struct pci_dev *pdev = to_pci_dev(dev);
|
||||
int retval;
|
||||
bool wake;
|
||||
|
||||
retval = __igb_shutdown(pdev, &wake, 1);
|
||||
if (retval)
|
||||
return retval;
|
||||
|
||||
if (wake) {
|
||||
pci_prepare_to_sleep(pdev);
|
||||
} else {
|
||||
pci_wake_from_d3(pdev, false);
|
||||
pci_set_power_state(pdev, PCI_D3hot);
|
||||
}
|
||||
|
||||
return 0;
|
||||
return __igb_shutdown(to_pci_dev(dev), NULL, 1);
|
||||
}
|
||||
|
||||
static int __maybe_unused igb_runtime_resume(struct device *dev)
|
||||
|
||||
@@ -80,7 +80,6 @@ static int arm_vport_context_events_cmd(struct mlx5_core_dev *dev, u16 vport,
|
||||
opcode, MLX5_CMD_OP_MODIFY_NIC_VPORT_CONTEXT);
|
||||
MLX5_SET(modify_nic_vport_context_in, in, field_select.change_event, 1);
|
||||
MLX5_SET(modify_nic_vport_context_in, in, vport_number, vport);
|
||||
if (vport)
|
||||
MLX5_SET(modify_nic_vport_context_in, in, other_vport, 1);
|
||||
nic_vport_ctx = MLX5_ADDR_OF(modify_nic_vport_context_in,
|
||||
in, nic_vport_context);
|
||||
@@ -109,7 +108,6 @@ static int modify_esw_vport_context_cmd(struct mlx5_core_dev *dev, u16 vport,
|
||||
MLX5_SET(modify_esw_vport_context_in, in, opcode,
|
||||
MLX5_CMD_OP_MODIFY_ESW_VPORT_CONTEXT);
|
||||
MLX5_SET(modify_esw_vport_context_in, in, vport_number, vport);
|
||||
if (vport)
|
||||
MLX5_SET(modify_esw_vport_context_in, in, other_vport, 1);
|
||||
return mlx5_cmd_exec(dev, in, inlen, out, sizeof(out));
|
||||
}
|
||||
|
||||
@@ -29,8 +29,10 @@
/* Specific functions used for Ring mode */

/* Enhanced descriptors */
static inline void ehn_desc_rx_set_on_ring(struct dma_desc *p, int end)
static inline void ehn_desc_rx_set_on_ring(struct dma_desc *p, int end,
int bfsize)
{
if (bfsize == BUF_SIZE_16KiB)
p->des1 |= cpu_to_le32((BUF_SIZE_8KiB
<< ERDES1_BUFFER2_SIZE_SHIFT)
& ERDES1_BUFFER2_SIZE_MASK);
@@ -59,11 +61,15 @@ static inline void enh_set_tx_desc_len_on_ring(struct dma_desc *p, int len)
}

/* Normal descriptors */
static inline void ndesc_rx_set_on_ring(struct dma_desc *p, int end)
static inline void ndesc_rx_set_on_ring(struct dma_desc *p, int end, int bfsize)
{
p->des1 |= cpu_to_le32(((BUF_SIZE_2KiB - 1)
<< RDES1_BUFFER2_SIZE_SHIFT)
if (bfsize >= BUF_SIZE_2KiB) {
int bfsize2;

bfsize2 = min(bfsize - BUF_SIZE_2KiB + 1, BUF_SIZE_2KiB - 1);
p->des1 |= cpu_to_le32((bfsize2 << RDES1_BUFFER2_SIZE_SHIFT)
& RDES1_BUFFER2_SIZE_MASK);
}

if (end)
p->des1 |= cpu_to_le32(RDES1_END_RING);

@@ -296,7 +296,7 @@ static int dwmac4_wrback_get_rx_timestamp_status(void *desc, void *next_desc,
}

static void dwmac4_rd_init_rx_desc(struct dma_desc *p, int disable_rx_ic,
int mode, int end)
int mode, int end, int bfsize)
{
dwmac4_set_rx_owner(p, disable_rx_ic);
}

@@ -123,7 +123,7 @@ static int dwxgmac2_get_rx_timestamp_status(void *desc, void *next_desc,
}

static void dwxgmac2_init_rx_desc(struct dma_desc *p, int disable_rx_ic,
int mode, int end)
int mode, int end, int bfsize)
{
dwxgmac2_set_rx_owner(p, disable_rx_ic);
}

@@ -201,6 +201,11 @@ static int enh_desc_get_rx_status(void *data, struct stmmac_extra_stats *x,
if (unlikely(rdes0 & RDES0_OWN))
return dma_own;

if (unlikely(!(rdes0 & RDES0_LAST_DESCRIPTOR))) {
stats->rx_length_errors++;
return discard_frame;
}

if (unlikely(rdes0 & RDES0_ERROR_SUMMARY)) {
if (unlikely(rdes0 & RDES0_DESCRIPTOR_ERROR)) {
x->rx_desc++;
@@ -231,6 +236,7 @@ static int enh_desc_get_rx_status(void *data, struct stmmac_extra_stats *x,
* It doesn't match with the information reported into the databook.
* At any rate, we need to understand if the CSUM hw computation is ok
* and report this info to the upper layers. */
if (likely(ret == good_frame))
ret = enh_desc_coe_rdes0(!!(rdes0 & RDES0_IPC_CSUM_ERROR),
!!(rdes0 & RDES0_FRAME_TYPE),
!!(rdes0 & ERDES0_RX_MAC_ADDR));
@@ -259,15 +265,19 @@ static int enh_desc_get_rx_status(void *data, struct stmmac_extra_stats *x,
}

static void enh_desc_init_rx_desc(struct dma_desc *p, int disable_rx_ic,
int mode, int end)
int mode, int end, int bfsize)
{
int bfsize1;

p->des0 |= cpu_to_le32(RDES0_OWN);
p->des1 |= cpu_to_le32(BUF_SIZE_8KiB & ERDES1_BUFFER1_SIZE_MASK);

bfsize1 = min(bfsize, BUF_SIZE_8KiB);
p->des1 |= cpu_to_le32(bfsize1 & ERDES1_BUFFER1_SIZE_MASK);

if (mode == STMMAC_CHAIN_MODE)
ehn_desc_rx_set_on_chain(p);
else
ehn_desc_rx_set_on_ring(p, end);
ehn_desc_rx_set_on_ring(p, end, bfsize);

if (disable_rx_ic)
p->des1 |= cpu_to_le32(ERDES1_DISABLE_IC);

@@ -33,7 +33,7 @@ struct dma_extended_desc;
struct stmmac_desc_ops {
/* DMA RX descriptor ring initialization */
void (*init_rx_desc)(struct dma_desc *p, int disable_rx_ic, int mode,
int end);
int end, int bfsize);
/* DMA TX descriptor ring initialization */
void (*init_tx_desc)(struct dma_desc *p, int mode, int end);
/* Invoked by the xmit function to prepare the tx descriptor */

@@ -91,8 +91,6 @@ static int ndesc_get_rx_status(void *data, struct stmmac_extra_stats *x,
return dma_own;

if (unlikely(!(rdes0 & RDES0_LAST_DESCRIPTOR))) {
pr_warn("%s: Oversized frame spanned multiple buffers\n",
__func__);
stats->rx_length_errors++;
return discard_frame;
}
@@ -135,15 +133,19 @@ static int ndesc_get_rx_status(void *data, struct stmmac_extra_stats *x,
}

static void ndesc_init_rx_desc(struct dma_desc *p, int disable_rx_ic, int mode,
int end)
int end, int bfsize)
{
int bfsize1;

p->des0 |= cpu_to_le32(RDES0_OWN);
p->des1 |= cpu_to_le32((BUF_SIZE_2KiB - 1) & RDES1_BUFFER1_SIZE_MASK);

bfsize1 = min(bfsize, BUF_SIZE_2KiB - 1);
p->des1 |= cpu_to_le32(bfsize & RDES1_BUFFER1_SIZE_MASK);

if (mode == STMMAC_CHAIN_MODE)
ndesc_rx_set_on_chain(p, end);
else
ndesc_rx_set_on_ring(p, end);
ndesc_rx_set_on_ring(p, end, bfsize);

if (disable_rx_ic)
p->des1 |= cpu_to_le32(RDES1_DISABLE_IC);

@@ -1111,11 +1111,13 @@ static void stmmac_clear_rx_descriptors(struct stmmac_priv *priv, u32 queue)
if (priv->extend_desc)
stmmac_init_rx_desc(priv, &rx_q->dma_erx[i].basic,
priv->use_riwt, priv->mode,
(i == DMA_RX_SIZE - 1));
(i == DMA_RX_SIZE - 1),
priv->dma_buf_sz);
else
stmmac_init_rx_desc(priv, &rx_q->dma_rx[i],
priv->use_riwt, priv->mode,
(i == DMA_RX_SIZE - 1));
(i == DMA_RX_SIZE - 1),
priv->dma_buf_sz);
}

/**
@@ -3331,9 +3333,8 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
{
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
struct stmmac_channel *ch = &priv->channel[queue];
unsigned int entry = rx_q->cur_rx;
unsigned int next_entry = rx_q->cur_rx;
int coe = priv->hw->rx_csum;
unsigned int next_entry;
unsigned int count = 0;
bool xmac;

@@ -3351,10 +3352,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
stmmac_display_ring(priv, rx_head, DMA_RX_SIZE, true);
}
while (count < limit) {
int status;
int entry, status;
struct dma_desc *p;
struct dma_desc *np;

entry = next_entry;

if (priv->extend_desc)
p = (struct dma_desc *)(rx_q->dma_erx + entry);
else
@@ -3410,11 +3413,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
* ignored
*/
if (frame_len > priv->dma_buf_sz) {
if (net_ratelimit())
netdev_err(priv->dev,
"len %d larger than size (%d)\n",
frame_len, priv->dma_buf_sz);
priv->dev->stats.rx_length_errors++;
break;
continue;
}

/* ACS is set; GMAC core strips PAD/FCS for IEEE 802.3
@@ -3449,7 +3453,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
dev_warn(priv->device,
"packet dropped\n");
priv->dev->stats.rx_dropped++;
break;
continue;
}

dma_sync_single_for_cpu(priv->device,
@@ -3469,11 +3473,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
} else {
skb = rx_q->rx_skbuff[entry];
if (unlikely(!skb)) {
if (net_ratelimit())
netdev_err(priv->dev,
"%s: Inconsistent Rx chain\n",
priv->dev->name);
priv->dev->stats.rx_dropped++;
break;
continue;
}
prefetch(skb->data - NET_IP_ALIGN);
rx_q->rx_skbuff[entry] = NULL;
@@ -3508,7 +3513,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
priv->dev->stats.rx_packets++;
priv->dev->stats.rx_bytes += frame_len;
}
entry = next_entry;
}

stmmac_rx_refill(priv, queue);

@@ -1,7 +1,7 @@
/******************************************************************************
*
* Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.
* Copyright(c) 2018 Intel Corporation
* Copyright(c) 2018 - 2019 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of version 2 of the GNU General Public License as
@@ -140,6 +140,7 @@ const struct iwl_cfg iwl5350_agn_cfg = {
.ht_params = &iwl5000_ht_params,
.led_mode = IWL_LED_BLINK,
.internal_wimax_coex = true,
.csr = &iwl_csr_v1,
};

#define IWL_DEVICE_5150 \

@@ -181,7 +181,7 @@ static int mwifiex_sdio_resume(struct device *dev)

adapter = card->adapter;

if (test_bit(MWIFIEX_IS_SUSPENDED, &adapter->work_flags)) {
if (!test_bit(MWIFIEX_IS_SUSPENDED, &adapter->work_flags)) {
mwifiex_dbg(adapter, WARN,
"device already resumed\n");
return 0;

@@ -921,6 +921,15 @@ bool nvmet_host_allowed(struct nvmet_req *req, struct nvmet_subsys *subsys,
return __nvmet_host_allowed(subsys, hostnqn);
}

static void nvmet_fatal_error_handler(struct work_struct *work)
{
struct nvmet_ctrl *ctrl =
container_of(work, struct nvmet_ctrl, fatal_err_work);

pr_err("ctrl %d fatal error occurred!\n", ctrl->cntlid);
ctrl->ops->delete_ctrl(ctrl);
}

u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp)
{
@@ -962,6 +971,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,

INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work);
INIT_LIST_HEAD(&ctrl->async_events);
INIT_WORK(&ctrl->fatal_err_work, nvmet_fatal_error_handler);

memcpy(ctrl->subsysnqn, subsysnqn, NVMF_NQN_SIZE);
memcpy(ctrl->hostnqn, hostnqn, NVMF_NQN_SIZE);
@@ -1076,21 +1086,11 @@ void nvmet_ctrl_put(struct nvmet_ctrl *ctrl)
kref_put(&ctrl->ref, nvmet_ctrl_free);
}

static void nvmet_fatal_error_handler(struct work_struct *work)
{
struct nvmet_ctrl *ctrl =
container_of(work, struct nvmet_ctrl, fatal_err_work);

pr_err("ctrl %d fatal error occurred!\n", ctrl->cntlid);
ctrl->ops->delete_ctrl(ctrl);
}

void nvmet_ctrl_fatal_error(struct nvmet_ctrl *ctrl)
{
mutex_lock(&ctrl->lock);
if (!(ctrl->csts & NVME_CSTS_CFS)) {
ctrl->csts |= NVME_CSTS_CFS;
INIT_WORK(&ctrl->fatal_err_work, nvmet_fatal_error_handler);
schedule_work(&ctrl->fatal_err_work);
}
mutex_unlock(&ctrl->lock);

@@ -185,7 +185,7 @@ static const struct pmc_bit_map cnp_pfear_map[] = {
{"CNVI", BIT(3)},
{"UFS0", BIT(4)},
{"EMMC", BIT(5)},
{"Res_6", BIT(6)},
{"SPF", BIT(6)},
{"SBR6", BIT(7)},

{"SBR7", BIT(0)},
@@ -682,7 +682,7 @@ static int __init pmc_core_probe(void)
* Sunrisepoint PCH regmap can't be used. Use Cannonlake PCH regmap
* in this case.
*/
if (!pci_dev_present(pmc_pci_ids))
if (pmcdev->map == &spt_reg_map && !pci_dev_present(pmc_pci_ids))
pmcdev->map = &cnp_reg_map;

if (lpit_read_residency_count_address(&slp_s0_addr))

@@ -17,6 +17,7 @@

#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dmi.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/platform_data/x86/clk-pmc-atom.h>
@@ -391,11 +392,27 @@ static int pmc_dbgfs_register(struct pmc_dev *pmc)
}
#endif /* CONFIG_DEBUG_FS */

/*
* Some systems need one or more of their pmc_plt_clks to be
* marked as critical.
*/
static const struct dmi_system_id critclk_systems[] __initconst = {
{
.ident = "MPL CEC1x",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "MPL AG"),
DMI_MATCH(DMI_PRODUCT_NAME, "CEC10 Family"),
},
},
{ /*sentinel*/ }
};

static int pmc_setup_clks(struct pci_dev *pdev, void __iomem *pmc_regmap,
const struct pmc_data *pmc_data)
{
struct platform_device *clkdev;
struct pmc_clk_data *clk_data;
const struct dmi_system_id *d = dmi_first_match(critclk_systems);

clk_data = kzalloc(sizeof(*clk_data), GFP_KERNEL);
if (!clk_data)
@@ -403,6 +420,10 @@ static int pmc_setup_clks(struct pci_dev *pdev, void __iomem *pmc_regmap,

clk_data->base = pmc_regmap; /* offset is added by client */
clk_data->clks = pmc_data->clks;
if (d) {
clk_data->critical = true;
pr_info("%s critclks quirk enabled\n", d->ident);
}

clkdev = platform_device_register_data(&pdev->dev, "clk-pmc-atom",
PLATFORM_DEVID_NONE,

@@ -130,6 +130,7 @@ static int meson_audio_arb_probe(struct platform_device *pdev)
arb->rstc.nr_resets = ARRAY_SIZE(axg_audio_arb_reset_bits);
arb->rstc.ops = &meson_audio_arb_rstc_ops;
arb->rstc.of_node = dev->of_node;
arb->rstc.owner = THIS_MODULE;

/*
* Enable general :

@@ -298,7 +298,7 @@ static int cros_ec_rtc_suspend(struct device *dev)
struct cros_ec_rtc *cros_ec_rtc = dev_get_drvdata(&pdev->dev);

if (device_may_wakeup(dev))
enable_irq_wake(cros_ec_rtc->cros_ec->irq);
return enable_irq_wake(cros_ec_rtc->cros_ec->irq);

return 0;
}
@@ -309,7 +309,7 @@ static int cros_ec_rtc_resume(struct device *dev)
struct cros_ec_rtc *cros_ec_rtc = dev_get_drvdata(&pdev->dev);

if (device_may_wakeup(dev))
disable_irq_wake(cros_ec_rtc->cros_ec->irq);
return disable_irq_wake(cros_ec_rtc->cros_ec->irq);

return 0;
}

@@ -480,6 +480,13 @@ static int da9063_rtc_probe(struct platform_device *pdev)
da9063_data_to_tm(data, &rtc->alarm_time, rtc);
rtc->rtc_sync = false;

/*
* TODO: some models have alarms on a minute boundary but still support
* real hardware interrupts. Add this once the core supports it.
*/
if (config->rtc_data_start != RTC_SEC)
rtc->rtc_dev->uie_unsupported = 1;

irq_alarm = platform_get_irq_byname(pdev, "ALARM");
ret = devm_request_threaded_irq(&pdev->dev, irq_alarm, NULL,
da9063_alarm_event,

@@ -377,7 +377,7 @@ static int sh_rtc_set_time(struct device *dev, struct rtc_time *tm)
static inline int sh_rtc_read_alarm_value(struct sh_rtc *rtc, int reg_off)
{
unsigned int byte;
int value = 0xff; /* return 0xff for ignored values */
int value = -1; /* return -1 for ignored values */

byte = readb(rtc->regbase + reg_off);
if (byte & AR_ENB) {

@@ -238,6 +238,7 @@ static struct {
{"NETAPP", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
{"LSI", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
{"ENGENIO", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
{"LENOVO", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
{"SMSC", "USB 2 HS-CF", NULL, BLIST_SPARSELUN | BLIST_INQUIRY_36},
{"SONY", "CD-ROM CDU-8001", NULL, BLIST_BORKEN},
{"SONY", "TSL", NULL, BLIST_FORCELUN}, /* DDS3 & DDS4 autoloaders */

@@ -75,6 +75,7 @@ static const struct scsi_dh_blist scsi_dh_blist[] = {
{"NETAPP", "INF-01-00", "rdac", },
{"LSI", "INF-01-00", "rdac", },
{"ENGENIO", "INF-01-00", "rdac", },
{"LENOVO", "DE_Series", "rdac", },
{NULL, NULL, NULL },
};

@@ -664,13 +664,22 @@ static void handle_sc_creation(struct vmbus_channel *new_sc)
static void handle_multichannel_storage(struct hv_device *device, int max_chns)
{
struct storvsc_device *stor_device;
int num_cpus = num_online_cpus();
int num_sc;
struct storvsc_cmd_request *request;
struct vstor_packet *vstor_packet;
int ret, t;

num_sc = ((max_chns > num_cpus) ? num_cpus : max_chns);
/*
* If the number of CPUs is artificially restricted, such as
* with maxcpus=1 on the kernel boot line, Hyper-V could offer
* sub-channels >= the number of CPUs. These sub-channels
* should not be created. The primary channel is already created
* and assigned to one CPU, so check against # CPUs - 1.
*/
num_sc = min((int)(num_online_cpus() - 1), max_chns);
if (!num_sc)
return;

stor_device = get_out_stor_device(device);
if (!stor_device)
return;

@@ -47,6 +47,8 @@
#define ADT7516_MSB_AIN3 0xA
#define ADT7516_MSB_AIN4 0xB
#define ADT7316_DA_DATA_BASE 0x10
#define ADT7316_DA_10_BIT_LSB_SHIFT 6
#define ADT7316_DA_12_BIT_LSB_SHIFT 4
#define ADT7316_DA_MSB_DATA_REGS 4
#define ADT7316_LSB_DAC_A 0x10
#define ADT7316_MSB_DAC_A 0x11
@@ -1086,7 +1088,7 @@ static ssize_t adt7316_store_DAC_internal_Vref(struct device *dev,
ldac_config = chip->ldac_config & (~ADT7516_DAC_IN_VREF_MASK);
if (data & 0x1)
ldac_config |= ADT7516_DAC_AB_IN_VREF;
else if (data & 0x2)
if (data & 0x2)
ldac_config |= ADT7516_DAC_CD_IN_VREF;
} else {
ret = kstrtou8(buf, 16, &data);
@@ -1408,7 +1410,7 @@ static IIO_DEVICE_ATTR(ex_analog_temp_offset, 0644,
static ssize_t adt7316_show_DAC(struct adt7316_chip_info *chip,
int channel, char *buf)
{
u16 data;
u16 data = 0;
u8 msb, lsb, offset;
int ret;

@@ -1433,7 +1435,11 @@ static ssize_t adt7316_show_DAC(struct adt7316_chip_info *chip,
if (ret)
return -EIO;

data = (msb << offset) + (lsb & ((1 << offset) - 1));
if (chip->dac_bits == 12)
data = lsb >> ADT7316_DA_12_BIT_LSB_SHIFT;
else if (chip->dac_bits == 10)
data = lsb >> ADT7316_DA_10_BIT_LSB_SHIFT;
data |= msb << offset;

return sprintf(buf, "%d\n", data);
}
@@ -1441,7 +1447,7 @@ static ssize_t adt7316_show_DAC(struct adt7316_chip_info *chip,
static ssize_t adt7316_store_DAC(struct adt7316_chip_info *chip,
int channel, const char *buf, size_t len)
{
u8 msb, lsb, offset;
u8 msb, lsb, lsb_reg, offset;
u16 data;
int ret;

@@ -1459,9 +1465,13 @@ static ssize_t adt7316_store_DAC(struct adt7316_chip_info *chip,
return -EINVAL;

if (chip->dac_bits > 8) {
lsb = data & (1 << offset);
lsb = data & ((1 << offset) - 1);
if (chip->dac_bits == 12)
lsb_reg = lsb << ADT7316_DA_12_BIT_LSB_SHIFT;
else
lsb_reg = lsb << ADT7316_DA_10_BIT_LSB_SHIFT;
ret = chip->bus.write(chip->bus.client,
ADT7316_DA_DATA_BASE + channel * 2, lsb);
ADT7316_DA_DATA_BASE + channel * 2, lsb_reg);
if (ret)
return -EIO;
}

@@ -473,11 +473,6 @@ static int usb_unbind_interface(struct device *dev)
pm_runtime_disable(dev);
pm_runtime_set_suspended(dev);

/* Undo any residual pm_autopm_get_interface_* calls */
for (r = atomic_read(&intf->pm_usage_cnt); r > 0; --r)
usb_autopm_put_interface_no_suspend(intf);
atomic_set(&intf->pm_usage_cnt, 0);

if (!error)
usb_autosuspend_device(udev);

@@ -1636,7 +1631,6 @@ void usb_autopm_put_interface(struct usb_interface *intf)
int status;

usb_mark_last_busy(udev);
atomic_dec(&intf->pm_usage_cnt);
status = pm_runtime_put_sync(&intf->dev);
dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",
__func__, atomic_read(&intf->dev.power.usage_count),
@@ -1665,7 +1659,6 @@ void usb_autopm_put_interface_async(struct usb_interface *intf)
int status;

usb_mark_last_busy(udev);
atomic_dec(&intf->pm_usage_cnt);
status = pm_runtime_put(&intf->dev);
dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",
__func__, atomic_read(&intf->dev.power.usage_count),
@@ -1687,7 +1680,6 @@ void usb_autopm_put_interface_no_suspend(struct usb_interface *intf)
struct usb_device *udev = interface_to_usbdev(intf);

usb_mark_last_busy(udev);
atomic_dec(&intf->pm_usage_cnt);
pm_runtime_put_noidle(&intf->dev);
}
EXPORT_SYMBOL_GPL(usb_autopm_put_interface_no_suspend);
@@ -1718,8 +1710,6 @@ int usb_autopm_get_interface(struct usb_interface *intf)
status = pm_runtime_get_sync(&intf->dev);
if (status < 0)
pm_runtime_put_sync(&intf->dev);
else
atomic_inc(&intf->pm_usage_cnt);
dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",
__func__, atomic_read(&intf->dev.power.usage_count),
status);
@@ -1753,8 +1743,6 @@ int usb_autopm_get_interface_async(struct usb_interface *intf)
status = pm_runtime_get(&intf->dev);
if (status < 0 && status != -EINPROGRESS)
pm_runtime_put_noidle(&intf->dev);
else
atomic_inc(&intf->pm_usage_cnt);
dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",
__func__, atomic_read(&intf->dev.power.usage_count),
status);
@@ -1778,7 +1766,6 @@ void usb_autopm_get_interface_no_resume(struct usb_interface *intf)
struct usb_device *udev = interface_to_usbdev(intf);

usb_mark_last_busy(udev);
atomic_inc(&intf->pm_usage_cnt);
pm_runtime_get_noresume(&intf->dev);
}
EXPORT_SYMBOL_GPL(usb_autopm_get_interface_no_resume);

@@ -820,9 +820,11 @@ int usb_string(struct usb_device *dev, int index, char *buf, size_t size)

if (dev->state == USB_STATE_SUSPENDED)
return -EHOSTUNREACH;
if (size <= 0 || !buf || !index)
if (size <= 0 || !buf)
return -EINVAL;
buf[0] = 0;
if (index <= 0 || index >= 256)
return -EINVAL;
tbuf = kmalloc(256, GFP_NOIO);
if (!tbuf)
return -ENOMEM;

@@ -979,8 +979,18 @@ static int dummy_udc_start(struct usb_gadget *g,
struct dummy_hcd *dum_hcd = gadget_to_dummy_hcd(g);
struct dummy *dum = dum_hcd->dum;

if (driver->max_speed == USB_SPEED_UNKNOWN)
switch (g->speed) {
/* All the speeds we support */
case USB_SPEED_LOW:
case USB_SPEED_FULL:
case USB_SPEED_HIGH:
case USB_SPEED_SUPER:
break;
default:
dev_err(dummy_dev(dum_hcd), "Unsupported driver max speed %d\n",
driver->max_speed);
return -EINVAL;
}

/*
* SLAVE side init ... the layer above hardware, which
@@ -1784,9 +1794,10 @@ static void dummy_timer(struct timer_list *t)
/* Bus speed is 500000 bytes/ms, so use a little less */
total = 490000;
break;
default:
default: /* Can't happen */
dev_err(dummy_dev(dum_hcd), "bogus device speed\n");
return;
total = 0;
break;
}

/* FIXME if HZ != 1000 this will probably misbehave ... */
@@ -1828,7 +1839,7 @@ static void dummy_timer(struct timer_list *t)

/* Used up this frame's bandwidth? */
if (total <= 0)
break;
continue;

/* find the gadget's ep for this request (if configured) */
address = usb_pipeendpoint (urb->pipe);

@@ -314,6 +314,7 @@ static void yurex_disconnect(struct usb_interface *interface)
usb_deregister_dev(interface, &yurex_class);

/* prevent more I/O from starting */
usb_poison_urb(dev->urb);
mutex_lock(&dev->io_mutex);
dev->interface = NULL;
mutex_unlock(&dev->io_mutex);

@@ -763,18 +763,16 @@ static void rts51x_suspend_timer_fn(struct timer_list *t)
break;
case RTS51X_STAT_IDLE:
case RTS51X_STAT_SS:
usb_stor_dbg(us, "RTS51X_STAT_SS, intf->pm_usage_cnt:%d, power.usage:%d\n",
atomic_read(&us->pusb_intf->pm_usage_cnt),
usb_stor_dbg(us, "RTS51X_STAT_SS, power.usage:%d\n",
atomic_read(&us->pusb_intf->dev.power.usage_count));

if (atomic_read(&us->pusb_intf->pm_usage_cnt) > 0) {
if (atomic_read(&us->pusb_intf->dev.power.usage_count) > 0) {
usb_stor_dbg(us, "Ready to enter SS state\n");
rts51x_set_stat(chip, RTS51X_STAT_SS);
/* ignore mass storage interface's children */
pm_suspend_ignore_children(&us->pusb_intf->dev, true);
usb_autopm_put_interface_async(us->pusb_intf);
usb_stor_dbg(us, "RTS51X_STAT_SS 01, intf->pm_usage_cnt:%d, power.usage:%d\n",
atomic_read(&us->pusb_intf->pm_usage_cnt),
usb_stor_dbg(us, "RTS51X_STAT_SS 01, power.usage:%d\n",
atomic_read(&us->pusb_intf->dev.power.usage_count));
}
break;
@@ -807,11 +805,10 @@ static void rts51x_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
int ret;

if (working_scsi(srb)) {
usb_stor_dbg(us, "working scsi, intf->pm_usage_cnt:%d, power.usage:%d\n",
atomic_read(&us->pusb_intf->pm_usage_cnt),
usb_stor_dbg(us, "working scsi, power.usage:%d\n",
atomic_read(&us->pusb_intf->dev.power.usage_count));

if (atomic_read(&us->pusb_intf->pm_usage_cnt) <= 0) {
if (atomic_read(&us->pusb_intf->dev.power.usage_count) <= 0) {
ret = usb_autopm_get_interface(us->pusb_intf);
usb_stor_dbg(us, "working scsi, ret=%d\n", ret);
}

@@ -361,16 +361,10 @@ static int get_pipe(struct stub_device *sdev, struct usbip_header *pdu)
}

if (usb_endpoint_xfer_isoc(epd)) {
/* validate packet size and number of packets */
unsigned int maxp, packets, bytes;

maxp = usb_endpoint_maxp(epd);
maxp *= usb_endpoint_maxp_mult(epd);
bytes = pdu->u.cmd_submit.transfer_buffer_length;
packets = DIV_ROUND_UP(bytes, maxp);

/* validate number of packets */
if (pdu->u.cmd_submit.number_of_packets < 0 ||
pdu->u.cmd_submit.number_of_packets > packets) {
pdu->u.cmd_submit.number_of_packets >
USBIP_MAX_ISO_PACKETS) {
dev_err(&sdev->udev->dev,
"CMD_SUBMIT: isoc invalid num packets %d\n",
pdu->u.cmd_submit.number_of_packets);

@@ -121,6 +121,13 @@ extern struct device_attribute dev_attr_usbip_debug;
#define USBIP_DIR_OUT 0x00
#define USBIP_DIR_IN 0x01

/*
* Arbitrary limit for the maximum number of isochronous packets in an URB,
* compare for example the uhci_submit_isochronous function in
* drivers/usb/host/uhci-q.c
*/
#define USBIP_MAX_ISO_PACKETS 1024

/**
* struct usbip_header_basic - data pertinent to every request
* @command: the usbip request type

@@ -1443,11 +1443,11 @@ static void __init vfio_pci_fill_ids(void)
rc = pci_add_dynid(&vfio_pci_driver, vendor, device,
subvendor, subdevice, class, class_mask, 0);
if (rc)
pr_warn("failed to add dynamic id [%04hx:%04hx[%04hx:%04hx]] class %#08x/%08x (%d)\n",
pr_warn("failed to add dynamic id [%04x:%04x[%04x:%04x]] class %#08x/%08x (%d)\n",
vendor, device, subvendor, subdevice,
class, class_mask, rc);
else
pr_info("add [%04hx:%04hx[%04hx:%04hx]] class %#08x/%08x\n",
pr_info("add [%04x:%04x[%04x:%04x]] class %#08x/%08x\n",
vendor, device, subvendor, subdevice,
class, class_mask);
}

@@ -1016,15 +1016,15 @@ static int ds_probe(struct usb_interface *intf,
/* alternative 3, 1ms interrupt (greatly speeds search), 64 byte bulk */
alt = 3;
err = usb_set_interface(dev->udev,
intf->altsetting[alt].desc.bInterfaceNumber, alt);
intf->cur_altsetting->desc.bInterfaceNumber, alt);
if (err) {
dev_err(&dev->udev->dev, "Failed to set alternative setting %d "
"for %d interface: err=%d.\n", alt,
intf->altsetting[alt].desc.bInterfaceNumber, err);
intf->cur_altsetting->desc.bInterfaceNumber, err);
goto err_out_clear;
}

iface_desc = &intf->altsetting[alt];
iface_desc = intf->cur_altsetting;
if (iface_desc->desc.bNumEndpoints != NUM_EP-1) {
pr_info("Num endpoints=%d. It is not DS9490R.\n",
iface_desc->desc.bNumEndpoints);

@@ -622,9 +622,7 @@ static int xenbus_file_open(struct inode *inode, struct file *filp)
if (xen_store_evtchn == 0)
return -ENOENT;

nonseekable_open(inode, filp);

filp->f_mode &= ~FMODE_ATOMIC_POS; /* cdev-style semantics */
stream_open(inode, filp);

u = kzalloc(sizeof(*u), GFP_KERNEL);
if (u == NULL)

@@ -163,19 +163,24 @@ static int debugfs_show_options(struct seq_file *m, struct dentry *root)
return 0;
}

static void debugfs_evict_inode(struct inode *inode)
static void debugfs_i_callback(struct rcu_head *head)
{
truncate_inode_pages_final(&inode->i_data);
clear_inode(inode);
struct inode *inode = container_of(head, struct inode, i_rcu);
if (S_ISLNK(inode->i_mode))
kfree(inode->i_link);
free_inode_nonrcu(inode);
}

static void debugfs_destroy_inode(struct inode *inode)
{
call_rcu(&inode->i_rcu, debugfs_i_callback);
}

static const struct super_operations debugfs_super_operations = {
.statfs = simple_statfs,
.remount_fs = debugfs_remount,
.show_options = debugfs_show_options,
.evict_inode = debugfs_evict_inode,
.destroy_inode = debugfs_destroy_inode,
};

static void debugfs_release_dentry(struct dentry *dentry)

@@ -741,11 +741,17 @@ static struct inode *hugetlbfs_get_inode(struct super_block *sb,
umode_t mode, dev_t dev)
{
struct inode *inode;
struct resv_map *resv_map;
struct resv_map *resv_map = NULL;

/*
* Reserve maps are only needed for inodes that can have associated
* page allocations.
*/
if (S_ISREG(mode) || S_ISLNK(mode)) {
resv_map = resv_map_alloc();
if (!resv_map)
return NULL;
}

inode = new_inode(sb);
if (inode) {
@@ -780,8 +786,10 @@ static struct inode *hugetlbfs_get_inode(struct super_block *sb,
break;
}
lockdep_annotate_inode_mutex_key(inode);
} else
} else {
if (resv_map)
kref_put(&resv_map->refs, resv_map_release);
}

return inode;
}

@@ -1414,11 +1414,6 @@ void jffs2_do_clear_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f)

jffs2_kill_fragtree(&f->fragtree, deleted?c:NULL);

if (f->target) {
kfree(f->target);
f->target = NULL;
}

fds = f->dents;
while(fds) {
fd = fds;

@@ -47,7 +47,10 @@ static struct inode *jffs2_alloc_inode(struct super_block *sb)
static void jffs2_i_callback(struct rcu_head *head)
{
struct inode *inode = container_of(head, struct inode, i_rcu);
kmem_cache_free(jffs2_inode_cachep, JFFS2_INODE_INFO(inode));
struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);

kfree(f->target);
kmem_cache_free(jffs2_inode_cachep, f);
}

static void jffs2_destroy_inode(struct inode *inode)

fs/open.c
@@ -1227,3 +1227,21 @@ int nonseekable_open(struct inode *inode, struct file *filp)
}

EXPORT_SYMBOL(nonseekable_open);

/*
* stream_open is used by subsystems that want stream-like file descriptors.
* Such file descriptors are not seekable and don't have notion of position
* (file.f_pos is always 0). Contrary to file descriptors of other regular
* files, .read() and .write() can run simultaneously.
*
* stream_open never fails and is marked to return int so that it could be
* directly used as file_operations.open .
*/
int stream_open(struct inode *inode, struct file *filp)
{
filp->f_mode &= ~(FMODE_LSEEK | FMODE_PREAD | FMODE_PWRITE | FMODE_ATOMIC_POS);
filp->f_mode |= FMODE_STREAM;
return 0;
}

EXPORT_SYMBOL(stream_open);

@@ -564,11 +564,12 @@ EXPORT_SYMBOL(vfs_write);

static inline loff_t file_pos_read(struct file *file)
{
return file->f_pos;
return file->f_mode & FMODE_STREAM ? 0 : file->f_pos;
}

static inline void file_pos_write(struct file *file, loff_t pos)
{
if ((file->f_mode & FMODE_STREAM) == 0)
file->f_pos = pos;
}

@@ -153,6 +153,9 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
#define FMODE_OPENED ((__force fmode_t)0x80000)
#define FMODE_CREATED ((__force fmode_t)0x100000)

/* File is stream-like */
#define FMODE_STREAM ((__force fmode_t)0x200000)

/* File was opened by fanotify and shouldn't generate fanotify events */
#define FMODE_NONOTIFY ((__force fmode_t)0x4000000)

@@ -3043,6 +3046,7 @@ extern loff_t no_seek_end_llseek_size(struct file *, loff_t, int, loff_t);
extern loff_t no_seek_end_llseek(struct file *, loff_t, int);
extern int generic_file_open(struct inode * inode, struct file * filp);
extern int nonseekable_open(struct inode * inode, struct file * filp);
extern int stream_open(struct inode * inode, struct file * filp);

#ifdef CONFIG_BLOCK
typedef void (dio_submit_t)(struct bio *bio, struct inode *inode,

@@ -333,6 +333,7 @@ struct i2c_client {
char name[I2C_NAME_SIZE];
struct i2c_adapter *adapter; /* the adapter we sit on */
struct device dev; /* the device structure */
int init_irq; /* irq set at initialization */
int irq; /* irq issued by device */
struct list_head detected;
#if IS_ENABLED(CONFIG_I2C_SLAVE)

@@ -35,10 +35,13 @@ struct pmc_clk {
*
* @base: PMC clock register base offset
* @clks: pointer to set of registered clocks, typically 0..5
* @critical: flag to indicate if firmware enabled pmc_plt_clks
* should be marked as critial or not
*/
struct pmc_clk_data {
void __iomem *base;
const struct pmc_clk *clks;
bool critical;
};

#endif /* __PLATFORM_DATA_X86_CLK_PMC_ATOM_H */

@@ -200,7 +200,6 @@ usb_find_last_int_out_endpoint(struct usb_host_interface *alt,
* @dev: driver model's view of this device
* @usb_dev: if an interface is bound to the USB major, this will point
* to the sysfs representation for that device.
* @pm_usage_cnt: PM usage counter for this interface
* @reset_ws: Used for scheduling resets from atomic context.
* @resetting_device: USB core reset the device, so use alt setting 0 as
* current; needs bandwidth alloc after reset.
@@ -257,7 +256,6 @@ struct usb_interface {

struct device dev; /* interface specific device info */
struct device *usb_dev;
atomic_t pm_usage_cnt; /* usage counter for autosuspend */
struct work_struct reset_ws; /* for resets in atomic context */
};
#define to_usb_interface(d) container_of(d, struct usb_interface, dev)

@@ -1373,6 +1373,7 @@ static void scan_block(void *_start, void *_end,
/*
* Scan a large memory block in MAX_SCAN_SIZE chunks to reduce the latency.
*/
#ifdef CONFIG_SMP
static void scan_large_block(void *start, void *end)
{
void *next;
@@ -1384,6 +1385,7 @@ static void scan_large_block(void *start, void *end)
cond_resched();
}
}
#endif

/*
* Scan a memory block corresponding to a kmemleak_object. A condition is
@@ -1501,11 +1503,6 @@ static void kmemleak_scan(void)
}
rcu_read_unlock();

/* data/bss scanning */
scan_large_block(_sdata, _edata);
scan_large_block(__bss_start, __bss_stop);
scan_large_block(__start_ro_after_init, __end_ro_after_init);

#ifdef CONFIG_SMP
/* per-cpu sections scanning */
for_each_possible_cpu(i)
@@ -2036,6 +2033,17 @@ void __init kmemleak_init(void)
}
local_irq_restore(flags);

/* register the data/bss sections */
create_object((unsigned long)_sdata, _edata - _sdata,
KMEMLEAK_GREY, GFP_ATOMIC);
create_object((unsigned long)__bss_start, __bss_stop - __bss_start,
KMEMLEAK_GREY, GFP_ATOMIC);
/* only register .data..ro_after_init if not within .data */
if (__start_ro_after_init < _sdata || __end_ro_after_init > _edata)
create_object((unsigned long)__start_ro_after_init,
__end_ro_after_init - __start_ro_after_init,
KMEMLEAK_GREY, GFP_ATOMIC);

/*
* This is the point where tracking allocations is safe. Automatic
* scanning is started during the late initcall. Add the early logged

@@ -104,8 +104,10 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)

ret = cfg80211_get_station(real_netdev, neigh->addr, &sinfo);

if (!ret) {
/* free the TID stats immediately */
cfg80211_sinfo_release_content(&sinfo);
}

dev_put(real_netdev);
if (ret == -ENOENT) {

@@ -803,6 +803,8 @@ static void batadv_bla_del_claim(struct batadv_priv *bat_priv,
const u8 *mac, const unsigned short vid)
{
struct batadv_bla_claim search_claim, *claim;
struct batadv_bla_claim *claim_removed_entry;
struct hlist_node *claim_removed_node;

ether_addr_copy(search_claim.addr, mac);
search_claim.vid = vid;
@@ -813,10 +815,18 @@ static void batadv_bla_del_claim(struct batadv_priv *bat_priv,
batadv_dbg(BATADV_DBG_BLA, bat_priv, "%s(): %pM, vid %d\n", __func__,
mac, batadv_print_vid(vid));

batadv_hash_remove(bat_priv->bla.claim_hash, batadv_compare_claim,
claim_removed_node = batadv_hash_remove(bat_priv->bla.claim_hash,
batadv_compare_claim,
batadv_choose_claim, claim);
batadv_claim_put(claim); /* reference from the hash is gone */
if (!claim_removed_node)
goto free_claim;

/* reference from the hash is gone */
claim_removed_entry = hlist_entry(claim_removed_node,
struct batadv_bla_claim, hash_entry);
batadv_claim_put(claim_removed_entry);

free_claim:
/* don't need the reference from hash_find() anymore */
batadv_claim_put(claim);
}

@@ -616,14 +616,26 @@ static void batadv_tt_global_free(struct batadv_priv *bat_priv,
struct batadv_tt_global_entry *tt_global,
const char *message)
{
struct batadv_tt_global_entry *tt_removed_entry;
struct hlist_node *tt_removed_node;

batadv_dbg(BATADV_DBG_TT, bat_priv,
"Deleting global tt entry %pM (vid: %d): %s\n",
tt_global->common.addr,
batadv_print_vid(tt_global->common.vid), message);

batadv_hash_remove(bat_priv->tt.global_hash, batadv_compare_tt,
batadv_choose_tt, &tt_global->common);
batadv_tt_global_entry_put(tt_global);
tt_removed_node = batadv_hash_remove(bat_priv->tt.global_hash,
batadv_compare_tt,
batadv_choose_tt,
&tt_global->common);
if (!tt_removed_node)
return;

/* drop reference of remove hash entry */
tt_removed_entry = hlist_entry(tt_removed_node,
struct batadv_tt_global_entry,
common.hash_entry);
batadv_tt_global_entry_put(tt_removed_entry);
}

/**
@@ -1332,9 +1344,10 @@ u16 batadv_tt_local_remove(struct batadv_priv *bat_priv, const u8 *addr,
unsigned short vid, const char *message,
bool roaming)
{
struct batadv_tt_local_entry *tt_removed_entry;
struct batadv_tt_local_entry *tt_local_entry;
u16 flags, curr_flags = BATADV_NO_FLAGS;
void *tt_entry_exists;
struct hlist_node *tt_removed_node;

tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr, vid);
if (!tt_local_entry)
@@ -1363,15 +1376,18 @@ u16 batadv_tt_local_remove(struct batadv_priv *bat_priv, const u8 *addr,
*/
batadv_tt_local_event(bat_priv, tt_local_entry, BATADV_TT_CLIENT_DEL);

tt_entry_exists = batadv_hash_remove(bat_priv->tt.local_hash,
tt_removed_node = batadv_hash_remove(bat_priv->tt.local_hash,
batadv_compare_tt,
batadv_choose_tt,
&tt_local_entry->common);
if (!tt_entry_exists)
if (!tt_removed_node)
goto out;

/* extra call to free the local tt entry */
batadv_tt_local_entry_put(tt_local_entry);
/* drop reference of remove hash entry */
tt_removed_entry = hlist_entry(tt_removed_node,
struct batadv_tt_local_entry,
common.hash_entry);
batadv_tt_local_entry_put(tt_removed_entry);

out:
if (tt_local_entry)

@@ -838,7 +838,7 @@ void ieee80211_debugfs_rename_netdev(struct ieee80211_sub_if_data *sdata)

dir = sdata->vif.debugfs_dir;

if (!dir)
if (IS_ERR_OR_NULL(dir))
return;

sprintf(buf, "netdev:%s", sdata->name);