Merge tag 'ASB-2024-06-05_4.19-stable' of https://android.googlesource.com/kernel/common into android13-4.19-kona
https://source.android.com/docs/security/bulletin/2024-06-01
CVE-2024-26926

* tag 'ASB-2024-06-05_4.19-stable' of https://android.googlesource.com/kernel/common:
  BACKPORT: net: fix __dst_negative_advice() race
  Linux 4.19.315
  docs: kernel_include.py: Cope with docutils 0.21
  serial: kgdboc: Fix NMI-safety problems from keyboard reset code
  tracing: Remove unnecessary var_ref destroy in track_data_destroy()
  tracing: Generalize hist trigger onmax and save action
  tracing: Split up onmatch action data
  tracing: Refactor hist trigger action code
  tracing: Have the historgram use the result of str_has_prefix() for len of prefix
  tracing: Use str_has_prefix() instead of using fixed sizes
  tracing: Use str_has_prefix() helper for histogram code
  string.h: Add str_has_prefix() helper function
  tracing: Consolidate trace_add/remove_event_call back to the nolock functions
  tracing: Remove unneeded synth_event_mutex
  tracing: Use dyn_event framework for synthetic events
  tracing: Add unified dynamic event framework
  tracing: Simplify creation and deletion of synthetic events
  btrfs: add missing mutex_unlock in btrfs_relocate_sys_chunks()
  dm: limit the number of targets and parameter size area
  Revert "selftests: mm: fix map_hugetlb failure on 64K page size systems"
  Linux 4.19.314
  af_unix: Suppress false-positive lockdep splat for spin_lock() in __unix_gc().
  net: fix out-of-bounds access in ops_init
  drm/vmwgfx: Fix invalid reads in fence signaled events
  dyndbg: fix old BUG_ON in >control parser
  tipc: fix UAF in error path
  usb: gadget: f_fs: Fix a race condition when processing setup packets.
  usb: gadget: composite: fix OS descriptors w_value logic
  firewire: nosy: ensure user_length is taken into account when fetching packet contents
  af_unix: Fix garbage collector racing against connect()
  af_unix: Do not use atomic ops for unix_sk(sk)->inflight.
  ipv6: fib6_rules: avoid possible NULL dereference in fib6_rule_action()
  net: bridge: fix corrupted ethernet header on multicast-to-unicast
  phonet: fix rtm_phonet_notify() skb allocation
  rtnetlink: Correct nested IFLA_VF_VLAN_LIST attribute validation
  Bluetooth: l2cap: fix null-ptr-deref in l2cap_chan_timeout
  Bluetooth: Fix use-after-free bugs caused by sco_sock_timeout
  tcp: Use refcount_inc_not_zero() in tcp_twsk_unique().
  tcp: defer shutdown(SEND_SHUTDOWN) for TCP_SYN_RECV sockets
  tcp: remove redundant check on tskb
  net:usb:qmi_wwan: support Rolling modules
  fs/9p: drop inodes immediately on non-.L too
  gpio: crystalcove: Use -ENOTSUPP consistently
  gpio: wcove: Use -ENOTSUPP consistently
  9p: explicitly deny setlease attempts
  fs/9p: translate O_TRUNC into OTRUNC
  fs/9p: only translate RWX permissions for plain 9P2000
  selftests: timers: Fix valid-adjtimex signed left-shift undefined behavior
  scsi: target: Fix SELinux error when systemd-modules loads the target module
  btrfs: always clear PERTRANS metadata during commit
  btrfs: make btrfs_clear_delalloc_extent() free delalloc reserve
  tools/power turbostat: Fix Bzy_MHz documentation typo
  tools/power turbostat: Fix added raw MSR output
  firewire: ohci: mask bus reset interrupts between ISR and bottom half
  ata: sata_gemini: Check clk_enable() result
  net: bcmgenet: Reset RBUF on first open
  ALSA: line6: Zero-initialize message buffers
  scsi: bnx2fc: Remove spin_lock_bh while releasing resources after upload
  net: mark racy access on sk->sk_rcvbuf
  wifi: mac80211: fix ieee80211_bss_*_flags kernel-doc
  gfs2: Fix invalid metadata access in punch_hole
  scsi: lpfc: Update lpfc_ramp_down_queue_handler() logic
  tipc: fix a possible memleak in tipc_buf_append
  net: bridge: fix multicast-to-unicast with fraglist GSO
  net: dsa: mv88e6xxx: Fix number of databases for 88E6141 / 88E6341
  net: dsa: mv88e6xxx: Add number of MACs in the ATU
  net l2tp: drop flow hash on forward
  nsh: Restore skb->{protocol,data,mac_header} for outer header in nsh_gso_segment().
  bna: ensure the copied buf is NUL terminated
  s390/mm: Fix clearing storage keys for huge pages
  s390/mm: Fix storage key clearing for guest huge pages
  pinctrl: devicetree: fix refcount leak in pinctrl_dt_to_map()
  power: rt9455: hide unused rt9455_boost_voltage_values
  pinctrl: core: delete incorrect free in pinctrl_enable()
  ethernet: Add helper for assigning packet type when dest address does not match device address
  ethernet: add a helper for assigning port addresses
  net: slightly optimize eth_type_trans
  drm/amdgpu: Fix leak when GPU memory allocation fails
  drm/amdkfd: change system memory overcommit limit
  wifi: nl80211: don't free NULL coalescing rule
  dmaengine: Revert "dmaengine: pl330: issue_pending waits until WFP state"
  dmaengine: pl330: issue_pending waits until WFP state
  Linux 4.19.313
  serial: core: fix kernel-doc for uart_port_unlock_irqrestore()
  udp: preserve the connected status if only UDP cmsg
  Revert "y2038: rusage: use __kernel_old_timeval"
  Revert "loop: Remove sector_t truncation checks"
  HID: i2c-hid: remove I2C_HID_READ_PENDING flag to prevent lock-up
  i2c: smbus: fix NULL function pointer dereference
  idma64: Don't try to serve interrupts when device is powered off
  dmaengine: owl: fix register access functions
  tcp: Fix NEW_SYN_RECV handling in inet_twsk_purge()
  tcp: Clean up kernel listener's reqsk in inet_twsk_purge()
  mtd: diskonchip: work around ubsan link failure
  stackdepot: respect __GFP_NOLOCKDEP allocation flag
  net: b44: set pause params only when interface is up
  irqchip/gic-v3-its: Prevent double free on error
  arm64: dts: rockchip: enable internal pull-up for Q7_THRM# on RK3399 Puma
  btrfs: fix information leak in btrfs_ioctl_logical_to_ino()
  Bluetooth: Fix type of len in {l2cap,sco}_sock_getsockopt_old()
  tracing: Increase PERF_MAX_TRACE_SIZE to handle Sentinel1 and docker together
  tracing: Show size of requested perf buffer
  Revert "crypto: api - Disallow identical driver names"
  drm/amdgpu: validate the parameters of bo mapping operations more clearly
  amdgpu: validate offset_in_bo of drm_amdgpu_gem_va
  drm/amdgpu: restrict bo mapping within gpu address limits
  serial: mxs-auart: add spinlock around changing cts state
  serial: core: Provide port lock wrappers
  i40e: Do not use WQ_MEM_RECLAIM flag for workqueue
  net: openvswitch: Fix Use-After-Free in ovs_ct_exit
  net: openvswitch: ovs_ct_exit to be done under ovs_lock
  ipvs: Fix checksumming on GSO of SCTP packets
  net: gtp: Fix Use-After-Free in gtp_dellink
  net: usb: ax88179_178a: stop lying about skb->truesize
  NFC: trf7970a: disable all regulators on removal
  mlxsw: core: Unregister EMAD trap using FORWARD action
  vxlan: drop packets from invalid src-address
  ARC: [plat-hsdk]: Remove misplaced interrupt-cells property
  arm64: dts: mediatek: mt7622: drop "reset-names" from thermal block
  arm64: dts: mediatek: mt7622: fix ethernet controller "compatible"
  arm64: dts: mediatek: mt7622: fix IR nodename
  arm64: dts: rockchip: enable internal pull-up on PCIE_WAKE# for RK3399 Puma
  arm64: dts: rockchip: fix alphabetical ordering RK3399 puma
  tracing: Use var_refs[] for hist trigger reference checking
  tracing: Remove hist trigger synth_var_refs
  nilfs2: fix OOB in nilfs_set_de_type
  nouveau: fix instmem race condition around ptr stores
  fs: sysfs: Fix reference leak in sysfs_break_active_protection()
  speakup: Avoid crash on very long word
  usb: dwc2: host: Fix dereference issue in DDMA completion flow.
  Revert "usb: cdc-wdm: close race between read and workqueue"
  USB: serial: option: add Telit FN920C04 rmnet compositions
  USB: serial: option: add Rolling RW101-GL and RW135-GL support
  USB: serial: option: support Quectel EM060K sub-models
  USB: serial: option: add Lonsung U8300/U9300 product
  USB: serial: option: add support for Fibocom FM650/FG650
  USB: serial: option: add Fibocom FM135-GL variants
  serial/pmac_zilog: Remove flawed mitigation for rx irq flood
  comedi: vmk80xx: fix incomplete endpoint checking
  drm: nv04: Fix out of bounds access
  RDMA/mlx5: Fix port number for counter query in multi-port configuration
  tun: limit printing rate when illegal packet received by tun dev
  netfilter: nf_tables: Fix potential data-race in __nft_expr_type_get()
  netfilter: nf_tables: __nft_expr_type_get() selects specific family type
  Revert "tracing/trigger: Fix to return error if failed to alloc snapshot"
  kprobes: Fix possible use-after-free issue on kprobe registration
  selftests/ftrace: Limit length in subsystem-enable tests
  btrfs: record delayed inode root in transaction
  x86/apic: Force native_apic_mem_read() to use the MOV instruction
  selftests: timers: Fix abs() warning in posix_timers test
  vhost: Add smp_rmb() in vhost_vq_avail_empty()
  tracing: hide unused ftrace_event_id_fops
  net/mlx5: Properly link new fs rules into the tree
  ipv6: fix race condition between ipv6_get_ifaddr and ipv6_del_addr
  ipv4/route: avoid unused-but-set-variable warning
  ipv6: fib: hide unused 'pn' variable
  geneve: fix header validation in geneve[6]_xmit_skb
  nouveau: fix function cast warning
  Bluetooth: Fix memory leak in hci_req_sync_complete()
  batman-adv: Avoid infinite loop trying to resize local TT

Conflicts:
	drivers/net/usb/ax88179_178a.c

Change-Id: I73f07cafe3403d98dad2e4a8b34f89cfbd49818c
@@ -94,7 +94,6 @@ class KernelInclude(Include):
         # HINT: this is the only line I had to change / commented out:
         #path = utils.relative_path(None, path)
 
-        path = nodes.reprunicode(path)
         encoding = self.options.get(
             'encoding', self.state.document.settings.input_encoding)
         e_handler=self.state.document.settings.input_encoding_error_handler
Makefile:
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 312
+SUBLEVEL = 315
 EXTRAVERSION =
 NAME = "People's Front"
@@ -964,7 +964,7 @@ put_tv32(struct timeval32 __user *o, struct timespec64 *i)
 }
 
 static inline long
-put_tv_to_tv32(struct timeval32 __user *o, struct __kernel_old_timeval *i)
+put_tv_to_tv32(struct timeval32 __user *o, struct timeval *i)
 {
 	return copy_to_user(o, &(struct timeval32){
 		.tv_sec = i->tv_sec,
@@ -170,7 +170,6 @@
 		};
 
 		gmac: ethernet@8000 {
-			#interrupt-cells = <1>;
 			compatible = "snps,dwmac";
 			reg = <0x8000 0x2000>;
 			interrupts = <10>;
@@ -232,7 +232,7 @@
 			clock-names = "hif_sel";
 		};
 
-		cir: cir@10009000 {
+		cir: ir-receiver@10009000 {
 			compatible = "mediatek,mt7622-cir";
 			reg = <0 0x10009000 0 0x1000>;
 			interrupts = <GIC_SPI 175 IRQ_TYPE_LEVEL_LOW>;
@@ -459,7 +459,6 @@
 				 <&pericfg CLK_PERI_AUXADC_PD>;
 			clock-names = "therm", "auxadc";
 			resets = <&pericfg MT7622_PERI_THERM_SW_RST>;
-			reset-names = "therm";
 			mediatek,auxadc = <&auxadc>;
 			mediatek,apmixedsys = <&apmixedsys>;
 			nvmem-cells = <&thermal_calibration>;
@@ -846,9 +845,7 @@
 		};
 
 		eth: ethernet@1b100000 {
-			compatible = "mediatek,mt7622-eth",
-				     "mediatek,mt2701-eth",
-				     "syscon";
+			compatible = "mediatek,mt7622-eth";
 			reg = <0 0x1b100000 0 0x20000>;
 			interrupts = <GIC_SPI 223 IRQ_TYPE_LEVEL_LOW>,
 				     <GIC_SPI 224 IRQ_TYPE_LEVEL_LOW>,
@@ -426,16 +426,22 @@
 	gpio1830-supply = <&vcc_1v8>;
 };
 
-&pmu_io_domains {
-	status = "okay";
-	pmu1830-supply = <&vcc_1v8>;
-};
-
-&pwm2 {
-	status = "okay";
+&pcie_clkreqn_cpm {
+	rockchip,pins =
+		<2 RK_PD2 RK_FUNC_GPIO &pcfg_pull_up>;
 };
 
 &pinctrl {
+	pinctrl-names = "default";
+	pinctrl-0 = <&q7_thermal_pin>;
+
+	gpios {
+		q7_thermal_pin: q7-thermal-pin {
+			rockchip,pins =
+				<0 RK_PA3 RK_FUNC_GPIO &pcfg_pull_up>;
+		};
+	};
+
 	i2c8 {
 		i2c8_xfer_a: i2c8-xfer {
 			rockchip,pins =
@@ -466,6 +472,15 @@
 	};
 };
 
+&pmu_io_domains {
+	status = "okay";
+	pmu1830-supply = <&vcc_1v8>;
+};
+
+&pwm2 {
+	status = "okay";
+};
+
 &sdhci {
 	/*
 	 * Signal integrity isn't great at 200MHz but 100MHz has proven stable
@@ -2583,7 +2583,7 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
 		return 0;
 
 	start = pmd_val(*pmd) & HPAGE_MASK;
-	end = start + HPAGE_SIZE - 1;
+	end = start + HPAGE_SIZE;
 	__storage_key_init_range(start, end);
 	set_bit(PG_arch_1, &page->flags);
 	return 0;
@@ -146,7 +146,7 @@ static void clear_huge_pte_skeys(struct mm_struct *mm, unsigned long rste)
 	}
 
 	if (!test_and_set_bit(PG_arch_1, &page->flags))
-		__storage_key_init_range(paddr, paddr + size - 1);
+		__storage_key_init_range(paddr, paddr + size);
 }
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
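The two s390 hunks above are the same off-by-one: the range helper treats its end argument as exclusive, so passing `start + HPAGE_SIZE - 1` silently drops the last 4K block of every huge page. A userspace sketch of the difference (the block/huge-page sizes and `blocks_covered` helper are illustrative, not the s390 internals):

```c
#include <assert.h>

#define BLOCK_SIZE 4096UL
#define HPAGE_SIZE (1UL << 20)   /* 1 MiB "huge page" for the sketch */

/* Number of whole storage blocks an exclusive-end range [start, end)
 * covers -- the contract __storage_key_init_range() expects. */
static unsigned long blocks_covered(unsigned long start, unsigned long end)
{
    return (end - start) / BLOCK_SIZE;
}
```

With an inclusive end (`start + HPAGE_SIZE - 1`) the division rounds down and one block goes uninitialized, which is exactly what the fix removes.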
@@ -11,6 +11,7 @@
 #include <asm/mpspec.h>
 #include <asm/msr.h>
 #include <asm/hardirq.h>
+#include <asm/io.h>
 
 #define ARCH_APICTIMER_STOPS_ON_C3 1
 
@@ -110,7 +111,7 @@ static inline void native_apic_mem_write(u32 reg, u32 v)
 
 static inline u32 native_apic_mem_read(u32 reg)
 {
-	return *((volatile u32 *)(APIC_BASE + reg));
+	return readl((void __iomem *)(APIC_BASE + reg));
 }
 
 extern void native_apic_wait_icr_idle(void);
@@ -236,7 +236,6 @@ static struct crypto_larval *__crypto_register_alg(struct crypto_alg *alg)
 		}
 
 		if (!strcmp(q->cra_driver_name, alg->cra_name) ||
-		    !strcmp(q->cra_driver_name, alg->cra_driver_name) ||
 		    !strcmp(q->cra_name, alg->cra_driver_name))
 			goto err;
 	}
@@ -200,7 +200,10 @@ int gemini_sata_start_bridge(struct sata_gemini *sg, unsigned int bridge)
 		pclk = sg->sata0_pclk;
 	else
 		pclk = sg->sata1_pclk;
-	clk_enable(pclk);
+	ret = clk_enable(pclk);
+	if (ret)
+		return ret;
+
 	msleep(10);
 
 	/* Do not keep clocking a bridge that is not online */
@@ -172,6 +172,10 @@ static irqreturn_t idma64_irq(int irq, void *dev)
 	u32 status_err;
 	unsigned short i;
 
+	/* Since IRQ may be shared, check if DMA controller is powered on */
+	if (status == GENMASK(31, 0))
+		return IRQ_NONE;
+
 	dev_vdbg(idma64->dma.dev, "%s: status=%#x\n", __func__, status);
 
 	/* Check if we have any interrupt from the DMA controller */
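The idma64 guard above works because MMIO reads from a powered-off device return all ones, so a status equal to `GENMASK(31, 0)` (0xffffffff) can be treated as "not our interrupt" on a shared line. A minimal userspace model of that check (`GENMASK32` is reimplemented here for illustration, not taken from kernel headers):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's GENMASK(h, l), 32-bit flavor. */
#define GENMASK32(h, l) \
    (((~UINT32_C(0)) << (l)) & (~UINT32_C(0) >> (31 - (h))))

/* Mirrors only the powered-off guard from idma64_irq(): an all-ones
 * status register means the device is off and the IRQ is not ours. */
static int irq_is_ours(uint32_t status)
{
    if (status == GENMASK32(31, 0))
        return 0;   /* IRQ_NONE */
    return 1;       /* worth servicing */
}
```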
@@ -230,7 +230,7 @@ static void pchan_update(struct owl_dma_pchan *pchan, u32 reg,
 	else
 		regval &= ~val;
 
-	writel(val, pchan->base + reg);
+	writel(regval, pchan->base + reg);
 }
 
 static void pchan_writel(struct owl_dma_pchan *pchan, u32 reg, u32 data)
@@ -254,7 +254,7 @@ static void dma_update(struct owl_dma *od, u32 reg, u32 val, bool state)
 	else
 		regval &= ~val;
 
-	writel(val, od->base + reg);
+	writel(regval, od->base + reg);
 }
 
 static void dma_writel(struct owl_dma *od, u32 reg, u32 data)
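Both owl-dma hunks fix the same read-modify-write slip: the freshly computed `regval` was discarded and the raw bit mask `val` written back, clobbering every other bit in the register. A standalone sketch of the corrected pattern (`fake_reg`/`fake_writel` are a hypothetical register model, not the kernel MMIO API):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for a memory-mapped register. */
static uint32_t fake_reg;

static uint32_t fake_readl(const uint32_t *addr) { return *addr; }
static void fake_writel(uint32_t v, uint32_t *addr) { *addr = v; }

/* Mirrors the fixed pchan_update()/dma_update(): set or clear the bits
 * in `val` while preserving every other bit of the register. */
static void reg_update(uint32_t *reg, uint32_t val, int state)
{
    uint32_t regval = fake_readl(reg);

    if (state)
        regval |= val;
    else
        regval &= ~val;

    fake_writel(regval, reg);   /* the bug wrote `val` here, losing old bits */
}
```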
@@ -161,10 +161,12 @@ packet_buffer_get(struct client *client, char __user *data, size_t user_length)
 	if (atomic_read(&buffer->size) == 0)
 		return -ENODEV;
 
-	/* FIXME: Check length <= user_length. */
+	length = buffer->head->length;
+
+	if (length > user_length)
+		return 0;
+
 	end = buffer->data + buffer->capacity;
-	length = buffer->head->length;
 
 	if (&buffer->head->data[length] < end) {
 		if (copy_to_user(data, buffer->head->data, length))
@@ -2066,6 +2066,8 @@ static void bus_reset_work(struct work_struct *work)
 
 	ohci->generation = generation;
 	reg_write(ohci, OHCI1394_IntEventClear, OHCI1394_busReset);
+	if (param_debug & OHCI_PARAM_DEBUG_BUSRESETS)
+		reg_write(ohci, OHCI1394_IntMaskSet, OHCI1394_busReset);
 
 	if (ohci->quirks & QUIRK_RESET_PACKET)
 		ohci->request_generation = generation;
@@ -2132,12 +2134,14 @@ static irqreturn_t irq_handler(int irq, void *data)
 		return IRQ_NONE;
 
 	/*
-	 * busReset and postedWriteErr must not be cleared yet
+	 * busReset and postedWriteErr events must not be cleared yet
 	 * (OHCI 1.1 clauses 7.2.3.2 and 13.2.8.1)
 	 */
 	reg_write(ohci, OHCI1394_IntEventClear,
 		  event & ~(OHCI1394_busReset | OHCI1394_postedWriteErr));
 	log_irqs(ohci, event);
+	if (event & OHCI1394_busReset)
+		reg_write(ohci, OHCI1394_IntMaskClear, OHCI1394_busReset);
 
 	if (event & OHCI1394_selfIDComplete)
 		queue_work(selfid_workqueue, &ohci->bus_reset_work);
@@ -99,7 +99,7 @@ static inline int to_reg(int gpio, enum ctrl_register reg_type)
 	case 0x5e:
 		return GPIOPANELCTL;
 	default:
-		return -EOPNOTSUPP;
+		return -ENOTSUPP;
 	}
 }
 
@@ -110,7 +110,7 @@ static inline unsigned int to_reg(int gpio, enum ctrl_register reg_type)
 	unsigned int reg;
 
 	if (gpio >= WCOVE_GPIO_NUM)
-		return -EOPNOTSUPP;
+		return -ENOTSUPP;
 
 	if (reg_type == CTRL_IN)
 		reg = GPIO_IN_CTRL_BASE + gpio;
@@ -46,9 +46,9 @@
 /* Impose limit on how much memory KFD can use */
 static struct {
 	uint64_t max_system_mem_limit;
-	uint64_t max_userptr_mem_limit;
+	uint64_t max_ttm_mem_limit;
 	int64_t system_mem_used;
-	int64_t userptr_mem_used;
+	int64_t ttm_mem_used;
 	spinlock_t mem_limit_lock;
 } kfd_mem_limit;
@@ -90,8 +90,8 @@ static bool check_if_add_bo_to_vm(struct amdgpu_vm *avm,
 }
 
 /* Set memory usage limits. Current, limits are
- *  System (kernel) memory - 3/8th System RAM
- *  Userptr memory - 3/4th System RAM
+ *  System (TTM + userptr) memory - 3/4th System RAM
+ *  TTM memory - 3/8th System RAM
 */
 void amdgpu_amdkfd_gpuvm_init_mem_limits(void)
 {
@@ -103,48 +103,54 @@ void amdgpu_amdkfd_gpuvm_init_mem_limits(void)
 	mem *= si.mem_unit;
 
 	spin_lock_init(&kfd_mem_limit.mem_limit_lock);
-	kfd_mem_limit.max_system_mem_limit = (mem >> 1) - (mem >> 3);
-	kfd_mem_limit.max_userptr_mem_limit = mem - (mem >> 2);
-	pr_debug("Kernel memory limit %lluM, userptr limit %lluM\n",
+	kfd_mem_limit.max_system_mem_limit = (mem >> 1) + (mem >> 2);
+	kfd_mem_limit.max_ttm_mem_limit = (mem >> 1) - (mem >> 3);
+	pr_debug("Kernel memory limit %lluM, TTM limit %lluM\n",
 		(kfd_mem_limit.max_system_mem_limit >> 20),
-		(kfd_mem_limit.max_userptr_mem_limit >> 20));
+		(kfd_mem_limit.max_ttm_mem_limit >> 20));
 }
 
 static int amdgpu_amdkfd_reserve_system_mem_limit(struct amdgpu_device *adev,
-		uint64_t size, u32 domain)
+		uint64_t size, u32 domain, bool sg)
 {
-	size_t acc_size;
+	size_t acc_size, system_mem_needed, ttm_mem_needed;
 	int ret = 0;
 
 	acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size,
 				       sizeof(struct amdgpu_bo));
 
 	spin_lock(&kfd_mem_limit.mem_limit_lock);
 
 	if (domain == AMDGPU_GEM_DOMAIN_GTT) {
-		if (kfd_mem_limit.system_mem_used + (acc_size + size) >
-			kfd_mem_limit.max_system_mem_limit) {
-			ret = -ENOMEM;
-			goto err_no_mem;
-		}
-		kfd_mem_limit.system_mem_used += (acc_size + size);
-	} else if (domain == AMDGPU_GEM_DOMAIN_CPU) {
-		if ((kfd_mem_limit.system_mem_used + acc_size >
-			kfd_mem_limit.max_system_mem_limit) ||
-			(kfd_mem_limit.userptr_mem_used + (size + acc_size) >
-			kfd_mem_limit.max_userptr_mem_limit)) {
-			ret = -ENOMEM;
-			goto err_no_mem;
-		}
-		kfd_mem_limit.system_mem_used += acc_size;
-		kfd_mem_limit.userptr_mem_used += size;
+		/* TTM GTT memory */
+		system_mem_needed = acc_size + size;
+		ttm_mem_needed = acc_size + size;
+	} else if (domain == AMDGPU_GEM_DOMAIN_CPU && !sg) {
+		/* Userptr */
+		system_mem_needed = acc_size + size;
+		ttm_mem_needed = acc_size;
+	} else {
+		/* VRAM and SG */
+		system_mem_needed = acc_size;
+		ttm_mem_needed = acc_size;
 	}
 
+	if ((kfd_mem_limit.system_mem_used + system_mem_needed >
+		kfd_mem_limit.max_system_mem_limit) ||
+		(kfd_mem_limit.ttm_mem_used + ttm_mem_needed >
+		kfd_mem_limit.max_ttm_mem_limit))
+		ret = -ENOMEM;
+	else {
+		kfd_mem_limit.system_mem_used += system_mem_needed;
+		kfd_mem_limit.ttm_mem_used += ttm_mem_needed;
+	}
 
-err_no_mem:
 	spin_unlock(&kfd_mem_limit.mem_limit_lock);
 	return ret;
 }
 
 static void unreserve_system_mem_limit(struct amdgpu_device *adev,
-		uint64_t size, u32 domain)
+		uint64_t size, u32 domain, bool sg)
 {
 	size_t acc_size;
@@ -154,14 +160,18 @@ static void unreserve_system_mem_limit(struct amdgpu_device *adev,
 	spin_lock(&kfd_mem_limit.mem_limit_lock);
 	if (domain == AMDGPU_GEM_DOMAIN_GTT) {
 		kfd_mem_limit.system_mem_used -= (acc_size + size);
-	} else if (domain == AMDGPU_GEM_DOMAIN_CPU) {
+		kfd_mem_limit.ttm_mem_used -= (acc_size + size);
+	} else if (domain == AMDGPU_GEM_DOMAIN_CPU && !sg) {
+		kfd_mem_limit.system_mem_used -= (acc_size + size);
+		kfd_mem_limit.ttm_mem_used -= acc_size;
+	} else {
 		kfd_mem_limit.system_mem_used -= acc_size;
-		kfd_mem_limit.userptr_mem_used -= size;
+		kfd_mem_limit.ttm_mem_used -= acc_size;
 	}
 	WARN_ONCE(kfd_mem_limit.system_mem_used < 0,
 		  "kfd system memory accounting unbalanced");
-	WARN_ONCE(kfd_mem_limit.userptr_mem_used < 0,
-		  "kfd userptr memory accounting unbalanced");
+	WARN_ONCE(kfd_mem_limit.ttm_mem_used < 0,
+		  "kfd TTM memory accounting unbalanced");
 
 	spin_unlock(&kfd_mem_limit.mem_limit_lock);
 }
@@ -171,16 +181,22 @@ void amdgpu_amdkfd_unreserve_system_memory_limit(struct amdgpu_bo *bo)
 	spin_lock(&kfd_mem_limit.mem_limit_lock);
 
 	if (bo->flags & AMDGPU_AMDKFD_USERPTR_BO) {
-		kfd_mem_limit.system_mem_used -= bo->tbo.acc_size;
-		kfd_mem_limit.userptr_mem_used -= amdgpu_bo_size(bo);
-	} else {
+		kfd_mem_limit.system_mem_used -=
+			(bo->tbo.acc_size + amdgpu_bo_size(bo));
+		kfd_mem_limit.ttm_mem_used -= bo->tbo.acc_size;
+	} else if (bo->preferred_domains == AMDGPU_GEM_DOMAIN_GTT) {
 		kfd_mem_limit.system_mem_used -=
 			(bo->tbo.acc_size + amdgpu_bo_size(bo));
+		kfd_mem_limit.ttm_mem_used -=
+			(bo->tbo.acc_size + amdgpu_bo_size(bo));
+	} else {
+		kfd_mem_limit.system_mem_used -= bo->tbo.acc_size;
+		kfd_mem_limit.ttm_mem_used -= bo->tbo.acc_size;
 	}
 	WARN_ONCE(kfd_mem_limit.system_mem_used < 0,
 		  "kfd system memory accounting unbalanced");
-	WARN_ONCE(kfd_mem_limit.userptr_mem_used < 0,
-		  "kfd userptr memory accounting unbalanced");
+	WARN_ONCE(kfd_mem_limit.ttm_mem_used < 0,
+		  "kfd TTM memory accounting unbalanced");
 
 	spin_unlock(&kfd_mem_limit.mem_limit_lock);
 }
@@ -1201,10 +1217,11 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
 
 	amdgpu_sync_create(&(*mem)->sync);
 
-	ret = amdgpu_amdkfd_reserve_system_mem_limit(adev, size, alloc_domain);
+	ret = amdgpu_amdkfd_reserve_system_mem_limit(adev, size,
+			alloc_domain, false);
 	if (ret) {
 		pr_debug("Insufficient system memory\n");
-		goto err_reserve_system_mem;
+		goto err_reserve_limit;
 	}
 
 	pr_debug("\tcreate BO VA 0x%llx size 0x%llx domain %s\n",
@@ -1252,10 +1269,11 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
 allocate_init_user_pages_failed:
 	amdgpu_bo_unref(&bo);
 	/* Don't unreserve system mem limit twice */
-	goto err_reserve_system_mem;
+	goto err_reserve_limit;
 err_bo_create:
-	unreserve_system_mem_limit(adev, size, alloc_domain);
-err_reserve_system_mem:
+	unreserve_system_mem_limit(adev, size, alloc_domain, false);
+err_reserve_limit:
 	amdgpu_sync_free(&(*mem)->sync);
 	mutex_destroy(&(*mem)->lock);
 	kfree(*mem);
 	return ret;
@@ -2048,6 +2048,37 @@ static void amdgpu_vm_bo_insert_map(struct amdgpu_device *adev,
 	trace_amdgpu_vm_bo_map(bo_va, mapping);
 }
 
+/* Validate operation parameters to prevent potential abuse */
+static int amdgpu_vm_verify_parameters(struct amdgpu_device *adev,
+				       struct amdgpu_bo *bo,
+				       uint64_t saddr,
+				       uint64_t offset,
+				       uint64_t size)
+{
+	uint64_t tmp, lpfn;
+
+	if (saddr & AMDGPU_GPU_PAGE_MASK
+	    || offset & AMDGPU_GPU_PAGE_MASK
+	    || size & AMDGPU_GPU_PAGE_MASK)
+		return -EINVAL;
+
+	if (check_add_overflow(saddr, size, &tmp)
+	    || check_add_overflow(offset, size, &tmp)
+	    || size == 0 /* which also leads to end < begin */)
+		return -EINVAL;
+
+	/* make sure object fit at this offset */
+	if (bo && offset + size > amdgpu_bo_size(bo))
+		return -EINVAL;
+
+	/* Ensure last pfn not exceed max_pfn */
+	lpfn = (saddr + size - 1) >> AMDGPU_GPU_PAGE_SHIFT;
+	if (lpfn >= adev->vm_manager.max_pfn)
+		return -EINVAL;
+
+	return 0;
+}
+
 /**
  * amdgpu_vm_bo_map - map bo inside a vm
  *
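What makes the new helper above robust is `check_add_overflow()`: a raw `saddr + size` comparison can wrap and pass the old checks. The same pattern can be sketched in plain C with the GCC/Clang builtin `__builtin_add_overflow` standing in for the kernel macro (the page mask and `verify_mapping` name are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define GPU_PAGE_MASK 0xFFFUL   /* illustrative 4K GPU page */

/* Sketch of the amdgpu_vm_verify_parameters() idea: reject unaligned,
 * zero-sized, or address-wrapping mapping requests. */
static int verify_mapping(uint64_t saddr, uint64_t offset, uint64_t size)
{
    uint64_t tmp;

    if ((saddr & GPU_PAGE_MASK) || (offset & GPU_PAGE_MASK) ||
        (size & GPU_PAGE_MASK))
        return -1;   /* not page aligned */

    if (__builtin_add_overflow(saddr, size, &tmp) ||
        __builtin_add_overflow(offset, size, &tmp) ||
        size == 0)   /* zero size also makes end < begin */
        return -1;

    return 0;
}
```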
@@ -2074,20 +2105,14 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
 	struct amdgpu_bo *bo = bo_va->base.bo;
 	struct amdgpu_vm *vm = bo_va->base.vm;
 	uint64_t eaddr;
+	int r;
 
-	/* validate the parameters */
-	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
-	    size == 0 || size & ~PAGE_MASK)
-		return -EINVAL;
-
-	/* make sure object fit at this offset */
-	eaddr = saddr + size - 1;
-	if (saddr >= eaddr ||
-	    (bo && offset + size > amdgpu_bo_size(bo)))
-		return -EINVAL;
+	r = amdgpu_vm_verify_parameters(adev, bo, saddr, offset, size);
+	if (r)
+		return r;
 
 	saddr /= AMDGPU_GPU_PAGE_SIZE;
-	eaddr /= AMDGPU_GPU_PAGE_SIZE;
+	eaddr = saddr + (size - 1) / AMDGPU_GPU_PAGE_SIZE;
 
 	tmp = amdgpu_vm_it_iter_first(&vm->va, saddr, eaddr);
 	if (tmp) {
@@ -2140,16 +2165,9 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
 	uint64_t eaddr;
 	int r;
 
-	/* validate the parameters */
-	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
-	    size == 0 || size & ~PAGE_MASK)
-		return -EINVAL;
-
-	/* make sure object fit at this offset */
-	eaddr = saddr + size - 1;
-	if (saddr >= eaddr ||
-	    (bo && offset + size > amdgpu_bo_size(bo)))
-		return -EINVAL;
+	r = amdgpu_vm_verify_parameters(adev, bo, saddr, offset, size);
+	if (r)
+		return r;
 
 	/* Allocate all the needed memory */
 	mapping = kmalloc(sizeof(*mapping), GFP_KERNEL);
@@ -2163,7 +2181,7 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
 	}
 
 	saddr /= AMDGPU_GPU_PAGE_SIZE;
-	eaddr /= AMDGPU_GPU_PAGE_SIZE;
+	eaddr = saddr + (size - 1) / AMDGPU_GPU_PAGE_SIZE;
 
 	mapping->start = saddr;
 	mapping->last = eaddr;
@@ -2250,10 +2268,14 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
 	struct amdgpu_bo_va_mapping *before, *after, *tmp, *next;
 	LIST_HEAD(removed);
 	uint64_t eaddr;
+	int r;
+
+	r = amdgpu_vm_verify_parameters(adev, NULL, saddr, 0, size);
+	if (r)
+		return r;
 
-	eaddr = saddr + size - 1;
 	saddr /= AMDGPU_GPU_PAGE_SIZE;
-	eaddr /= AMDGPU_GPU_PAGE_SIZE;
+	eaddr = saddr + (size - 1) / AMDGPU_GPU_PAGE_SIZE;
 
 	/* Allocate all the needed memory */
 	before = kzalloc(sizeof(*before), GFP_KERNEL);
@@ -25,6 +25,7 @@
 #include <drm/drmP.h>
 
 #include "nouveau_drv.h"
 #include "nouveau_bios.h"
 #include "nouveau_reg.h"
+#include "dispnv04/hw.h"
 #include "nouveau_encoder.h"
@@ -1674,7 +1675,7 @@ apply_dcb_encoder_quirks(struct drm_device *dev, int idx, u32 *conn, u32 *conf)
 	 */
 	if (nv_match_device(dev, 0x0201, 0x1462, 0x8851)) {
 		if (*conn == 0xf2005014 && *conf == 0xffffffff) {
-			fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 1, 1, 1);
+			fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 1, 1, DCB_OUTPUT_B);
 			return false;
 		}
 	}
@@ -1760,26 +1761,26 @@ fabricate_dcb_encoder_table(struct drm_device *dev, struct nvbios *bios)
 #ifdef __powerpc__
 	/* Apple iMac G4 NV17 */
 	if (of_machine_is_compatible("PowerMac4,5")) {
-		fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 0, all_heads, 1);
-		fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG, 1, all_heads, 2);
+		fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 0, all_heads, DCB_OUTPUT_B);
+		fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG, 1, all_heads, DCB_OUTPUT_C);
 		return;
 	}
 #endif
 
 	/* Make up some sane defaults */
 	fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG,
-			     bios->legacy.i2c_indices.crt, 1, 1);
+			     bios->legacy.i2c_indices.crt, 1, DCB_OUTPUT_B);
 
 	if (nv04_tv_identify(dev, bios->legacy.i2c_indices.tv) >= 0)
 		fabricate_dcb_output(dcb, DCB_OUTPUT_TV,
 				     bios->legacy.i2c_indices.tv,
-				     all_heads, 0);
+				     all_heads, DCB_OUTPUT_A);
 
 	else if (bios->tmds.output0_script_ptr ||
 		 bios->tmds.output1_script_ptr)
 		fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS,
 				     bios->legacy.i2c_indices.panel,
-				     all_heads, 1);
+				     all_heads, DCB_OUTPUT_B);
 }
 
 static int
@@ -66,11 +66,16 @@ of_init(struct nvkm_bios *bios, const char *name)
 	return ERR_PTR(-EINVAL);
 }
 
+static void of_fini(void *p)
+{
+	kfree(p);
+}
+
 const struct nvbios_source
 nvbios_of = {
 	.name = "OpenFirmware",
 	.init = of_init,
-	.fini = (void(*)(void *))kfree,
+	.fini = of_fini,
 	.read = of_read,
 	.size = of_size,
 	.rw = false,
@@ -221,8 +221,11 @@ nv50_instobj_acquire(struct nvkm_memory *memory)
 	void __iomem *map = NULL;
 
 	/* Already mapped? */
-	if (refcount_inc_not_zero(&iobj->maps))
+	if (refcount_inc_not_zero(&iobj->maps)) {
+		/* read barrier match the wmb on refcount set */
+		smp_rmb();
 		return iobj->map;
+	}
 
 	/* Take the lock, and re-check that another thread hasn't
 	 * already mapped the object in the meantime.
@@ -249,6 +252,8 @@ nv50_instobj_acquire(struct nvkm_memory *memory)
 			iobj->base.memory.ptrs = &nv50_instobj_fast;
 		else
 			iobj->base.memory.ptrs = &nv50_instobj_slow;
+		/* barrier to ensure the ptrs are written before refcount is set */
+		smp_wmb();
 		refcount_set(&iobj->maps, 1);
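The nv50_instobj hunks above are the classic publish/consume barrier pairing: the writer must make the pointers visible before the refcount that readers key off of (`smp_wmb`), and the reader pairs it with `smp_rmb`. The same discipline can be sketched in portable C11 atomics, where release/acquire ordering plays both roles (the `payload`/`maps` names are illustrative stand-ins for `iobj->map` and `iobj->maps`, and a single-threaded test can only check the happy path, not the race itself):

```c
#include <assert.h>
#include <stdatomic.h>

static int payload;        /* stands in for iobj->map / memory.ptrs */
static atomic_int maps;    /* stands in for iobj->maps */

/* Writer: publish the payload, then set the flag with release ordering
 * so the payload store cannot be reordered after it (the smp_wmb role). */
static void publish(int value)
{
    payload = value;
    atomic_store_explicit(&maps, 1, memory_order_release);
}

/* Reader: acquire-load the flag before touching the payload
 * (the smp_rmb role). Returns 1 if a published value was read. */
static int try_consume(int *out)
{
    if (atomic_load_explicit(&maps, memory_order_acquire)) {
        *out = payload;
        return 1;
    }
    return 0;
}
```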
@@ -1064,7 +1064,7 @@ static int vmw_event_fence_action_create(struct drm_file *file_priv,
 	}
 
 	event->event.base.type = DRM_VMW_EVENT_FENCE_SIGNALED;
-	event->event.base.length = sizeof(*event);
+	event->event.base.length = sizeof(event->event);
 	event->event.user_data = user_data;
 
 	ret = drm_event_reserve_init(dev, file_priv, &event->base, &event->event.base);
@@ -58,7 +58,6 @@
 /* flags */
 #define I2C_HID_STARTED 0
 #define I2C_HID_RESET_PENDING 1
-#define I2C_HID_READ_PENDING 2
 
 #define I2C_HID_PWR_ON 0x00
 #define I2C_HID_PWR_SLEEP 0x01
@@ -259,7 +258,6 @@ static int __i2c_hid_command(struct i2c_client *client,
|
||||
msg[1].len = data_len;
|
||||
msg[1].buf = buf_recv;
|
||||
msg_num = 2;
|
||||
set_bit(I2C_HID_READ_PENDING, &ihid->flags);
|
||||
}
|
||||
|
||||
if (wait)
|
||||
@@ -267,9 +265,6 @@ static int __i2c_hid_command(struct i2c_client *client,
|
||||
|
||||
ret = i2c_transfer(client->adapter, msg, msg_num);
|
||||
|
||||
if (data_len > 0)
|
||||
clear_bit(I2C_HID_READ_PENDING, &ihid->flags);
|
||||
|
||||
if (ret != msg_num)
|
||||
return ret < 0 ? ret : -EIO;
|
||||
|
||||
@@ -550,9 +545,6 @@ static irqreturn_t i2c_hid_irq(int irq, void *dev_id)
|
||||
{
|
||||
struct i2c_hid *ihid = dev_id;
|
||||
|
||||
if (test_bit(I2C_HID_READ_PENDING, &ihid->flags))
|
||||
return IRQ_HANDLED;
|
||||
|
||||
i2c_hid_get_input(ihid);
|
||||
|
||||
return IRQ_HANDLED;
|
||||
|
||||
@@ -1872,13 +1872,18 @@ static int i2c_check_for_quirks(struct i2c_adapter *adap, struct i2c_msg *msgs,
* Returns negative errno, else the number of messages executed.
*
* Adapter lock must be held when calling this function. No debug logging
* takes place. adap->algo->master_xfer existence isn't checked.
* takes place.
*/
int __i2c_transfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
{
unsigned long orig_jiffies;
int ret, try;

if (!adap->algo->master_xfer) {
dev_dbg(&adap->dev, "I2C level transfers not supported\n");
return -EOPNOTSUPP;
}

if (WARN_ON(!msgs || num < 1))
return -EINVAL;

@@ -216,7 +216,8 @@ static int process_pma_cmd(struct mlx5_ib_dev *dev, u8 port_num,
mdev = dev->mdev;
mdev_port_num = 1;
}
if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1) {
if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1 &&
!mlx5_core_mp_enabled(mdev)) {
/* set local port to one for Function-Per-Port HCA. */
mdev = dev->mdev;
mdev_port_num = 1;

@@ -2994,14 +2994,9 @@ static int its_vpe_irq_domain_alloc(struct irq_domain *domain, unsigned int virq
set_bit(i, bitmap);
}

if (err) {
if (i > 0)
if (err)
its_vpe_irq_domain_free(domain, virq, i);

its_lpi_free(bitmap, base, nr_ids);
its_free_prop_table(vprop_page);
}

return err;
}

@@ -18,6 +18,8 @@
#include "dm.h"

#define DM_RESERVED_MAX_IOS 1024
#define DM_MAX_TARGETS 1048576
#define DM_MAX_TARGET_PARAMS 1024

struct dm_kobject_holder {
struct kobject kobj;

@@ -1734,7 +1734,8 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
if (copy_from_user(param_kernel, user, minimum_data_size))
return -EFAULT;

if (param_kernel->data_size < minimum_data_size)
if (unlikely(param_kernel->data_size < minimum_data_size) ||
unlikely(param_kernel->data_size > DM_MAX_TARGETS * DM_MAX_TARGET_PARAMS))
return -EINVAL;

secure_data = param_kernel->flags & DM_SECURE_DATA_FLAG;

@@ -190,7 +190,12 @@ static int alloc_targets(struct dm_table *t, unsigned int num)
int dm_table_create(struct dm_table **result, fmode_t mode,
unsigned num_targets, struct mapped_device *md)
{
struct dm_table *t = kzalloc(sizeof(*t), GFP_KERNEL);
struct dm_table *t;

if (num_targets > DM_MAX_TARGETS)
return -EOVERFLOW;

t = kzalloc(sizeof(*t), GFP_KERNEL);

if (!t)
return -ENOMEM;
@@ -205,7 +210,7 @@ int dm_table_create(struct dm_table **result, fmode_t mode,

if (!num_targets) {
kfree(t);
return -ENOMEM;
return -EOVERFLOW;
}

if (alloc_targets(t, num_targets)) {

@@ -52,7 +52,7 @@ static unsigned long doc_locations[] __initdata = {
0xe8000, 0xea000, 0xec000, 0xee000,
#endif
#endif
0xffffffff };
};

static struct mtd_info *doclist = NULL;

@@ -1678,7 +1678,7 @@ static int __init init_nanddoc(void)
if (ret < 0)
return ret;
} else {
for (i = 0; (doc_locations[i] != 0xffffffff); i++) {
for (i = 0; i < ARRAY_SIZE(doc_locations); i++) {
doc_probe(doc_locations[i]);
}
}

@@ -3933,6 +3933,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6097,
.name = "Marvell 88E6085",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 10,
.num_internal_phys = 5,
.max_vid = 4095,
@@ -3955,6 +3956,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6095,
.name = "Marvell 88E6095/88E6095F",
.num_databases = 256,
.num_macs = 8192,
.num_ports = 11,
.num_internal_phys = 0,
.max_vid = 4095,
@@ -3975,6 +3977,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6097,
.name = "Marvell 88E6097/88E6097F",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 11,
.num_internal_phys = 8,
.max_vid = 4095,
@@ -3997,6 +4000,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6165,
.name = "Marvell 88E6123",
.num_databases = 4096,
.num_macs = 1024,
.num_ports = 3,
.num_internal_phys = 5,
.max_vid = 4095,
@@ -4019,6 +4023,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6185,
.name = "Marvell 88E6131",
.num_databases = 256,
.num_macs = 8192,
.num_ports = 8,
.num_internal_phys = 0,
.max_vid = 4095,
@@ -4038,7 +4043,8 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.prod_num = MV88E6XXX_PORT_SWITCH_ID_PROD_6141,
.family = MV88E6XXX_FAMILY_6341,
.name = "Marvell 88E6141",
.num_databases = 4096,
.num_databases = 256,
.num_macs = 2048,
.num_ports = 6,
.num_internal_phys = 5,
.num_gpio = 11,
@@ -4062,6 +4068,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6165,
.name = "Marvell 88E6161",
.num_databases = 4096,
.num_macs = 1024,
.num_ports = 6,
.num_internal_phys = 5,
.max_vid = 4095,
@@ -4085,6 +4092,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6165,
.name = "Marvell 88E6165",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 6,
.num_internal_phys = 0,
.max_vid = 4095,
@@ -4108,6 +4116,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6351,
.name = "Marvell 88E6171",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 7,
.num_internal_phys = 5,
.max_vid = 4095,
@@ -4130,6 +4139,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6352,
.name = "Marvell 88E6172",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 7,
.num_internal_phys = 5,
.num_gpio = 15,
@@ -4153,6 +4163,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6351,
.name = "Marvell 88E6175",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 7,
.num_internal_phys = 5,
.max_vid = 4095,
@@ -4175,6 +4186,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6352,
.name = "Marvell 88E6176",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 7,
.num_internal_phys = 5,
.num_gpio = 15,
@@ -4198,6 +4210,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6185,
.name = "Marvell 88E6185",
.num_databases = 256,
.num_macs = 8192,
.num_ports = 10,
.num_internal_phys = 0,
.max_vid = 4095,
@@ -4218,6 +4231,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6390,
.name = "Marvell 88E6190",
.num_databases = 4096,
.num_macs = 16384,
.num_ports = 11, /* 10 + Z80 */
.num_internal_phys = 9,
.num_gpio = 16,
@@ -4241,6 +4255,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6390,
.name = "Marvell 88E6190X",
.num_databases = 4096,
.num_macs = 16384,
.num_ports = 11, /* 10 + Z80 */
.num_internal_phys = 9,
.num_gpio = 16,
@@ -4264,6 +4279,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6390,
.name = "Marvell 88E6191",
.num_databases = 4096,
.num_macs = 16384,
.num_ports = 11, /* 10 + Z80 */
.num_internal_phys = 9,
.max_vid = 8191,
@@ -4287,6 +4303,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6352,
.name = "Marvell 88E6240",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 7,
.num_internal_phys = 5,
.num_gpio = 15,
@@ -4335,6 +4352,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6320,
.name = "Marvell 88E6320",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 7,
.num_internal_phys = 5,
.num_gpio = 15,
@@ -4359,6 +4377,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6320,
.name = "Marvell 88E6321",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 7,
.num_internal_phys = 5,
.num_gpio = 15,
@@ -4381,7 +4400,8 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.prod_num = MV88E6XXX_PORT_SWITCH_ID_PROD_6341,
.family = MV88E6XXX_FAMILY_6341,
.name = "Marvell 88E6341",
.num_databases = 4096,
.num_databases = 256,
.num_macs = 2048,
.num_internal_phys = 5,
.num_ports = 6,
.num_gpio = 11,
@@ -4406,6 +4426,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6351,
.name = "Marvell 88E6350",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 7,
.num_internal_phys = 5,
.max_vid = 4095,
@@ -4428,6 +4449,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6351,
.name = "Marvell 88E6351",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 7,
.num_internal_phys = 5,
.max_vid = 4095,
@@ -4450,6 +4472,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6352,
.name = "Marvell 88E6352",
.num_databases = 4096,
.num_macs = 8192,
.num_ports = 7,
.num_internal_phys = 5,
.num_gpio = 15,
@@ -4473,6 +4496,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6390,
.name = "Marvell 88E6390",
.num_databases = 4096,
.num_macs = 16384,
.num_ports = 11, /* 10 + Z80 */
.num_internal_phys = 9,
.num_gpio = 16,
@@ -4496,6 +4520,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.family = MV88E6XXX_FAMILY_6390,
.name = "Marvell 88E6390X",
.num_databases = 4096,
.num_macs = 16384,
.num_ports = 11, /* 10 + Z80 */
.num_internal_phys = 9,
.num_gpio = 16,

@@ -105,6 +105,7 @@ struct mv88e6xxx_info {
u16 prod_num;
const char *name;
unsigned int num_databases;
unsigned int num_macs;
unsigned int num_ports;
unsigned int num_internal_phys;
unsigned int num_gpio;
@@ -559,6 +560,11 @@ static inline unsigned int mv88e6xxx_num_databases(struct mv88e6xxx_chip *chip)
return chip->info->num_databases;
}

static inline unsigned int mv88e6xxx_num_macs(struct mv88e6xxx_chip *chip)
{
return chip->info->num_macs;
}

static inline unsigned int mv88e6xxx_num_ports(struct mv88e6xxx_chip *chip)
{
return chip->info->num_ports;

@@ -2033,6 +2033,7 @@ static int b44_set_pauseparam(struct net_device *dev,
bp->flags |= B44_FLAG_TX_PAUSE;
else
bp->flags &= ~B44_FLAG_TX_PAUSE;
if (netif_running(dev)) {
if (bp->flags & B44_FLAG_PAUSE_AUTO) {
b44_halt(bp);
b44_init_rings(bp);
@@ -2040,6 +2041,7 @@ static int b44_set_pauseparam(struct net_device *dev,
} else {
__b44_set_flow_ctrl(bp, bp->flags);
}
}
spin_unlock_irq(&bp->lock);

b44_enable_ints(bp);

@@ -2806,7 +2806,7 @@ static void bcmgenet_set_hw_addr(struct bcmgenet_priv *priv,
}

/* Returns a reusable dma control register value */
static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv, bool flush_rx)
{
unsigned int i;
u32 reg;
@@ -2831,6 +2831,14 @@ static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
udelay(10);
bcmgenet_umac_writel(priv, 0, UMAC_TX_FLUSH);

if (flush_rx) {
reg = bcmgenet_rbuf_ctrl_get(priv);
bcmgenet_rbuf_ctrl_set(priv, reg | BIT(0));
udelay(10);
bcmgenet_rbuf_ctrl_set(priv, reg);
udelay(10);
}

return dma_ctrl;
}

@@ -2926,8 +2934,8 @@ static int bcmgenet_open(struct net_device *dev)

bcmgenet_set_hw_addr(priv, dev->dev_addr);

/* Disable RX/TX DMA and flush TX queues */
dma_ctrl = bcmgenet_dma_disable(priv);
/* Disable RX/TX DMA and flush TX and RX queues */
dma_ctrl = bcmgenet_dma_disable(priv, true);

/* Reinitialize TDMA and RDMA and SW housekeeping */
ret = bcmgenet_init_dma(priv);
@@ -3682,7 +3690,7 @@ static int bcmgenet_resume(struct device *d)
bcmgenet_power_up(priv, GENET_POWER_WOL_MAGIC);

/* Disable RX/TX DMA and flush TX queues */
dma_ctrl = bcmgenet_dma_disable(priv);
dma_ctrl = bcmgenet_dma_disable(priv, false);

/* Reinitialize TDMA and RDMA and SW housekeeping */
ret = bcmgenet_init_dma(priv);

@@ -320,7 +320,7 @@ bnad_debugfs_write_regrd(struct file *file, const char __user *buf,
void *kern_buf;

/* Copy the user space buf */
kern_buf = memdup_user(buf, nbytes);
kern_buf = memdup_user_nul(buf, nbytes);
if (IS_ERR(kern_buf))
return PTR_ERR(kern_buf);

@@ -380,7 +380,7 @@ bnad_debugfs_write_regwr(struct file *file, const char __user *buf,
void *kern_buf;

/* Copy the user space buf */
kern_buf = memdup_user(buf, nbytes);
kern_buf = memdup_user_nul(buf, nbytes);
if (IS_ERR(kern_buf))
return PTR_ERR(kern_buf);

@@ -14728,7 +14728,7 @@ static int __init i40e_init_module(void)
* since we need to be able to guarantee forward progress even under
* memory pressure.
*/
i40e_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, i40e_driver_name);
i40e_wq = alloc_workqueue("%s", 0, 0, i40e_driver_name);
if (!i40e_wq) {
pr_err("%s: Failed to create workqueue\n", i40e_driver_name);
return -ENOMEM;

@@ -1452,8 +1452,9 @@ static struct mlx5_flow_handle *add_rule_fg(struct mlx5_flow_group *fg,
}
trace_mlx5_fs_set_fte(fte, false);

/* Link newly added rules into the tree. */
for (i = 0; i < handle->num_rules; i++) {
if (refcount_read(&handle->rule[i]->node.refcount) == 1) {
if (!handle->rule[i]->node.parent) {
tree_add_node(&handle->rule[i]->node, &fte->node);
trace_mlx5_fs_add_rule(handle->rule[i]);
}

@@ -561,7 +561,7 @@ static void mlxsw_emad_rx_listener_func(struct sk_buff *skb, u8 local_port,

static const struct mlxsw_listener mlxsw_emad_rx_listener =
MLXSW_RXL(mlxsw_emad_rx_listener_func, ETHEMAD, TRAP_TO_CPU, false,
EMAD, DISCARD);
EMAD, FORWARD);

static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core)
{

@@ -838,7 +838,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
__be16 df;
int err;

if (!pskb_inet_may_pull(skb))
if (!skb_vlan_inet_prepare(skb))
return -EINVAL;

sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
@@ -884,7 +884,7 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
__be16 sport;
int err;

if (!pskb_inet_may_pull(skb))
if (!skb_vlan_inet_prepare(skb))
return -EINVAL;

sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);

@@ -710,11 +710,12 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,
static void gtp_dellink(struct net_device *dev, struct list_head *head)
{
struct gtp_dev *gtp = netdev_priv(dev);
struct hlist_node *next;
struct pdp_ctx *pctx;
int i;

for (i = 0; i < gtp->hash_size; i++)
hlist_for_each_entry_rcu(pctx, &gtp->tid_hash[i], hlist_tid)
hlist_for_each_entry_safe(pctx, next, &gtp->tid_hash[i], hlist_tid)
pdp_context_delete(pctx);

gtp_encap_disable(gtp);

@@ -2172,14 +2172,16 @@ static ssize_t tun_put_user(struct tun_struct *tun,
tun_is_little_endian(tun), true,
vlan_hlen)) {
struct skb_shared_info *sinfo = skb_shinfo(skb);
pr_err("unexpected GSO type: "
"0x%x, gso_size %d, hdr_len %d\n",

if (net_ratelimit()) {
netdev_err(tun->dev, "unexpected GSO type: 0x%x, gso_size %d, hdr_len %d\n",
sinfo->gso_type, tun16_to_cpu(tun, gso.gso_size),
tun16_to_cpu(tun, gso.hdr_len));
print_hex_dump(KERN_ERR, "tun: ",
DUMP_PREFIX_NONE,
16, 1, skb->head,
min((int)tun16_to_cpu(tun, gso.hdr_len), 64), true);
}
WARN_ON_ONCE(1);
return -EINVAL;
}

@@ -1383,6 +1383,7 @@ static const struct usb_device_id products[] = {
{QMI_FIXED_INTF(0x0489, 0xe0b5, 0)}, /* Foxconn T77W968 LTE with eSIM support*/
{QMI_FIXED_INTF(0x2692, 0x9025, 4)}, /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */
{QMI_QUIRK_SET_DTR(0x1546, 0x1342, 4)}, /* u-blox LARA-L6 */
{QMI_QUIRK_SET_DTR(0x33f8, 0x0104, 4)}, /* Rolling RW101 RMNET */

/* 4. Gobi 1000 devices */
{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */

@@ -1320,6 +1320,10 @@ static bool vxlan_set_mac(struct vxlan_dev *vxlan,
if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
return false;

/* Ignore packets from invalid src-address */
if (!is_valid_ether_addr(eth_hdr(skb)->h_source))
return false;

/* Get address from the outer IP header */
if (vxlan_get_sk_family(vs) == AF_INET) {
saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;

@@ -427,7 +427,8 @@ struct trf7970a {
enum trf7970a_state state;
struct device *dev;
struct spi_device *spi;
struct regulator *regulator;
struct regulator *vin_regulator;
struct regulator *vddio_regulator;
struct nfc_digital_dev *ddev;
u32 quirks;
bool is_initiator;
@@ -1886,7 +1887,7 @@ static int trf7970a_power_up(struct trf7970a *trf)
if (trf->state != TRF7970A_ST_PWR_OFF)
return 0;

ret = regulator_enable(trf->regulator);
ret = regulator_enable(trf->vin_regulator);
if (ret) {
dev_err(trf->dev, "%s - Can't enable VIN: %d\n", __func__, ret);
return ret;
@@ -1929,7 +1930,7 @@ static int trf7970a_power_down(struct trf7970a *trf)
if (trf->en2_gpiod && !(trf->quirks & TRF7970A_QUIRK_EN2_MUST_STAY_LOW))
gpiod_set_value_cansleep(trf->en2_gpiod, 0);

ret = regulator_disable(trf->regulator);
ret = regulator_disable(trf->vin_regulator);
if (ret)
dev_err(trf->dev, "%s - Can't disable VIN: %d\n", __func__,
ret);
@@ -2068,37 +2069,37 @@ static int trf7970a_probe(struct spi_device *spi)
mutex_init(&trf->lock);
INIT_DELAYED_WORK(&trf->timeout_work, trf7970a_timeout_work_handler);

trf->regulator = devm_regulator_get(&spi->dev, "vin");
if (IS_ERR(trf->regulator)) {
ret = PTR_ERR(trf->regulator);
trf->vin_regulator = devm_regulator_get(&spi->dev, "vin");
if (IS_ERR(trf->vin_regulator)) {
ret = PTR_ERR(trf->vin_regulator);
dev_err(trf->dev, "Can't get VIN regulator: %d\n", ret);
goto err_destroy_lock;
}

ret = regulator_enable(trf->regulator);
ret = regulator_enable(trf->vin_regulator);
if (ret) {
dev_err(trf->dev, "Can't enable VIN: %d\n", ret);
goto err_destroy_lock;
}

uvolts = regulator_get_voltage(trf->regulator);
uvolts = regulator_get_voltage(trf->vin_regulator);
if (uvolts > 4000000)
trf->chip_status_ctrl = TRF7970A_CHIP_STATUS_VRS5_3;

trf->regulator = devm_regulator_get(&spi->dev, "vdd-io");
if (IS_ERR(trf->regulator)) {
ret = PTR_ERR(trf->regulator);
trf->vddio_regulator = devm_regulator_get(&spi->dev, "vdd-io");
if (IS_ERR(trf->vddio_regulator)) {
ret = PTR_ERR(trf->vddio_regulator);
dev_err(trf->dev, "Can't get VDD_IO regulator: %d\n", ret);
goto err_destroy_lock;
goto err_disable_vin_regulator;
}

ret = regulator_enable(trf->regulator);
ret = regulator_enable(trf->vddio_regulator);
if (ret) {
dev_err(trf->dev, "Can't enable VDD_IO: %d\n", ret);
goto err_destroy_lock;
goto err_disable_vin_regulator;
}

if (regulator_get_voltage(trf->regulator) == 1800000) {
if (regulator_get_voltage(trf->vddio_regulator) == 1800000) {
trf->io_ctrl = TRF7970A_REG_IO_CTRL_IO_LOW;
dev_dbg(trf->dev, "trf7970a config vdd_io to 1.8V\n");
}
@@ -2111,7 +2112,7 @@ static int trf7970a_probe(struct spi_device *spi)
if (!trf->ddev) {
dev_err(trf->dev, "Can't allocate NFC digital device\n");
ret = -ENOMEM;
goto err_disable_regulator;
goto err_disable_vddio_regulator;
}

nfc_digital_set_parent_dev(trf->ddev, trf->dev);
@@ -2140,8 +2141,10 @@ static int trf7970a_probe(struct spi_device *spi)
trf7970a_shutdown(trf);
err_free_ddev:
nfc_digital_free_device(trf->ddev);
err_disable_regulator:
regulator_disable(trf->regulator);
err_disable_vddio_regulator:
regulator_disable(trf->vddio_regulator);
err_disable_vin_regulator:
regulator_disable(trf->vin_regulator);
err_destroy_lock:
mutex_destroy(&trf->lock);
return ret;
@@ -2160,7 +2163,8 @@ static int trf7970a_remove(struct spi_device *spi)
nfc_digital_unregister_device(trf->ddev);
nfc_digital_free_device(trf->ddev);

regulator_disable(trf->regulator);
regulator_disable(trf->vddio_regulator);
regulator_disable(trf->vin_regulator);

mutex_destroy(&trf->lock);

@@ -2036,13 +2036,7 @@ int pinctrl_enable(struct pinctrl_dev *pctldev)

error = pinctrl_claim_hogs(pctldev);
if (error) {
dev_err(pctldev->dev, "could not claim hogs: %i\n",
error);
pinctrl_free_pindescs(pctldev, pctldev->desc->pins,
pctldev->desc->npins);
mutex_destroy(&pctldev->mutex);
kfree(pctldev);

dev_err(pctldev->dev, "could not claim hogs: %i\n", error);
return error;
}

@@ -235,14 +235,16 @@ int pinctrl_dt_to_map(struct pinctrl *p, struct pinctrl_dev *pctldev)
for (state = 0; ; state++) {
/* Retrieve the pinctrl-* property */
propname = kasprintf(GFP_KERNEL, "pinctrl-%d", state);
if (!propname)
return -ENOMEM;
if (!propname) {
ret = -ENOMEM;
goto err;
}
prop = of_find_property(np, propname, &size);
kfree(propname);
if (!prop) {
if (state == 0) {
of_node_put(np);
return -ENODEV;
ret = -ENODEV;
goto err;
}
break;
}

@@ -202,6 +202,7 @@ static const int rt9455_voreg_values[] = {
4450000, 4450000, 4450000, 4450000, 4450000, 4450000, 4450000, 4450000
};

#if IS_ENABLED(CONFIG_USB_PHY)
/*
* When the charger is in boost mode, REG02[7:2] represent boost output
* voltage.
@@ -217,6 +218,7 @@ static const int rt9455_boost_voltage_values[] = {
5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000,
5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000,
};
#endif

/* REG07[3:0] (VMREG) in uV */
static const int rt9455_vmreg_values[] = {

@@ -834,7 +834,6 @@ static void bnx2fc_free_session_resc(struct bnx2fc_hba *hba,

BNX2FC_TGT_DBG(tgt, "Freeing up session resources\n");

spin_lock_bh(&tgt->cq_lock);
ctx_base_ptr = tgt->ctx_base;
tgt->ctx_base = NULL;

@@ -890,7 +889,6 @@ static void bnx2fc_free_session_resc(struct bnx2fc_hba *hba,
tgt->sq, tgt->sq_dma);
tgt->sq = NULL;
}
spin_unlock_bh(&tgt->cq_lock);

if (ctx_base_ptr)
iounmap(ctx_base_ptr);

@@ -989,7 +989,6 @@ struct lpfc_hba {
unsigned long bit_flags;
#define FABRIC_COMANDS_BLOCKED 0
atomic_t num_rsrc_err;
atomic_t num_cmd_success;
unsigned long last_rsrc_error_time;
unsigned long last_ramp_down_time;
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS

@@ -303,11 +303,10 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba)
struct Scsi_Host *shost;
struct scsi_device *sdev;
unsigned long new_queue_depth;
unsigned long num_rsrc_err, num_cmd_success;
unsigned long num_rsrc_err;
int i;

num_rsrc_err = atomic_read(&phba->num_rsrc_err);
num_cmd_success = atomic_read(&phba->num_cmd_success);

/*
* The error and success command counters are global per
@@ -322,20 +321,16 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba)
for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) {
shost = lpfc_shost_from_vport(vports[i]);
shost_for_each_device(sdev, shost) {
new_queue_depth =
sdev->queue_depth * num_rsrc_err /
(num_rsrc_err + num_cmd_success);
if (!new_queue_depth)
new_queue_depth = sdev->queue_depth - 1;
if (num_rsrc_err >= sdev->queue_depth)
new_queue_depth = 1;
else
new_queue_depth = sdev->queue_depth -
new_queue_depth;
num_rsrc_err;
scsi_change_queue_depth(sdev, new_queue_depth);
}
}
lpfc_destroy_vport_work_array(phba, vports);
atomic_set(&phba->num_rsrc_err, 0);
atomic_set(&phba->num_cmd_success, 0);
}

/**

@@ -642,32 +642,21 @@ static int vmk80xx_find_usb_endpoints(struct comedi_device *dev)
struct vmk80xx_private *devpriv = dev->private;
struct usb_interface *intf = comedi_to_usb_interface(dev);
struct usb_host_interface *iface_desc = intf->cur_altsetting;
struct usb_endpoint_descriptor *ep_desc;
int i;
struct usb_endpoint_descriptor *ep_rx_desc, *ep_tx_desc;
int ret;

if (iface_desc->desc.bNumEndpoints != 2)
if (devpriv->model == VMK8061_MODEL)
ret = usb_find_common_endpoints(iface_desc, &ep_rx_desc,
&ep_tx_desc, NULL, NULL);
else
ret = usb_find_common_endpoints(iface_desc, NULL, NULL,
&ep_rx_desc, &ep_tx_desc);

if (ret)
return -ENODEV;

for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) {
ep_desc = &iface_desc->endpoint[i].desc;

if (usb_endpoint_is_int_in(ep_desc) ||
usb_endpoint_is_bulk_in(ep_desc)) {
if (!devpriv->ep_rx)
devpriv->ep_rx = ep_desc;
continue;
}

if (usb_endpoint_is_int_out(ep_desc) ||
usb_endpoint_is_bulk_out(ep_desc)) {
if (!devpriv->ep_tx)
devpriv->ep_tx = ep_desc;
continue;
}
}

if (!devpriv->ep_rx || !devpriv->ep_tx)
return -ENODEV;
devpriv->ep_rx = ep_rx_desc;
devpriv->ep_tx = ep_tx_desc;

if (!usb_endpoint_maxp(devpriv->ep_rx) || !usb_endpoint_maxp(devpriv->ep_tx))
return -EINVAL;

@@ -577,7 +577,7 @@ static u_long get_word(struct vc_data *vc)
}
attr_ch = get_char(vc, (u_short *)tmp_pos, &spk_attr);
buf[cnt++] = attr_ch;
while (tmpx < vc->vc_cols - 1) {
while (tmpx < vc->vc_cols - 1 && cnt < sizeof(buf) - 1) {
tmp_pos += 2;
tmpx++;
ch = get_char(vc, (u_short *)tmp_pos, &temp);

@@ -3240,6 +3240,8 @@ static int __init target_core_init_configfs(void)
{
struct configfs_subsystem *subsys = &target_core_fabrics;
struct t10_alua_lu_gp *lu_gp;
struct cred *kern_cred;
const struct cred *old_cred;
int ret;

pr_debug("TARGET_CORE[0]: Loading Generic Kernel Storage"
@@ -3316,11 +3318,21 @@ static int __init target_core_init_configfs(void)
if (ret < 0)
goto out;

/* We use the kernel credentials to access the target directory */
kern_cred = prepare_kernel_cred(&init_task);
if (!kern_cred) {
ret = -ENOMEM;
goto out;
}
old_cred = override_creds(kern_cred);
target_init_dbroot();
revert_creds(old_cred);
put_cred(kern_cred);

return 0;

out:
target_xcopy_release_pt();
configfs_unregister_subsystem(subsys);
core_dev_release_virtual_lun0();
rd_module_exit();

@@ -16,6 +16,7 @@
#include <linux/console.h>
#include <linux/vt_kern.h>
#include <linux/input.h>
#include <linux/irq_work.h>
#include <linux/module.h>

#define MAX_CONFIG_LEN 40
@@ -35,6 +36,25 @@ static int kgdboc_use_kms; /* 1 if we use kernel mode switching */
static struct tty_driver *kgdb_tty_driver;
static int kgdb_tty_line;

/*
* When we leave the debug trap handler we need to reset the keyboard status
* (since the original keyboard state gets partially clobbered by kdb use of
* the keyboard).
*
* The path to deliver the reset is somewhat circuitous.
*
* To deliver the reset we register an input handler, reset the keyboard and
* then deregister the input handler. However, to get this done right, we do
* have to carefully manage the calling context because we can only register
* input handlers from task context.
*
* In particular we need to trigger the action from the debug trap handler with
* all its NMI and/or NMI-like oddities. To solve this the kgdboc trap exit code
* (the "post_exception" callback) uses irq_work_queue(), which is NMI-safe, to
* schedule a callback from a hardirq context. From there we have to defer the
* work again, this time using schedule_work(), to get a callback using the
* system workqueue, which runs in task context.
*/
#ifdef CONFIG_KDB_KEYBOARD
static int kgdboc_reset_connect(struct input_handler *handler,
struct input_dev *dev,
@@ -86,10 +106,17 @@ static void kgdboc_restore_input_helper(struct work_struct *dummy)

static DECLARE_WORK(kgdboc_restore_input_work, kgdboc_restore_input_helper);

static void kgdboc_queue_restore_input_helper(struct irq_work *unused)
{
schedule_work(&kgdboc_restore_input_work);
}

static DEFINE_IRQ_WORK(kgdboc_restore_input_irq_work, kgdboc_queue_restore_input_helper);

static void kgdboc_restore_input(void)
{
if (likely(system_state == SYSTEM_RUNNING))
schedule_work(&kgdboc_restore_input_work);
irq_work_queue(&kgdboc_restore_input_irq_work);
}

static int kgdboc_register_kbd(char **cptr)
@@ -120,6 +147,7 @@ static void kgdboc_unregister_kbd(void)
i--;
}
}
irq_work_sync(&kgdboc_restore_input_irq_work);
flush_work(&kgdboc_restore_input_work);
}
#else /* ! CONFIG_KDB_KEYBOARD */

@@ -1128,11 +1128,13 @@ static void mxs_auart_set_ldisc(struct uart_port *port,
 
 static irqreturn_t mxs_auart_irq_handle(int irq, void *context)
 {
-	u32 istat;
+	u32 istat, stat;
 	struct mxs_auart_port *s = context;
 	u32 mctrl_temp = s->mctrl_prev;
-	u32 stat = mxs_read(s, REG_STAT);
 
+	uart_port_lock(&s->port);
+
+	stat = mxs_read(s, REG_STAT);
 	istat = mxs_read(s, REG_INTR);
 
 	/* ack irq */
@@ -1168,6 +1170,8 @@ static irqreturn_t mxs_auart_irq_handle(int irq, void *context)
 		istat &= ~AUART_INTR_TXIS;
 	}
 
+	uart_port_unlock(&s->port);
+
 	return IRQ_HANDLED;
 }

@@ -220,7 +220,6 @@ static bool pmz_receive_chars(struct uart_pmac_port *uap)
 {
 	struct tty_port *port;
 	unsigned char ch, r1, drop, error, flag;
-	int loops = 0;
 
 	/* Sanity check, make sure the old bug is no longer happening */
 	if (uap->port.state == NULL) {
@@ -303,24 +302,11 @@ static bool pmz_receive_chars(struct uart_pmac_port *uap)
 		if (r1 & Rx_OVR)
 			tty_insert_flip_char(port, 0, TTY_OVERRUN);
 	next_char:
-		/* We can get stuck in an infinite loop getting char 0 when the
-		 * line is in a wrong HW state, we break that here.
-		 * When that happens, I disable the receive side of the driver.
-		 * Note that what I've been experiencing is a real irq loop where
-		 * I'm getting flooded regardless of the actual port speed.
-		 * Something strange is going on with the HW
-		 */
-		if ((++loops) > 1000)
-			goto flood;
 		ch = read_zsreg(uap, R0);
 		if (!(ch & Rx_CH_AV))
 			break;
 	}
 
 	return true;
-
- flood:
-	pmz_interrupt_control(uap, 0);
-	pmz_error("pmz: rx irq flood !\n");
-	return true;
 }
@@ -471,7 +471,6 @@ static ssize_t wdm_write
 static int service_outstanding_interrupt(struct wdm_device *desc)
 {
 	int rv = 0;
-	int used;
 
 	/* submit read urb only if the device is waiting for it */
 	if (!desc->resp_count || !--desc->resp_count)
@@ -486,10 +485,7 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
 		goto out;
 	}
 
-	used = test_and_set_bit(WDM_RESPONDING, &desc->flags);
-	if (used)
-		goto out;
-
+	set_bit(WDM_RESPONDING, &desc->flags);
 	spin_unlock_irq(&desc->iuspin);
 	rv = usb_submit_urb(desc->response, GFP_KERNEL);
 	spin_lock_irq(&desc->iuspin);

@@ -897,13 +897,15 @@ static int dwc2_cmpl_host_isoc_dma_desc(struct dwc2_hsotg *hsotg,
 	struct dwc2_dma_desc *dma_desc;
 	struct dwc2_hcd_iso_packet_desc *frame_desc;
 	u16 frame_desc_idx;
-	struct urb *usb_urb = qtd->urb->priv;
+	struct urb *usb_urb;
 	u16 remain = 0;
 	int rc = 0;
 
+	if (!qtd->urb)
+		return -EINVAL;
+
+	usb_urb = qtd->urb->priv;
+
 	dma_sync_single_for_cpu(hsotg->dev, qh->desc_list_dma + (idx *
 				sizeof(struct dwc2_dma_desc)),
 				sizeof(struct dwc2_dma_desc),
@@ -2062,7 +2062,7 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
 			buf[5] = 0x01;
 			switch (ctrl->bRequestType & USB_RECIP_MASK) {
 			case USB_RECIP_DEVICE:
-				if (w_index != 0x4 || (w_value >> 8))
+				if (w_index != 0x4 || (w_value & 0xff))
 					break;
 				buf[6] = w_index;
 				/* Number of ext compat interfaces */
@@ -2084,9 +2084,9 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
 				}
 				break;
 			case USB_RECIP_INTERFACE:
-				if (w_index != 0x5 || (w_value >> 8))
+				if (w_index != 0x5 || (w_value & 0xff))
 					break;
-				interface = w_value & 0xFF;
+				interface = w_value >> 8;
 				if (interface >= MAX_CONFIG_INTERFACES ||
 				    !os_desc_cfg->interface[interface])
 					break;

@@ -3621,7 +3621,7 @@ static int ffs_func_setup(struct usb_function *f,
 	__ffs_event_add(ffs, FUNCTIONFS_SETUP);
 	spin_unlock_irqrestore(&ffs->ev.waitq.lock, flags);
 
-	return creq->wLength == 0 ? USB_GADGET_DELAYED_STATUS : 0;
+	return ffs->ev.setup.wLength == 0 ? USB_GADGET_DELAYED_STATUS : 0;
 }
 
 static bool ffs_func_req_match(struct usb_function *f,
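The OS-descriptor fix above hinges on byte order inside the little-endian `wValue` field: after the CPU-order conversion, the interface number sits in the high byte and the page index in the low byte, which is why the validity check now rejects a non-zero low byte and the interface is taken from `w_value >> 8`. A minimal userspace sketch of that decoding (helper names are hypothetical, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helpers mirroring the corrected decoding: for a Microsoft
 * OS descriptor request, the interface number is carried in the HIGH byte
 * of the CPU-order wValue and the page index in the LOW byte. */
static int os_desc_interface(uint16_t w_value)
{
	return w_value >> 8;	/* interface number: high byte */
}

static int os_desc_page(uint16_t w_value)
{
	return w_value & 0xff;	/* page index: low byte */
}
```

With the old `(w_value >> 8)` check, any request for interface > 0 was wrongly rejected; the corrected logic only rejects requests with a non-zero page index.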
@@ -255,6 +255,10 @@ static void option_instat_callback(struct urb *urb);
 #define QUECTEL_PRODUCT_EM061K_LMS		0x0124
 #define QUECTEL_PRODUCT_EC25			0x0125
 #define QUECTEL_PRODUCT_EM060K_128		0x0128
+#define QUECTEL_PRODUCT_EM060K_129		0x0129
+#define QUECTEL_PRODUCT_EM060K_12a		0x012a
+#define QUECTEL_PRODUCT_EM060K_12b		0x012b
+#define QUECTEL_PRODUCT_EM060K_12c		0x012c
 #define QUECTEL_PRODUCT_EG91			0x0191
 #define QUECTEL_PRODUCT_EG95			0x0195
 #define QUECTEL_PRODUCT_BG96			0x0296
@@ -1218,6 +1222,18 @@ static const struct usb_device_id option_ids[] = {
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x30) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0x00, 0x40) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_129, 0xff, 0xff, 0x30) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_129, 0xff, 0x00, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_129, 0xff, 0xff, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12a, 0xff, 0xff, 0x30) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12a, 0xff, 0x00, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12a, 0xff, 0xff, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12b, 0xff, 0xff, 0x30) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12b, 0xff, 0x00, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12b, 0xff, 0xff, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12c, 0xff, 0xff, 0x30) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12c, 0xff, 0x00, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12c, 0xff, 0xff, 0x40) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x30) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0x00, 0x40) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x40) },
@@ -1360,6 +1376,12 @@ static const struct usb_device_id option_ids[] = {
 	  .driver_info = NCTRL(2) | RSVD(3) },
 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff),	/* Telit FE990 (ECM) */
 	  .driver_info = NCTRL(0) | RSVD(1) },
+	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a0, 0xff),	/* Telit FN20C04 (rmnet) */
+	  .driver_info = RSVD(0) | NCTRL(3) },
+	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a4, 0xff),	/* Telit FN20C04 (rmnet) */
+	  .driver_info = RSVD(0) | NCTRL(3) },
+	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a9, 0xff),	/* Telit FN20C04 (rmnet) */
+	  .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
@@ -2052,6 +2074,10 @@ static const struct usb_device_id option_ids[] = {
 	  .driver_info = RSVD(3) },
 	{ USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, 0x9803, 0xff),
 	  .driver_info = RSVD(4) },
+	{ USB_DEVICE(LONGCHEER_VENDOR_ID, 0x9b05),	/* Longsung U8300 */
+	  .driver_info = RSVD(4) | RSVD(5) },
+	{ USB_DEVICE(LONGCHEER_VENDOR_ID, 0x9b3c),	/* Longsung U9300 */
+	  .driver_info = RSVD(0) | RSVD(4) },
 	{ USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) },
 	{ USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) },
 	{ USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) },
@@ -2272,15 +2298,29 @@ static const struct usb_device_id option_ids[] = {
 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) },	/* Fibocom FG150 Diag */
 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) },		/* Fibocom FG150 AT */
 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0111, 0xff) },			/* Fibocom FM160 (MBIM mode) */
+	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0115, 0xff),			/* Fibocom FM135 (laptop MBIM) */
+	  .driver_info = RSVD(5) },
 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) },			/* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a2, 0xff) },			/* Fibocom FM101-GL (laptop MBIM) */
 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a3, 0xff) },			/* Fibocom FM101-GL (laptop MBIM) */
 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a4, 0xff),			/* Fibocom FM101-GL (laptop MBIM) */
 	  .driver_info = RSVD(4) },
+	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a04, 0xff) },			/* Fibocom FM650-CN (ECM mode) */
+	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a05, 0xff) },			/* Fibocom FM650-CN (NCM mode) */
+	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a06, 0xff) },			/* Fibocom FM650-CN (RNDIS mode) */
+	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a07, 0xff) },			/* Fibocom FM650-CN (MBIM mode) */
 	{ USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) },			/* LongSung M5710 */
 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) },			/* GosunCn GM500 RNDIS */
 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) },			/* GosunCn GM500 MBIM */
 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) },			/* GosunCn GM500 ECM/NCM */
+	{ USB_DEVICE(0x33f8, 0x0104),						/* Rolling RW101-GL (laptop RMNET) */
+	  .driver_info = RSVD(4) | RSVD(5) },
+	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x01a2, 0xff) },			/* Rolling RW101-GL (laptop MBIM) */
+	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x01a3, 0xff) },			/* Rolling RW101-GL (laptop MBIM) */
+	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x01a4, 0xff),			/* Rolling RW101-GL (laptop MBIM) */
+	  .driver_info = RSVD(4) },
+	{ USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x0115, 0xff),			/* Rolling RW135-GL (laptop MBIM) */
+	  .driver_info = RSVD(5) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
@@ -2432,9 +2432,19 @@ bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 	r = vhost_get_avail(vq, avail_idx, &vq->avail->idx);
 	if (unlikely(r))
 		return false;
-	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);
 
-	return vq->avail_idx == vq->last_avail_idx;
+	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);
+	if (vq->avail_idx != vq->last_avail_idx) {
+		/* Since we have updated avail_idx, the following
+		 * call to vhost_get_vq_desc() will read available
+		 * ring entries. Make sure that read happens after
+		 * the avail_idx read.
+		 */
+		smp_rmb();
+		return false;
+	}
+
+	return true;
 }
 EXPORT_SYMBOL_GPL(vhost_vq_avail_empty);
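The `smp_rmb()` added above enforces a classic publish/consume ordering: the index load must be ordered before the subsequent reads of ring entries. A userspace sketch of the same pattern using C11 atomics (this is an illustration of the ordering rule, not the vhost code; names are hypothetical) — the acquire load here plays the role of the index read plus `smp_rmb()`:

```c
#include <assert.h>
#include <stdatomic.h>

enum { RING = 8 };
static int ring[RING];
static _Atomic unsigned avail_idx;

/* Producer: write the entry first, then publish the new index with
 * release semantics so the entry is visible before the index bump. */
static void publish(unsigned slot, int v)
{
	ring[slot % RING] = v;
	atomic_store_explicit(&avail_idx, slot + 1, memory_order_release);
}

/* Consumer: load the index with acquire semantics BEFORE touching the
 * ring entries, mirroring the avail_idx read + smp_rmb() pairing. */
static int consume(unsigned last_idx, int *out)
{
	unsigned idx = atomic_load_explicit(&avail_idx, memory_order_acquire);

	if (idx == last_idx)
		return 0;		/* ring empty */
	*out = ring[last_idx % RING];	/* safe: ordered after index load */
	return 1;
}
```

Without the barrier, a CPU could speculatively read a stale ring entry before observing the updated index, which is exactly the race the patch closes.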
@@ -691,6 +691,7 @@ const struct file_operations v9fs_file_operations = {
 	.lock = v9fs_file_lock,
 	.mmap = generic_file_readonly_mmap,
 	.fsync = v9fs_file_fsync,
+	.setlease = simple_nosetlease,
 };
 
 const struct file_operations v9fs_file_operations_dotl = {
@@ -726,4 +727,5 @@ const struct file_operations v9fs_mmap_file_operations_dotl = {
 	.flock = v9fs_file_flock_dotl,
 	.mmap = v9fs_mmap_file_mmap,
 	.fsync = v9fs_file_fsync_dotl,
+	.setlease = simple_nosetlease,
 };

@@ -101,7 +101,7 @@ static int p9mode2perm(struct v9fs_session_info *v9ses,
 	int res;
 	int mode = stat->mode;
 
-	res = mode & S_IALLUGO;
+	res = mode & 0777; /* S_IRWXUGO */
 	if (v9fs_proto_dotu(v9ses)) {
 		if ((mode & P9_DMSETUID) == P9_DMSETUID)
 			res |= S_ISUID;
@@ -192,6 +192,9 @@ int v9fs_uflags2omode(int uflags, int extended)
 		break;
 	}
 
+	if (uflags & O_TRUNC)
+		ret |= P9_OTRUNC;
+
 	if (extended) {
 		if (uflags & O_EXCL)
 			ret |= P9_OEXCL;

@@ -346,6 +346,7 @@ static const struct super_operations v9fs_super_ops = {
 	.alloc_inode = v9fs_alloc_inode,
 	.destroy_inode = v9fs_destroy_inode,
 	.statfs = simple_statfs,
+	.drop_inode = v9fs_drop_inode,
 	.evict_inode = v9fs_evict_inode,
 	.show_options = v9fs_show_options,
 	.umount_begin = v9fs_umount_begin,
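The `p9mode2perm()` change above narrows the copied bits to plain RWX (`mode & 0777`), adding setuid/setgid only when the 9P2000.u extension advertises them. A userspace re-implementation of that translation, useful for seeing which bits survive (the bit values match the kernel's `P9_DMSETUID`/`P9_DMSETGID`; the standalone function signature is an assumption for illustration):

```c
#include <assert.h>
#include <sys/stat.h>

#define P9_DMSETUID 0x00080000u
#define P9_DMSETGID 0x00040000u

/* Translate a 9P mode word into POSIX permission bits: plain 9P2000 only
 * carries RWX, so start from mode & 0777 and add setuid/setgid only when
 * the .u extension is negotiated. */
static int p9mode2perm(int dotu, unsigned int mode)
{
	int res = mode & 0777;	/* S_IRWXUGO: the only portable bits */

	if (dotu) {		/* 9P2000.u may carry more */
		if ((mode & P9_DMSETUID) == P9_DMSETUID)
			res |= S_ISUID;
		if ((mode & P9_DMSETGID) == P9_DMSETGID)
			res |= S_ISGID;
	}
	return res;
}
```

On a plain 9P2000 mount, extension bits in the wire mode are silently dropped instead of being misinterpreted as POSIX mode bits.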
@@ -2236,20 +2236,14 @@ struct btrfs_data_container *init_data_container(u32 total_bytes)
 	size_t alloc_bytes;
 
 	alloc_bytes = max_t(size_t, total_bytes, sizeof(*data));
-	data = kvmalloc(alloc_bytes, GFP_KERNEL);
+	data = kvzalloc(alloc_bytes, GFP_KERNEL);
 	if (!data)
 		return ERR_PTR(-ENOMEM);
 
-	if (total_bytes >= sizeof(*data)) {
+	if (total_bytes >= sizeof(*data))
 		data->bytes_left = total_bytes - sizeof(*data);
-		data->bytes_missing = 0;
-	} else {
+	else
 		data->bytes_missing = sizeof(*data) - total_bytes;
-		data->bytes_left = 0;
-	}
-
-	data->elem_cnt = 0;
-	data->elem_missed = 0;
 
 	return data;
 }

@@ -1133,6 +1133,9 @@ __btrfs_commit_inode_delayed_items(struct btrfs_trans_handle *trans,
 	if (ret)
 		return ret;
 
+	ret = btrfs_record_root_in_trans(trans, node->root);
+	if (ret)
+		return ret;
 	ret = btrfs_update_delayed_inode(trans, node->root, path, node);
 	return ret;
 }

@@ -1906,7 +1906,7 @@ static void btrfs_clear_bit_hook(void *private_data,
 		 */
 		if (*bits & EXTENT_CLEAR_META_RESV &&
 		    root != fs_info->tree_root)
-			btrfs_delalloc_release_metadata(inode, len, false);
+			btrfs_delalloc_release_metadata(inode, len, true);
 
 		/* For sanity tests. */
 		if (btrfs_is_testing(fs_info))

@@ -1271,6 +1271,7 @@ static noinline int commit_fs_roots(struct btrfs_trans_handle *trans)
 			radix_tree_tag_clear(&fs_info->fs_roots_radix,
 					(unsigned long)root->root_key.objectid,
 					BTRFS_ROOT_TRANS_TAG);
+			btrfs_qgroup_free_meta_all_pertrans(root);
 			spin_unlock(&fs_info->fs_roots_radix_lock);
 
 			btrfs_free_log(trans, root);
@@ -1295,7 +1296,6 @@ static noinline int commit_fs_roots(struct btrfs_trans_handle *trans)
 			if (ret2)
 				return ret2;
 			spin_lock(&fs_info->fs_roots_radix_lock);
-			btrfs_qgroup_free_meta_all_pertrans(root);
 		}
 	}
 	spin_unlock(&fs_info->fs_roots_radix_lock);

@@ -2957,6 +2957,7 @@ static int btrfs_relocate_sys_chunks(struct btrfs_fs_info *fs_info)
 			 * alignment and size).
 			 */
			ret = -EUCLEAN;
+			mutex_unlock(&fs_info->delete_unused_bgs_mutex);
 			goto error;
 		}
@@ -1751,7 +1751,8 @@ static int punch_hole(struct gfs2_inode *ip, u64 offset, u64 length)
 	struct buffer_head *dibh, *bh;
 	struct gfs2_holder rd_gh;
 	unsigned int bsize_shift = sdp->sd_sb.sb_bsize_shift;
-	u64 lblock = (offset + (1 << bsize_shift) - 1) >> bsize_shift;
+	unsigned int bsize = 1 << bsize_shift;
+	u64 lblock = (offset + bsize - 1) >> bsize_shift;
 	__u16 start_list[GFS2_MAX_META_HEIGHT];
 	__u16 __end_list[GFS2_MAX_META_HEIGHT], *end_list = NULL;
 	unsigned int start_aligned, end_aligned;
@@ -1762,7 +1763,7 @@ static int punch_hole(struct gfs2_inode *ip, u64 offset, u64 length)
 	u64 prev_bnr = 0;
 	__be64 *start, *end;
 
-	if (offset >= maxsize) {
+	if (offset + bsize - 1 >= maxsize) {
 		/*
 		 * The starting point lies beyond the allocated meta-data;
 		 * there are no blocks do deallocate.

@@ -243,7 +243,7 @@ nilfs_filetype_table[NILFS_FT_MAX] = {
 
 #define S_SHIFT 12
 static unsigned char
-nilfs_type_by_mode[S_IFMT >> S_SHIFT] = {
+nilfs_type_by_mode[(S_IFMT >> S_SHIFT) + 1] = {
 	[S_IFREG >> S_SHIFT]	= NILFS_FT_REG_FILE,
 	[S_IFDIR >> S_SHIFT]	= NILFS_FT_DIR,
 	[S_IFCHR >> S_SHIFT]	= NILFS_FT_CHRDEV,

@@ -431,6 +431,8 @@ struct kernfs_node *sysfs_break_active_protection(struct kobject *kobj,
 	kn = kernfs_find_and_get(kobj->sd, attr->name);
 	if (kn)
 		kernfs_break_active_protection(kn);
+	else
+		kobject_put(kobj);
 	return kn;
 }
 EXPORT_SYMBOL_GPL(sysfs_break_active_protection);
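The gfs2 fix above is round-up block arithmetic: the first logical block of the hole is the start offset rounded up to a block boundary, and the hole only touches allocated metadata if that rounded-up position is still inside the file. A small sketch of the two expressions, with `bsize_shift` being log2 of the block size (helper names are mine, for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* First logical block covered by a hole starting at offset:
 * round-up division by the block size (a power of two). */
static uint64_t first_lblock(uint64_t offset, unsigned bsize_shift)
{
	uint64_t bsize = 1ull << bsize_shift;

	return (offset + bsize - 1) >> bsize_shift;
}

/* The hole starts beyond the allocated data iff even its first full
 * block boundary lies at or past maxsize — the corrected check. */
static int beyond_eof(uint64_t offset, unsigned bsize_shift, uint64_t maxsize)
{
	uint64_t bsize = 1ull << bsize_shift;

	return offset + bsize - 1 >= maxsize;
}
```

The old `offset >= maxsize` test missed the case where `offset` sits in the final, partially-used block, which is what allowed the invalid metadata access.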
@@ -522,6 +522,52 @@ static inline unsigned long compare_ether_header(const void *a, const void *b)
 #endif
 }
 
+/**
+ * eth_hw_addr_gen - Generate and assign Ethernet address to a port
+ * @dev: pointer to port's net_device structure
+ * @base_addr: base Ethernet address
+ * @id: offset to add to the base address
+ *
+ * Generate a MAC address using a base address and an offset and assign it
+ * to a net_device. Commonly used by switch drivers which need to compute
+ * addresses for all their ports. addr_assign_type is not changed.
+ */
+static inline void eth_hw_addr_gen(struct net_device *dev, const u8 *base_addr,
+				   unsigned int id)
+{
+	u64 u = ether_addr_to_u64(base_addr);
+	u8 addr[ETH_ALEN];
+
+	u += id;
+	u64_to_ether_addr(u, addr);
+	eth_hw_addr_set(dev, addr);
+}
+
+/**
+ * eth_skb_pkt_type - Assign packet type if destination address does not match
+ * @skb: Assigned a packet type if address does not match @dev address
+ * @dev: Network device used to compare packet address against
+ *
+ * If the destination MAC address of the packet does not match the network
+ * device address, assign an appropriate packet type.
+ */
+static inline void eth_skb_pkt_type(struct sk_buff *skb,
+				    const struct net_device *dev)
+{
+	const struct ethhdr *eth = eth_hdr(skb);
+
+	if (unlikely(!ether_addr_equal_64bits(eth->h_dest, dev->dev_addr))) {
+		if (unlikely(is_multicast_ether_addr_64bits(eth->h_dest))) {
+			if (ether_addr_equal_64bits(eth->h_dest, dev->broadcast))
+				skb->pkt_type = PACKET_BROADCAST;
+			else
+				skb->pkt_type = PACKET_MULTICAST;
+		} else {
+			skb->pkt_type = PACKET_OTHERHOST;
+		}
+	}
+}
+
 /**
  * eth_skb_pad - Pad buffer to mininum number of octets for Ethernet frame
  * @skb: Buffer to pad
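The `eth_hw_addr_gen()` helper added above works by treating the 6-byte MAC as a big-endian 64-bit integer, adding the port id, and converting back, so carries propagate across bytes. A userspace sketch of that round trip (re-implementing the conversion helpers rather than using the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define ETH_ALEN 6

/* Pack a MAC into a u64, most significant byte first, mirroring the
 * kernel's ether_addr_to_u64(). */
static uint64_t mac_to_u64(const uint8_t *addr)
{
	uint64_t u = 0;

	for (int i = 0; i < ETH_ALEN; i++)
		u = (u << 8) | addr[i];
	return u;
}

/* Inverse conversion, mirroring u64_to_ether_addr(). */
static void u64_to_mac(uint64_t u, uint8_t *addr)
{
	for (int i = ETH_ALEN - 1; i >= 0; i--, u >>= 8)
		addr[i] = u & 0xff;
}

/* Derive a per-port MAC: base address plus port id, with byte carry. */
static void gen_port_mac(uint8_t *dst, const uint8_t *base, unsigned int id)
{
	u64_to_mac(mac_to_u64(base) + id, dst);
}
```

The integer detour is what makes `base = ...:00:ff` plus `id = 1` correctly become `...:01:00` instead of wrapping a single byte.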
@@ -264,6 +264,85 @@ struct uart_port {
 	void			*private_data;		/* generic platform data pointer */
 };
 
+/**
+ * uart_port_lock - Lock the UART port
+ * @up: Pointer to UART port structure
+ */
+static inline void uart_port_lock(struct uart_port *up)
+{
+	spin_lock(&up->lock);
+}
+
+/**
+ * uart_port_lock_irq - Lock the UART port and disable interrupts
+ * @up: Pointer to UART port structure
+ */
+static inline void uart_port_lock_irq(struct uart_port *up)
+{
+	spin_lock_irq(&up->lock);
+}
+
+/**
+ * uart_port_lock_irqsave - Lock the UART port, save and disable interrupts
+ * @up: Pointer to UART port structure
+ * @flags: Pointer to interrupt flags storage
+ */
+static inline void uart_port_lock_irqsave(struct uart_port *up, unsigned long *flags)
+{
+	spin_lock_irqsave(&up->lock, *flags);
+}
+
+/**
+ * uart_port_trylock - Try to lock the UART port
+ * @up: Pointer to UART port structure
+ *
+ * Returns: True if lock was acquired, false otherwise
+ */
+static inline bool uart_port_trylock(struct uart_port *up)
+{
+	return spin_trylock(&up->lock);
+}
+
+/**
+ * uart_port_trylock_irqsave - Try to lock the UART port, save and disable interrupts
+ * @up: Pointer to UART port structure
+ * @flags: Pointer to interrupt flags storage
+ *
+ * Returns: True if lock was acquired, false otherwise
+ */
+static inline bool uart_port_trylock_irqsave(struct uart_port *up, unsigned long *flags)
+{
+	return spin_trylock_irqsave(&up->lock, *flags);
+}
+
+/**
+ * uart_port_unlock - Unlock the UART port
+ * @up: Pointer to UART port structure
+ */
+static inline void uart_port_unlock(struct uart_port *up)
+{
+	spin_unlock(&up->lock);
+}
+
+/**
+ * uart_port_unlock_irq - Unlock the UART port and re-enable interrupts
+ * @up: Pointer to UART port structure
+ */
+static inline void uart_port_unlock_irq(struct uart_port *up)
+{
+	spin_unlock_irq(&up->lock);
+}
+
+/**
+ * uart_port_unlock_irqrestore - Unlock the UART port, restore interrupts
+ * @up: Pointer to UART port structure
+ * @flags: The saved interrupt flags for restore
+ */
+static inline void uart_port_unlock_irqrestore(struct uart_port *up, unsigned long flags)
+{
+	spin_unlock_irqrestore(&up->lock, flags);
+}
+
 static inline int serial_port_in(struct uart_port *up, int offset)
 {
 	return up->serial_in(up, offset);
@@ -494,4 +494,24 @@ static inline void memcpy_and_pad(void *dest, size_t dest_len,
 		memcpy(dest, src, dest_len);
 }
 
+/**
+ * str_has_prefix - Test if a string has a given prefix
+ * @str: The string to test
+ * @prefix: The string to see if @str starts with
+ *
+ * A common way to test a prefix of a string is to do:
+ *  strncmp(str, prefix, sizeof(prefix) - 1)
+ *
+ * But this can lead to bugs due to typos, or if prefix is a pointer
+ * and not a constant. Instead use str_has_prefix().
+ *
+ * Returns: 0 if @str does not start with @prefix
+ *         strlen(@prefix) if @str does start with @prefix
+ */
+static __always_inline size_t str_has_prefix(const char *str, const char *prefix)
+{
+	size_t len = strlen(prefix);
+	return strncmp(str, prefix, len) == 0 ? len : 0;
+}
+
 #endif /* _LINUX_STRING_H_ */
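The `str_has_prefix()` helper introduced above returns the prefix length on a match, so a caller can test and skip the prefix in one step, which is how the tracing code later replaces its hard-coded `strncmp(cmp, "no", 2)` checks. A portable userspace copy of the helper, for experimenting with the return-value convention:

```c
#include <assert.h>
#include <string.h>

/* Returns 0 when str does not start with prefix, strlen(prefix) when it
 * does — so the result doubles as the number of bytes to skip. */
static size_t str_has_prefix(const char *str, const char *prefix)
{
	size_t len = strlen(prefix);

	return strncmp(str, prefix, len) == 0 ? len : 0;
}
```

Typical use matches the trace_set_options change below: `size_t n = str_has_prefix(cmp, "no"); if (n) cmp += n;` avoids both the magic length constant and the `sizeof`-on-a-pointer pitfall the kerneldoc warns about.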
@@ -416,7 +416,7 @@ struct trace_event_file {
 	}								\
 	early_initcall(trace_init_perf_perm_##name);
 
-#define PERF_MAX_TRACE_SIZE	2048
+#define PERF_MAX_TRACE_SIZE	8192
 
 #define MAX_FILTER_STR_VAL	256	/* Should handle KSYM_SYMBOL_LEN */

@@ -531,8 +531,6 @@ extern int trace_event_raw_init(struct trace_event_call *call);
 extern int trace_define_field(struct trace_event_call *call, const char *type,
 			      const char *name, int offset, int size,
 			      int is_signed, int filter_type);
-extern int trace_add_event_call_nolock(struct trace_event_call *call);
-extern int trace_remove_event_call_nolock(struct trace_event_call *call);
 extern int trace_add_event_call(struct trace_event_call *call);
 extern int trace_remove_event_call(struct trace_event_call *call);
 extern int trace_event_get_offsets(struct trace_event_call *call);
@@ -455,6 +455,10 @@ static inline void in6_ifa_hold(struct inet6_ifaddr *ifp)
 	refcount_inc(&ifp->refcnt);
 }
 
+static inline bool in6_ifa_hold_safe(struct inet6_ifaddr *ifp)
+{
+	return refcount_inc_not_zero(&ifp->refcnt);
+}
+
 /*
  * compute link-local solicited-node multicast address

@@ -52,7 +52,7 @@ struct unix_sock {
 	struct mutex		iolock, bindlock;
 	struct sock		*peer;
 	struct list_head	link;
-	atomic_long_t		inflight;
+	unsigned long		inflight;
 	spinlock_t		lock;
 	unsigned long		gc_flags;
 #define UNIX_GC_CANDIDATE	0
@@ -72,6 +72,9 @@ enum unix_socket_lock_class {
 	U_LOCK_NORMAL,
 	U_LOCK_SECOND,	/* for double locking, see unix_state_double_lock(). */
 	U_LOCK_DIAG, /* used while dumping icons, see sk_diag_dump_icons(). */
+	U_LOCK_GC_LISTENER, /* used for listening socket while determining gc
+			     * candidates to close a small race window.
+			     */
 };
 
 static inline void unix_state_lock_nested(struct sock *sk,
@@ -24,7 +24,7 @@ struct dst_ops {
 	void			(*destroy)(struct dst_entry *);
 	void			(*ifdown)(struct dst_entry *,
 					  struct net_device *dev, int how);
-	struct dst_entry *	(*negative_advice)(struct dst_entry *);
+	void			(*negative_advice)(struct sock *sk, struct dst_entry *);
 	void			(*link_failure)(struct sk_buff *);
 	void			(*update_pmtu)(struct dst_entry *dst, struct sock *sk,
 					       struct sk_buff *skb, u32 mtu,

@@ -348,6 +348,39 @@ static inline bool pskb_inet_may_pull(struct sk_buff *skb)
 	return pskb_network_may_pull(skb, nhlen);
 }
 
+/* Variant of pskb_inet_may_pull().
+ */
+static inline bool skb_vlan_inet_prepare(struct sk_buff *skb)
+{
+	int nhlen = 0, maclen = ETH_HLEN;
+	__be16 type = skb->protocol;
+
+	/* Essentially this is skb_protocol(skb, true)
+	 * And we get MAC len.
+	 */
+	if (eth_type_vlan(type))
+		type = __vlan_get_protocol(skb, type, &maclen);
+
+	switch (type) {
+#if IS_ENABLED(CONFIG_IPV6)
+	case htons(ETH_P_IPV6):
+		nhlen = sizeof(struct ipv6hdr);
+		break;
+#endif
+	case htons(ETH_P_IP):
+		nhlen = sizeof(struct iphdr);
+		break;
+	}
+	/* For ETH_P_IPV6/ETH_P_IP we make sure to pull
+	 * a base network header in skb->head.
+	 */
+	if (!pskb_may_pull(skb, maclen + nhlen))
+		return false;
+
+	skb_set_network_header(skb, maclen);
+	return true;
+}
+
 static inline int ip_encap_hlen(struct ip_tunnel_encap *e)
 {
 	const struct ip_tunnel_encap_ops *ops;

@@ -1924,19 +1924,12 @@ sk_dst_get(struct sock *sk)
 
 static inline void dst_negative_advice(struct sock *sk)
 {
-	struct dst_entry *ndst, *dst = __sk_dst_get(sk);
+	struct dst_entry *dst = __sk_dst_get(sk);
 
 	sk_rethink_txhash(sk);
 
-	if (dst && dst->ops->negative_advice) {
-		ndst = dst->ops->negative_advice(dst);
-
-		if (ndst != dst) {
-			rcu_assign_pointer(sk->sk_dst_cache, ndst);
-			sk_tx_queue_clear(sk);
-			WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
-		}
-	}
+	if (dst && dst->ops->negative_advice)
+		dst->ops->negative_advice(sk, dst);
 }
 
 static inline void
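The `skb_vlan_inet_prepare()` helper added above is essentially length bookkeeping: given the (possibly VLAN-tagged) ethertype, decide how many link-plus-network header bytes must be linear before the tunnel code may touch the inner header. A standalone sketch of that computation (the function and its `vlan_tags` parameter are illustrative, not kernel API; the constants match the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define ETH_HLEN	14
#define VLAN_HLEN	4
#define ETH_P_IP	0x0800
#define ETH_P_IPV6	0x86DD

/* How many bytes pskb_may_pull() must secure: the full MAC header
 * (including any VLAN tags) plus a base IPv4/IPv6 header when the
 * ethertype is one we know how to parse. */
static int min_pull_len(uint16_t type, int vlan_tags)
{
	int maclen = ETH_HLEN + vlan_tags * VLAN_HLEN;
	int nhlen = 0;

	if (type == ETH_P_IP)
		nhlen = 20;	/* sizeof(struct iphdr) */
	else if (type == ETH_P_IPV6)
		nhlen = 40;	/* sizeof(struct ipv6hdr) */
	return maclen + nhlen;
}
```

The point of the helper is that a VLAN tag shifts the network header by 4 bytes per tag, so checking `ETH_HLEN + nhlen` alone would validate too few bytes for tagged frames.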
@@ -22,8 +22,8 @@
 #define	RUSAGE_THREAD	1		/* only the calling thread */
 
 struct	rusage {
-	struct __kernel_old_timeval ru_utime;	/* user time used */
-	struct __kernel_old_timeval ru_stime;	/* system time used */
+	struct timeval ru_utime;	/* user time used */
+	struct timeval ru_stime;	/* system time used */
 	__kernel_long_t	ru_maxrss;	/* maximum resident set size */
 	__kernel_long_t	ru_ixrss;	/* integral shared memory size */
 	__kernel_long_t	ru_idrss;	/* integral unshared data size */

@@ -1565,10 +1565,17 @@ static int check_kprobe_address_safe(struct kprobe *p,
 	jump_label_lock();
 	preempt_disable();
 
-	/* Ensure it is not in reserved area nor out of text */
-	if (!(core_kernel_text((unsigned long) p->addr) ||
-	    is_module_text_address((unsigned long) p->addr)) ||
-	    in_gate_area_no_mm((unsigned long) p->addr) ||
+	/* Ensure the address is in a text area, and find a module if exists. */
+	*probed_mod = NULL;
+	if (!core_kernel_text((unsigned long) p->addr)) {
+		*probed_mod = __module_text_address((unsigned long) p->addr);
+		if (!(*probed_mod)) {
+			ret = -EINVAL;
+			goto out;
+		}
+	}
+	/* Ensure it is not in reserved area. */
+	if (in_gate_area_no_mm((unsigned long) p->addr) ||
 	    within_kprobe_blacklist((unsigned long) p->addr) ||
 	    jump_label_text_reserved(p->addr, p->addr) ||
 	    find_bug((unsigned long)p->addr)) {
@@ -1576,8 +1583,7 @@ static int check_kprobe_address_safe(struct kprobe *p,
 		goto out;
 	}
 
-	/* Check if are we probing a module */
-	*probed_mod = __module_text_address((unsigned long) p->addr);
+	/* Get module refcount and reject __init functions for loaded modules. */
 	if (*probed_mod) {
 		/*
 		 * We must hold a refcount of the probed module while updating

@@ -1803,8 +1803,8 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
 
 out_children:
 	r->ru_maxrss = maxrss * (PAGE_SIZE / 1024); /* convert pages to KBs */
-	r->ru_utime = ns_to_kernel_old_timeval(utime);
-	r->ru_stime = ns_to_kernel_old_timeval(stime);
+	r->ru_utime = ns_to_timeval(utime);
+	r->ru_stime = ns_to_timeval(stime);
 }
 
 SYSCALL_DEFINE2(getrusage, int, who, struct rusage __user *, ru)
@@ -546,6 +546,9 @@ config BPF_EVENTS
help
This allows the user to attach BPF programs to kprobe events.

config DYNAMIC_EVENTS
def_bool n

config PROBE_EVENTS
def_bool n

@@ -658,6 +661,7 @@ config HIST_TRIGGERS
depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
select TRACING_MAP
select TRACING
select DYNAMIC_EVENTS
default n
help
Hist triggers allow one or more arbitrary trace event fields
@@ -78,6 +78,7 @@ endif
ifeq ($(CONFIG_TRACING),y)
obj-$(CONFIG_KGDB_KDB) += trace_kdb.o
endif
obj-$(CONFIG_DYNAMIC_EVENTS) += trace_dynevent.o
obj-$(CONFIG_PROBE_EVENTS) += trace_probe.o
obj-$(CONFIG_UPROBE_EVENTS) += trace_uprobe.o
@@ -4476,7 +4476,7 @@ static int trace_set_options(struct trace_array *tr, char *option)

cmp = strstrip(option);

if (strncmp(cmp, "no", 2) == 0) {
if (str_has_prefix(cmp, "no")) {
neg = 1;
cmp += 2;
}
@@ -4671,6 +4671,10 @@ static const char readme_msg[] =
"\t\t\t traces\n"
#endif
#endif /* CONFIG_STACK_TRACER */
#ifdef CONFIG_DYNAMIC_EVENTS
" dynamic_events\t\t- Add/remove/show the generic dynamic events\n"
"\t\t\t Write into this file to define/undefine new trace events.\n"
#endif
#ifdef CONFIG_KPROBE_EVENTS
" kprobe_events\t\t- Add/remove/show the kernel dynamic events\n"
"\t\t\t Write into this file to define/undefine new trace events.\n"
@@ -4683,6 +4687,9 @@ static const char readme_msg[] =
"\t accepts: event-definitions (one definition per line)\n"
"\t Format: p[:[<group>/]<event>] <place> [<args>]\n"
"\t r[maxactive][:[<group>/]<event>] <place> [<args>]\n"
#ifdef CONFIG_HIST_TRIGGERS
"\t s:[synthetic/]<event> <field> [<field>]\n"
#endif
"\t -:[<group>/]<event>\n"
#ifdef CONFIG_KPROBE_EVENTS
"\t place: [<module>:]<symbol>[+<offset>]|<memaddr>\n"
@@ -4696,6 +4703,11 @@ static const char readme_msg[] =
"\t $stack<index>, $stack, $retval, $comm\n"
"\t type: s8/16/32/64, u8/16/32/64, x8/16/32/64, string,\n"
"\t b<bit-width>@<bit-offset>/<container-size>\n"
#ifdef CONFIG_HIST_TRIGGERS
"\t field: <stype> <name>;\n"
"\t stype: u8/u16/u32/u64, s8/s16/s32/s64, pid_t,\n"
"\t [unsigned] char/int/long\n"
#endif
#endif
" events/\t\t- Directory containing all trace event subsystems:\n"
" enable\t\t- Write 0/1 to enable/disable tracing of all events\n"
@@ -4748,6 +4760,7 @@ static const char readme_msg[] =
"\t [:size=#entries]\n"
"\t [:pause][:continue][:clear]\n"
"\t [:name=histname1]\n"
"\t [:<handler>.<action>]\n"
"\t [if <filter>]\n\n"
"\t Note, special fields can be used as well:\n"
"\t common_timestamp - to record current timestamp\n"
@@ -4793,7 +4806,16 @@ static const char readme_msg[] =
"\t The enable_hist and disable_hist triggers can be used to\n"
"\t have one event conditionally start and stop another event's\n"
"\t already-attached hist trigger. The syntax is analagous to\n"
"\t the enable_event and disable_event triggers.\n"
"\t the enable_event and disable_event triggers.\n\n"
"\t Hist trigger handlers and actions are executed whenever a\n"
"\t a histogram entry is added or updated. They take the form:\n\n"
"\t <handler>.<action>\n\n"
"\t The available handlers are:\n\n"
"\t onmatch(matching.event) - invoke on addition or update\n"
"\t onmax(var) - invoke if var exceeds current max\n\n"
"\t The available actions are:\n\n"
"\t <synthetic_event>(param list) - generate synthetic event\n"
"\t save(field,...) - save current event fields\n"
#endif
;
kernel/trace/trace_dynevent.c (new file, 210 lines)
@@ -0,0 +1,210 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Generic dynamic event control interface
*
* Copyright (C) 2018 Masami Hiramatsu <mhiramat@kernel.org>
*/

#include <linux/debugfs.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/tracefs.h>

#include "trace.h"
#include "trace_dynevent.h"

static DEFINE_MUTEX(dyn_event_ops_mutex);
static LIST_HEAD(dyn_event_ops_list);

int dyn_event_register(struct dyn_event_operations *ops)
{
if (!ops || !ops->create || !ops->show || !ops->is_busy ||
!ops->free || !ops->match)
return -EINVAL;

INIT_LIST_HEAD(&ops->list);
mutex_lock(&dyn_event_ops_mutex);
list_add_tail(&ops->list, &dyn_event_ops_list);
mutex_unlock(&dyn_event_ops_mutex);
return 0;
}

int dyn_event_release(int argc, char **argv, struct dyn_event_operations *type)
{
struct dyn_event *pos, *n;
char *system = NULL, *event, *p;
int ret = -ENOENT;

if (argv[0][1] != ':')
return -EINVAL;

event = &argv[0][2];
p = strchr(event, '/');
if (p) {
system = event;
event = p + 1;
*p = '\0';
}
if (event[0] == '\0')
return -EINVAL;

mutex_lock(&event_mutex);
for_each_dyn_event_safe(pos, n) {
if (type && type != pos->ops)
continue;
if (pos->ops->match(system, event, pos)) {
ret = pos->ops->free(pos);
break;
}
}
mutex_unlock(&event_mutex);

return ret;
}

static int create_dyn_event(int argc, char **argv)
{
struct dyn_event_operations *ops;
int ret;

if (argv[0][0] == '-')
return dyn_event_release(argc, argv, NULL);

mutex_lock(&dyn_event_ops_mutex);
list_for_each_entry(ops, &dyn_event_ops_list, list) {
ret = ops->create(argc, (const char **)argv);
if (!ret || ret != -ECANCELED)
break;
}
mutex_unlock(&dyn_event_ops_mutex);
if (ret == -ECANCELED)
ret = -EINVAL;

return ret;
}

/* Protected by event_mutex */
LIST_HEAD(dyn_event_list);

void *dyn_event_seq_start(struct seq_file *m, loff_t *pos)
{
mutex_lock(&event_mutex);
return seq_list_start(&dyn_event_list, *pos);
}

void *dyn_event_seq_next(struct seq_file *m, void *v, loff_t *pos)
{
return seq_list_next(v, &dyn_event_list, pos);
}

void dyn_event_seq_stop(struct seq_file *m, void *v)
{
mutex_unlock(&event_mutex);
}

static int dyn_event_seq_show(struct seq_file *m, void *v)
{
struct dyn_event *ev = v;

if (ev && ev->ops)
return ev->ops->show(m, ev);

return 0;
}

static const struct seq_operations dyn_event_seq_op = {
.start = dyn_event_seq_start,
.next = dyn_event_seq_next,
.stop = dyn_event_seq_stop,
.show = dyn_event_seq_show
};

/*
* dyn_events_release_all - Release all events of the specified type
* @type: the dyn_event_operations * which filters releasing events
*
* This releases all events whose ->ops matches @type. If @type is NULL,
* all events are released.
* Return -EBUSY if any of them are in use, and return other errors when
* it failed to free the given event. Except for -EBUSY, event releasing
* will be aborted at that point and there may be some other
* releasable events left on the list.
*/
int dyn_events_release_all(struct dyn_event_operations *type)
{
struct dyn_event *ev, *tmp;
int ret = 0;

mutex_lock(&event_mutex);
for_each_dyn_event(ev) {
if (type && ev->ops != type)
continue;
if (ev->ops->is_busy(ev)) {
ret = -EBUSY;
goto out;
}
}
for_each_dyn_event_safe(ev, tmp) {
if (type && ev->ops != type)
continue;
ret = ev->ops->free(ev);
if (ret)
break;
}
out:
mutex_unlock(&event_mutex);

return ret;
}

static int dyn_event_open(struct inode *inode, struct file *file)
{
int ret;

if ((file->f_mode & FMODE_WRITE) && (file->f_flags & O_TRUNC)) {
ret = dyn_events_release_all(NULL);
if (ret < 0)
return ret;
}

return seq_open(file, &dyn_event_seq_op);
}

static ssize_t dyn_event_write(struct file *file, const char __user *buffer,
size_t count, loff_t *ppos)
{
return trace_parse_run_command(file, buffer, count, ppos,
create_dyn_event);
}

static const struct file_operations dynamic_events_ops = {
.owner = THIS_MODULE,
.open = dyn_event_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release,
.write = dyn_event_write,
};

/* Make a tracefs interface for controlling dynamic events */
static __init int init_dynamic_event(void)
{
struct dentry *d_tracer;
struct dentry *entry;

d_tracer = tracing_init_dentry();
if (IS_ERR(d_tracer))
return 0;

entry = tracefs_create_file("dynamic_events", 0644, d_tracer,
NULL, &dynamic_events_ops);

/* Event list interface */
if (!entry)
pr_warn("Could not create tracefs 'dynamic_events' entry\n");

return 0;
}
fs_initcall(init_dynamic_event);
kernel/trace/trace_dynevent.h (new file, 119 lines)
@@ -0,0 +1,119 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Common header file for generic dynamic events.
*/

#ifndef _TRACE_DYNEVENT_H
#define _TRACE_DYNEVENT_H

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/seq_file.h>

#include "trace.h"

struct dyn_event;

/**
* struct dyn_event_operations - Methods for each type of dynamic events
*
* These methods must be set for each type, since there is no default method.
* Before using this for dyn_event_init(), it must be registered by
* dyn_event_register().
*
* @create: Parse and create event method. This is invoked when a user passes
* an event definition to the dynamic_events interface. This must not destruct
* the arguments, and must return -ECANCELED if the given arguments don't match
* its command prefix.
* @show: Showing method. This is invoked when a user reads the event definitions
* via the dynamic_events interface.
* @is_busy: Check whether the given event is busy so that it can not be deleted.
* Return true if it is busy, otherwise false.
* @free: Delete the given event. Return 0 on success, otherwise an error.
* @match: Check whether the given event and system name match this event.
* Return true if they match, otherwise false.
*
* Except for @create, these methods are called while holding event_mutex.
*/
struct dyn_event_operations {
struct list_head list;
int (*create)(int argc, const char *argv[]);
int (*show)(struct seq_file *m, struct dyn_event *ev);
bool (*is_busy)(struct dyn_event *ev);
int (*free)(struct dyn_event *ev);
bool (*match)(const char *system, const char *event,
struct dyn_event *ev);
};

/* Register new dyn_event type -- must be called first */
int dyn_event_register(struct dyn_event_operations *ops);

/**
* struct dyn_event - Dynamic event list header
*
* The dyn_event structure encapsulates a list and a pointer to the operations
* for making a global list of dynamic events.
* Users must include this in each event structure, so that those events can
* be added/removed via the dynamic_events interface.
*/
struct dyn_event {
struct list_head list;
struct dyn_event_operations *ops;
};

extern struct list_head dyn_event_list;

static inline
int dyn_event_init(struct dyn_event *ev, struct dyn_event_operations *ops)
{
if (!ev || !ops)
return -EINVAL;

INIT_LIST_HEAD(&ev->list);
ev->ops = ops;
return 0;
}

static inline int dyn_event_add(struct dyn_event *ev)
{
lockdep_assert_held(&event_mutex);

if (!ev || !ev->ops)
return -EINVAL;

list_add_tail(&ev->list, &dyn_event_list);
return 0;
}

static inline void dyn_event_remove(struct dyn_event *ev)
{
lockdep_assert_held(&event_mutex);
list_del_init(&ev->list);
}

void *dyn_event_seq_start(struct seq_file *m, loff_t *pos);
void *dyn_event_seq_next(struct seq_file *m, void *v, loff_t *pos);
void dyn_event_seq_stop(struct seq_file *m, void *v);
int dyn_events_release_all(struct dyn_event_operations *type);
int dyn_event_release(int argc, char **argv, struct dyn_event_operations *type);

/*
* for_each_dyn_event - iterate over the dyn_event list
* @pos: the struct dyn_event * to use as a loop cursor
*
* This is just the base for_each macro. Wrap it for
* each actual event structure with ops filtering.
*/
#define for_each_dyn_event(pos) \
list_for_each_entry(pos, &dyn_event_list, list)

/*
* for_each_dyn_event_safe - iterate over the dyn_event list safely
* @pos: the struct dyn_event * to use as a loop cursor
* @n: the struct dyn_event * to use as temporary storage
*/
#define for_each_dyn_event_safe(pos, n) \
list_for_each_entry_safe(pos, n, &dyn_event_list, list)

#endif
@@ -399,7 +399,8 @@ void *perf_trace_buf_alloc(int size, struct pt_regs **regs, int *rctxp)
BUILD_BUG_ON(PERF_MAX_TRACE_SIZE % sizeof(unsigned long));

if (WARN_ONCE(size > PERF_MAX_TRACE_SIZE,
"perf buffer not large enough"))
"perf buffer not large enough, wanted %d, have %d",
size, PERF_MAX_TRACE_SIZE))
return NULL;

*rctxp = rctx = perf_swevent_get_recursion_context();
@@ -1249,7 +1249,7 @@ static int f_show(struct seq_file *m, void *v)
*/
array_descriptor = strchr(field->type, '[');

if (!strncmp(field->type, "__data_loc", 10))
if (str_has_prefix(field->type, "__data_loc"))
array_descriptor = NULL;

if (!array_descriptor)
@@ -1309,6 +1309,7 @@ static int trace_format_open(struct inode *inode, struct file *file)
return 0;
}

#ifdef CONFIG_PERF_EVENTS
static ssize_t
event_id_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
{
@@ -1323,6 +1324,7 @@ event_id_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)

return simple_read_from_buffer(ubuf, cnt, ppos, buf, len);
}
#endif

static ssize_t
event_filter_read(struct file *filp, char __user *ubuf, size_t cnt,
@@ -1727,10 +1729,12 @@ static const struct file_operations ftrace_event_format_fops = {
.release = seq_release,
};

#ifdef CONFIG_PERF_EVENTS
static const struct file_operations ftrace_event_id_fops = {
.read = event_id_read,
.llseek = default_llseek,
};
#endif

static const struct file_operations ftrace_event_filter_fops = {
.open = tracing_open_generic,
@@ -2308,7 +2312,8 @@ __trace_early_add_new_event(struct trace_event_call *call,
struct ftrace_module_file_ops;
static void __add_event_to_tracers(struct trace_event_call *call);

int trace_add_event_call_nolock(struct trace_event_call *call)
/* Add an additional event_call dynamically */
int trace_add_event_call(struct trace_event_call *call)
{
int ret;
lockdep_assert_held(&event_mutex);
@@ -2323,17 +2328,6 @@ int trace_add_event_call_nolock(struct trace_event_call *call)
return ret;
}

/* Add an additional event_call dynamically */
int trace_add_event_call(struct trace_event_call *call)
{
int ret;

mutex_lock(&event_mutex);
ret = trace_add_event_call_nolock(call);
mutex_unlock(&event_mutex);
return ret;
}

/*
* Must be called under locking of trace_types_lock, event_mutex and
* trace_event_sem.
@@ -2379,8 +2373,8 @@ static int probe_remove_event_call(struct trace_event_call *call)
return 0;
}

/* no event_mutex version */
int trace_remove_event_call_nolock(struct trace_event_call *call)
/* Remove an event_call */
int trace_remove_event_call(struct trace_event_call *call)
{
int ret;

@@ -2395,18 +2389,6 @@ int trace_remove_event_call_nolock(struct trace_event_call *call)
return ret;
}

/* Remove an event_call */
int trace_remove_event_call(struct trace_event_call *call)
{
int ret;

mutex_lock(&event_mutex);
ret = trace_remove_event_call_nolock(call);
mutex_unlock(&event_mutex);

return ret;
}

#define for_each_event(event, start, end) \
for (event = start; \
(unsigned long)event < (unsigned long)end; \
File diff suppressed because it is too large.
@@ -1133,10 +1133,8 @@ register_snapshot_trigger(char *glob, struct event_trigger_ops *ops,
struct event_trigger_data *data,
struct trace_event_file *file)
{
int ret = tracing_alloc_snapshot_instance(file->tr);

if (ret < 0)
return ret;
if (tracing_alloc_snapshot_instance(file->tr) != 0)
return 0;

return register_trigger(glob, ops, data, file);
}
@@ -342,7 +342,7 @@ static int parse_probe_vars(char *arg, const struct fetch_type *t,
f->fn = t->fetch[FETCH_MTD_retval];
else
ret = -EINVAL;
} else if (strncmp(arg, "stack", 5) == 0) {
} else if (str_has_prefix(arg, "stack")) {
if (arg[5] == '\0') {
if (strcmp(t->name, DEFAULT_FETCH_TYPE_STR))
return -EINVAL;
@@ -453,7 +453,7 @@ static char stack_trace_filter_buf[COMMAND_LINE_SIZE+1] __initdata;

static __init int enable_stacktrace(char *str)
{
if (strncmp(str, "_filter=", 8) == 0)
if (str_has_prefix(str, "_filter="))
strncpy(stack_trace_filter_buf, str+8, COMMAND_LINE_SIZE);

stack_tracer_enabled = 1;
@@ -242,7 +242,11 @@ static int ddebug_tokenize(char *buf, char *words[], int maxwords)
} else {
for (end = buf; *end && !isspace(*end); end++)
;
BUG_ON(end == buf);
if (end == buf) {
pr_err("parse err after word:%d=%s\n", nwords,
nwords ? words[nwords - 1] : "<none>");
return -EINVAL;
}
}

/* `buf' is start of word, `end' is one past its end */
@@ -255,10 +255,10 @@ depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
/*
* Zero out zone modifiers, as we don't have specific zone
* requirements. Keep the flags related to allocation in atomic
* contexts and I/O.
* contexts, I/O, nolockdep.
*/
alloc_flags &= ~GFP_ZONEMASK;
alloc_flags &= (GFP_ATOMIC | GFP_KERNEL);
alloc_flags &= (GFP_ATOMIC | GFP_KERNEL | __GFP_NOLOCKDEP);
alloc_flags |= __GFP_NOWARN;
page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
if (page)
@@ -4198,7 +4198,7 @@ void batadv_tt_local_resize_to_mtu(struct net_device *soft_iface)

spin_lock_bh(&bat_priv->tt.commit_lock);

while (true) {
while (timeout) {
table_size = batadv_tt_local_table_transmit_size(bat_priv);
if (packet_size_max >= table_size)
break;
Some files were not shown because too many files have changed in this diff.