Merge 4.19.67 into android-4.19-q
Changes in 4.19.67
	iio: cros_ec_accel_legacy: Fix incorrect channel setting
	iio: adc: max9611: Fix misuse of GENMASK macro
	staging: gasket: apex: fix copy-paste typo
	staging: android: ion: Bail out upon SIGKILL when allocating memory.
	crypto: ccp - Fix oops by properly managing allocated structures
	crypto: ccp - Add support for valid authsize values less than 16
	crypto: ccp - Ignore tag length when decrypting GCM ciphertext
	usb: usbfs: fix double-free of usb memory upon submiturb error
	usb: iowarrior: fix deadlock on disconnect
	sound: fix a memory leak bug
	mmc: cavium: Set the correct dma max segment size for mmc_host
	mmc: cavium: Add the missing dma unmap when the dma has finished.
	loop: set PF_MEMALLOC_NOIO for the worker thread
	Input: usbtouchscreen - initialize PM mutex before using it
	Input: elantech - enable SMBus on new (2018+) systems
	Input: synaptics - enable RMI mode for HP Spectre X360
	x86/mm: Check for pfn instead of page in vmalloc_sync_one()
	x86/mm: Sync also unmappings in vmalloc_sync_all()
	mm/vmalloc: Sync unmappings in __purge_vmap_area_lazy()
	perf annotate: Fix s390 gap between kernel end and module start
	perf db-export: Fix thread__exec_comm()
	perf record: Fix module size on s390
	x86/purgatory: Use CFLAGS_REMOVE rather than reset KBUILD_CFLAGS
	gfs2: gfs2_walk_metadata fix
	usb: host: xhci-rcar: Fix timeout in xhci_suspend()
	usb: yurex: Fix use-after-free in yurex_delete
	usb: typec: tcpm: free log buf memory when remove debug file
	usb: typec: tcpm: remove tcpm dir if no children
	usb: typec: tcpm: Add NULL check before dereferencing config
	usb: typec: tcpm: Ignore unsupported/unknown alternate mode requests
	can: rcar_canfd: fix possible IRQ storm on high load
	can: peak_usb: fix potential double kfree_skb()
	netfilter: nfnetlink: avoid deadlock due to synchronous request_module
	vfio-ccw: Set pa_nr to 0 if memory allocation fails for pa_iova_pfn
	netfilter: Fix rpfilter dropping vrf packets by mistake
	netfilter: conntrack: always store window size un-scaled
	netfilter: nft_hash: fix symhash with modulus one
	scripts/sphinx-pre-install: fix script for RHEL/CentOS
	drm/amd/display: Wait for backlight programming completion in set backlight level
	drm/amd/display: use encoder's engine id to find matched free audio device
	drm/amd/display: Fix dc_create failure handling and 666 color depths
	drm/amd/display: Only enable audio if speaker allocation exists
	drm/amd/display: Increase size of audios array
	iscsi_ibft: make ISCSI_IBFT depend on ACPI instead of ISCSI_IBFT_FIND
	nl80211: fix NL80211_HE_MAX_CAPABILITY_LEN
	mac80211: don't warn about CW params when not using them
	allocate_flower_entry: should check for null deref
	hwmon: (nct6775) Fix register address and added missed tolerance for nct6106
	drm: silence variable 'conn' set but not used
	cpufreq/pasemi: fix use-after-free in pas_cpufreq_cpu_init()
	s390/qdio: add sanity checks to the fast-requeue path
	ALSA: compress: Fix regression on compressed capture streams
	ALSA: compress: Prevent bypasses of set_params
	ALSA: compress: Don't allow partial drain operations on capture streams
	ALSA: compress: Be more restrictive about when a drain is allowed
	perf tools: Fix proper buffer size for feature processing
	perf probe: Avoid calling freeing routine multiple times for same pointer
	drbd: dynamically allocate shash descriptor
	ACPI/IORT: Fix off-by-one check in iort_dev_find_its_id()
	nvme: fix multipath crash when ANA is deactivated
	ARM: davinci: fix sleep.S build error on ARMv4
	ARM: dts: bcm: bcm47094: add missing #cells for mdio-bus-mux
	scsi: megaraid_sas: fix panic on loading firmware crashdump
	scsi: ibmvfc: fix WARN_ON during event pool release
	scsi: scsi_dh_alua: always use a 2 second delay before retrying RTPG
	test_firmware: fix a memory leak bug
	tty/ldsem, locking/rwsem: Add missing ACQUIRE to read_failed sleep loop
	perf/core: Fix creating kernel counters for PMUs that override event->cpu
	s390/dma: provide proper ARCH_ZONE_DMA_BITS value
	HID: sony: Fix race condition between rumble and device remove.
	x86/purgatory: Do not use __builtin_memcpy and __builtin_memset
	ALSA: usb-audio: fix a memory leak bug
	can: peak_usb: pcan_usb_pro: Fix info-leaks to USB devices
	can: peak_usb: pcan_usb_fd: Fix info-leaks to USB devices
	hwmon: (nct7802) Fix wrong detection of in4 presence
	drm/i915: Fix wrong escape clock divisor init for GLK
	ALSA: firewire: fix a memory leak bug
	ALSA: hiface: fix multiple memory leak bugs
	ALSA: hda - Don't override global PCM hw info flag
	ALSA: hda - Workaround for crackled sound on AMD controller (1022:1457)
	mac80211: don't WARN on short WMM parameters from AP
	dax: dax_layout_busy_page() should not unmap cow pages
	SMB3: Fix deadlock in validate negotiate hits reconnect
	smb3: send CAP_DFS capability during session setup
	NFSv4: Fix an Oops in nfs4_do_setattr
	KVM: Fix leak vCPU's VMCS value into other pCPU
	mwifiex: fix 802.11n/WPA detection
	iwlwifi: don't unmap as page memory that was mapped as single
	iwlwifi: mvm: fix an out-of-bound access
	iwlwifi: mvm: don't send GEO_TX_POWER_LIMIT on version < 41
	iwlwifi: mvm: fix version check for GEO_TX_POWER_LIMIT support
	Linux 4.19.67

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I801f14d173819204ed7a4180554b92f61add2df9
 Makefile | 2 +-

--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 66
+SUBLEVEL = 67
 EXTRAVERSION =
 NAME = "People's Front"
 
@@ -125,6 +125,9 @@
 	};
 
 	mdio-bus-mux {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
 		/* BIT(9) = 1 => external mdio */
 		mdio_ext: mdio@200 {
 			reg = <0x200>;
@@ -37,6 +37,7 @@
 #define DEEPSLEEP_SLEEPENABLE_BIT	BIT(31)
 
 	.text
+	.arch	armv5te
 /*
  * Move DaVinci into deep sleep state
  *
@@ -61,6 +61,11 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
 	return !!(v->arch.pending_exceptions) || kvm_request_pending(v);
 }
 
+bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
+{
+	return kvm_arch_vcpu_runnable(vcpu);
+}
+
 bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 {
 	return false;
@@ -176,6 +176,8 @@ static inline int devmem_is_allowed(unsigned long pfn)
 #define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
 				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
 
+#define ARCH_ZONE_DMA_BITS	31
+
 #include <asm-generic/memory_model.h>
 #include <asm-generic/getorder.h>
 
@@ -34,6 +34,14 @@ int memcmp(const void *s1, const void *s2, size_t len)
 	return diff;
 }
 
+/*
+ * Clang may lower `memcmp == 0` to `bcmp == 0`.
+ */
+int bcmp(const void *s1, const void *s2, size_t len)
+{
+	return memcmp(s1, s2, len);
+}
+
 int strcmp(const char *str1, const char *str2)
 {
 	const unsigned char *s1 = (const unsigned char *)str1;
@@ -1113,6 +1113,7 @@ struct kvm_x86_ops {
 	int (*update_pi_irte)(struct kvm *kvm, unsigned int host_irq,
 			      uint32_t guest_irq, bool set);
 	void (*apicv_post_state_restore)(struct kvm_vcpu *vcpu);
+	bool (*dy_apicv_has_pending_interrupt)(struct kvm_vcpu *vcpu);
 
 	int (*set_hv_timer)(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc);
 	void (*cancel_hv_timer)(struct kvm_vcpu *vcpu);
@@ -5146,6 +5146,11 @@ static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
 		kvm_vcpu_wake_up(vcpu);
 }
 
+static bool svm_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
 static void svm_ir_list_del(struct vcpu_svm *svm, struct amd_iommu_pi_data *pi)
 {
 	unsigned long flags;
@@ -7203,6 +7208,7 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 	.pmu_ops = &amd_pmu_ops,
 	.deliver_posted_interrupt = svm_deliver_avic_intr,
+	.dy_apicv_has_pending_interrupt = svm_dy_apicv_has_pending_interrupt,
 	.update_pi_irte = svm_update_pi_irte,
 	.setup_mce = svm_setup_mce,
 
@@ -10411,6 +10411,11 @@ static u8 vmx_has_apicv_interrupt(struct kvm_vcpu *vcpu)
 	return ((rvi & 0xf0) > (vppr & 0xf0));
 }
 
+static bool vmx_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
+{
+	return pi_test_on(vcpu_to_pi_desc(vcpu));
+}
+
 static void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 {
 	if (!kvm_vcpu_apicv_active(vcpu))
@@ -14387,6 +14392,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
 	.sync_pir_to_irr = vmx_sync_pir_to_irr,
 	.deliver_posted_interrupt = vmx_deliver_posted_interrupt,
+	.dy_apicv_has_pending_interrupt = vmx_dy_apicv_has_pending_interrupt,
 
 	.set_tss_addr = vmx_set_tss_addr,
 	.set_identity_map_addr = vmx_set_identity_map_addr,
@@ -9336,6 +9336,22 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 	return kvm_vcpu_running(vcpu) || kvm_vcpu_has_events(vcpu);
 }
 
+bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
+{
+	if (READ_ONCE(vcpu->arch.pv.pv_unhalted))
+		return true;
+
+	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
+		kvm_test_request(KVM_REQ_SMI, vcpu) ||
+		kvm_test_request(KVM_REQ_EVENT, vcpu))
+		return true;
+
+	if (vcpu->arch.apicv_active && kvm_x86_ops->dy_apicv_has_pending_interrupt(vcpu))
+		return true;
+
+	return false;
+}
+
 bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.preempted_in_kernel;
@@ -261,13 +261,14 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
 
 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
+
+	if (pmd_present(*pmd) != pmd_present(*pmd_k))
+		set_pmd(pmd, *pmd_k);
+
 	if (!pmd_present(*pmd_k))
 		return NULL;
-
-	if (!pmd_present(*pmd))
-		set_pmd(pmd, *pmd_k);
 	else
-		BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k));
+		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
 
 	return pmd_k;
 }
@@ -287,17 +288,13 @@ void vmalloc_sync_all(void)
 		spin_lock(&pgd_lock);
 		list_for_each_entry(page, &pgd_list, lru) {
 			spinlock_t *pgt_lock;
-			pmd_t *ret;
 
 			/* the pgt_lock only for Xen */
 			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
 
 			spin_lock(pgt_lock);
-			ret = vmalloc_sync_one(page_address(page), address);
+			vmalloc_sync_one(page_address(page), address);
 			spin_unlock(pgt_lock);
-
-			if (!ret)
-				break;
 		}
 		spin_unlock(&pgd_lock);
 	}
@@ -6,6 +6,9 @@ purgatory-y := purgatory.o stack.o setup-x86_$(BITS).o sha256.o entry64.o string
|
||||
targets += $(purgatory-y)
|
||||
PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y))
|
||||
|
||||
$(obj)/string.o: $(srctree)/arch/x86/boot/compressed/string.c FORCE
|
||||
$(call if_changed_rule,cc_o_c)
|
||||
|
||||
$(obj)/sha256.o: $(srctree)/lib/sha256.c FORCE
|
||||
$(call if_changed_rule,cc_o_c)
|
||||
|
||||
@@ -17,11 +20,34 @@ KCOV_INSTRUMENT := n
|
||||
|
||||
# Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That
|
||||
# in turn leaves some undefined symbols like __fentry__ in purgatory and not
|
||||
# sure how to relocate those. Like kexec-tools, use custom flags.
|
||||
# sure how to relocate those.
|
||||
ifdef CONFIG_FUNCTION_TRACER
|
||||
CFLAGS_REMOVE_sha256.o += $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_REMOVE_purgatory.o += $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_REMOVE_string.o += $(CC_FLAGS_FTRACE)
|
||||
CFLAGS_REMOVE_kexec-purgatory.o += $(CC_FLAGS_FTRACE)
|
||||
endif
|
||||
|
||||
KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes -fno-zero-initialized-in-bss -fno-builtin -ffreestanding -c -Os -mcmodel=large
|
||||
KBUILD_CFLAGS += -m$(BITS)
|
||||
KBUILD_CFLAGS += $(call cc-option,-fno-PIE)
|
||||
ifdef CONFIG_STACKPROTECTOR
|
||||
CFLAGS_REMOVE_sha256.o += -fstack-protector
|
||||
CFLAGS_REMOVE_purgatory.o += -fstack-protector
|
||||
CFLAGS_REMOVE_string.o += -fstack-protector
|
||||
CFLAGS_REMOVE_kexec-purgatory.o += -fstack-protector
|
||||
endif
|
||||
|
||||
ifdef CONFIG_STACKPROTECTOR_STRONG
|
||||
CFLAGS_REMOVE_sha256.o += -fstack-protector-strong
|
||||
CFLAGS_REMOVE_purgatory.o += -fstack-protector-strong
|
||||
CFLAGS_REMOVE_string.o += -fstack-protector-strong
|
||||
CFLAGS_REMOVE_kexec-purgatory.o += -fstack-protector-strong
|
||||
endif
|
||||
|
||||
ifdef CONFIG_RETPOLINE
|
||||
CFLAGS_REMOVE_sha256.o += $(RETPOLINE_CFLAGS)
|
||||
CFLAGS_REMOVE_purgatory.o += $(RETPOLINE_CFLAGS)
|
||||
CFLAGS_REMOVE_string.o += $(RETPOLINE_CFLAGS)
|
||||
CFLAGS_REMOVE_kexec-purgatory.o += $(RETPOLINE_CFLAGS)
|
||||
endif
|
||||
|
||||
$(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
|
||||
$(call if_changed,ld)
|
||||
|
||||
@@ -70,3 +70,9 @@ void purgatory(void)
|
||||
}
|
||||
copy_backup_region();
|
||||
}
|
||||
|
||||
/*
|
||||
* Defined in order to reuse memcpy() and memset() from
|
||||
* arch/x86/boot/compressed/string.c
|
||||
*/
|
||||
void warn(const char *msg) {}
|
||||
|
||||
@@ -1,25 +0,0 @@
|
||||
/*
|
||||
* Simple string functions.
|
||||
*
|
||||
* Copyright (C) 2014 Red Hat Inc.
|
||||
*
|
||||
* Author:
|
||||
* Vivek Goyal <vgoyal@redhat.com>
|
||||
*
|
||||
* This source code is licensed under the GNU General Public License,
|
||||
* Version 2. See the file COPYING for more details.
|
||||
*/
|
||||
|
||||
#include <linux/types.h>
|
||||
|
||||
#include "../boot/string.c"
|
||||
|
||||
void *memcpy(void *dst, const void *src, size_t len)
|
||||
{
|
||||
return __builtin_memcpy(dst, src, len);
|
||||
}
|
||||
|
||||
void *memset(void *dst, int c, size_t len)
|
||||
{
|
||||
return __builtin_memset(dst, c, len);
|
||||
}
|
||||
@@ -616,8 +616,8 @@ static int iort_dev_find_its_id(struct device *dev, u32 req_id,
|
||||
|
||||
/* Move to ITS specific data */
|
||||
its = (struct acpi_iort_its_group *)node->node_data;
|
||||
if (idx > its->its_count) {
|
||||
dev_err(dev, "requested ITS ID index [%d] is greater than available [%d]\n",
|
||||
if (idx >= its->its_count) {
|
||||
dev_err(dev, "requested ITS ID index [%d] overruns ITS entries [%d]\n",
|
||||
idx, its->its_count);
|
||||
return -ENXIO;
|
||||
}
|
||||
|
||||
@@ -5240,7 +5240,7 @@ static int drbd_do_auth(struct drbd_connection *connection)
|
||||
unsigned int key_len;
|
||||
char secret[SHARED_SECRET_MAX]; /* 64 byte */
|
||||
unsigned int resp_size;
|
||||
SHASH_DESC_ON_STACK(desc, connection->cram_hmac_tfm);
|
||||
struct shash_desc *desc;
|
||||
struct packet_info pi;
|
||||
struct net_conf *nc;
|
||||
int err, rv;
|
||||
@@ -5253,6 +5253,13 @@ static int drbd_do_auth(struct drbd_connection *connection)
|
||||
memcpy(secret, nc->shared_secret, key_len);
|
||||
rcu_read_unlock();
|
||||
|
||||
desc = kmalloc(sizeof(struct shash_desc) +
|
||||
crypto_shash_descsize(connection->cram_hmac_tfm),
|
||||
GFP_KERNEL);
|
||||
if (!desc) {
|
||||
rv = -1;
|
||||
goto fail;
|
||||
}
|
||||
desc->tfm = connection->cram_hmac_tfm;
|
||||
desc->flags = 0;
|
||||
|
||||
@@ -5395,7 +5402,10 @@ static int drbd_do_auth(struct drbd_connection *connection)
|
||||
kfree(peers_ch);
|
||||
kfree(response);
|
||||
kfree(right_response);
|
||||
shash_desc_zero(desc);
|
||||
if (desc) {
|
||||
shash_desc_zero(desc);
|
||||
kfree(desc);
|
||||
}
|
||||
|
||||
return rv;
|
||||
}
|
||||
|
||||
@@ -886,7 +886,7 @@ static void loop_unprepare_queue(struct loop_device *lo)
|
||||
|
||||
static int loop_kthread_worker_fn(void *worker_ptr)
|
||||
{
|
||||
current->flags |= PF_LESS_THROTTLE;
|
||||
current->flags |= PF_LESS_THROTTLE | PF_MEMALLOC_NOIO;
|
||||
return kthread_worker_fn(worker_ptr);
|
||||
}
|
||||
|
||||
|
||||
@@ -145,11 +145,19 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
||||
int err = -ENODEV;
|
||||
|
||||
cpu = of_get_cpu_node(policy->cpu, NULL);
|
||||
|
||||
of_node_put(cpu);
|
||||
if (!cpu)
|
||||
goto out;
|
||||
|
||||
max_freqp = of_get_property(cpu, "clock-frequency", NULL);
|
||||
of_node_put(cpu);
|
||||
if (!max_freqp) {
|
||||
err = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* we need the freq in kHz */
|
||||
max_freq = *max_freqp / 1000;
|
||||
|
||||
dn = of_find_compatible_node(NULL, NULL, "1682m-sdc");
|
||||
if (!dn)
|
||||
dn = of_find_compatible_node(NULL, NULL,
|
||||
@@ -185,16 +193,6 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
||||
}
|
||||
|
||||
pr_debug("init cpufreq on CPU %d\n", policy->cpu);
|
||||
|
||||
max_freqp = of_get_property(cpu, "clock-frequency", NULL);
|
||||
if (!max_freqp) {
|
||||
err = -EINVAL;
|
||||
goto out_unmap_sdcpwr;
|
||||
}
|
||||
|
||||
/* we need the freq in kHz */
|
||||
max_freq = *max_freqp / 1000;
|
||||
|
||||
pr_debug("max clock-frequency is at %u kHz\n", max_freq);
|
||||
pr_debug("initializing frequency table\n");
|
||||
|
||||
@@ -212,9 +210,6 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
||||
|
||||
return cpufreq_generic_init(policy, pas_freqs, get_gizmo_latency());
|
||||
|
||||
out_unmap_sdcpwr:
|
||||
iounmap(sdcpwr_mapbase);
|
||||
|
||||
out_unmap_sdcasr:
|
||||
iounmap(sdcasr_mapbase);
|
||||
out:
|
||||
|
||||
@@ -61,6 +61,19 @@ static int ccp_aes_gcm_setkey(struct crypto_aead *tfm, const u8 *key,
|
||||
static int ccp_aes_gcm_setauthsize(struct crypto_aead *tfm,
|
||||
unsigned int authsize)
|
||||
{
|
||||
switch (authsize) {
|
||||
case 16:
|
||||
case 15:
|
||||
case 14:
|
||||
case 13:
|
||||
case 12:
|
||||
case 8:
|
||||
case 4:
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -107,6 +120,7 @@ static int ccp_aes_gcm_crypt(struct aead_request *req, bool encrypt)
|
||||
memset(&rctx->cmd, 0, sizeof(rctx->cmd));
|
||||
INIT_LIST_HEAD(&rctx->cmd.entry);
|
||||
rctx->cmd.engine = CCP_ENGINE_AES;
|
||||
rctx->cmd.u.aes.authsize = crypto_aead_authsize(tfm);
|
||||
rctx->cmd.u.aes.type = ctx->u.aes.type;
|
||||
rctx->cmd.u.aes.mode = ctx->u.aes.mode;
|
||||
rctx->cmd.u.aes.action = encrypt;
|
||||
|
||||
@@ -625,6 +625,7 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
|
||||
|
||||
unsigned long long *final;
|
||||
unsigned int dm_offset;
|
||||
unsigned int authsize;
|
||||
unsigned int jobid;
|
||||
unsigned int ilen;
|
||||
bool in_place = true; /* Default value */
|
||||
@@ -646,6 +647,21 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
|
||||
if (!aes->key) /* Gotta have a key SGL */
|
||||
return -EINVAL;
|
||||
|
||||
/* Zero defaults to 16 bytes, the maximum size */
|
||||
authsize = aes->authsize ? aes->authsize : AES_BLOCK_SIZE;
|
||||
switch (authsize) {
|
||||
case 16:
|
||||
case 15:
|
||||
case 14:
|
||||
case 13:
|
||||
case 12:
|
||||
case 8:
|
||||
case 4:
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* First, decompose the source buffer into AAD & PT,
|
||||
* and the destination buffer into AAD, CT & tag, or
|
||||
* the input into CT & tag.
|
||||
@@ -660,7 +676,7 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
|
||||
p_tag = scatterwalk_ffwd(sg_tag, p_outp, ilen);
|
||||
} else {
|
||||
/* Input length for decryption includes tag */
|
||||
ilen = aes->src_len - AES_BLOCK_SIZE;
|
||||
ilen = aes->src_len - authsize;
|
||||
p_tag = scatterwalk_ffwd(sg_tag, p_inp, ilen);
|
||||
}
|
||||
|
||||
@@ -769,8 +785,7 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
|
||||
while (src.sg_wa.bytes_left) {
|
||||
ccp_prepare_data(&src, &dst, &op, AES_BLOCK_SIZE, true);
|
||||
if (!src.sg_wa.bytes_left) {
|
||||
unsigned int nbytes = aes->src_len
|
||||
% AES_BLOCK_SIZE;
|
||||
unsigned int nbytes = ilen % AES_BLOCK_SIZE;
|
||||
|
||||
if (nbytes) {
|
||||
op.eom = 1;
|
||||
@@ -842,19 +857,19 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
|
||||
|
||||
if (aes->action == CCP_AES_ACTION_ENCRYPT) {
|
||||
/* Put the ciphered tag after the ciphertext. */
|
||||
ccp_get_dm_area(&final_wa, 0, p_tag, 0, AES_BLOCK_SIZE);
|
||||
ccp_get_dm_area(&final_wa, 0, p_tag, 0, authsize);
|
||||
} else {
|
||||
/* Does this ciphered tag match the input? */
|
||||
ret = ccp_init_dm_workarea(&tag, cmd_q, AES_BLOCK_SIZE,
|
||||
ret = ccp_init_dm_workarea(&tag, cmd_q, authsize,
|
||||
DMA_BIDIRECTIONAL);
|
||||
if (ret)
|
||||
goto e_tag;
|
||||
ret = ccp_set_dm_area(&tag, 0, p_tag, 0, AES_BLOCK_SIZE);
|
||||
ret = ccp_set_dm_area(&tag, 0, p_tag, 0, authsize);
|
||||
if (ret)
|
||||
goto e_tag;
|
||||
|
||||
ret = crypto_memneq(tag.address, final_wa.address,
|
||||
AES_BLOCK_SIZE) ? -EBADMSG : 0;
|
||||
authsize) ? -EBADMSG : 0;
|
||||
ccp_dm_free(&tag);
|
||||
}
|
||||
|
||||
@@ -862,11 +877,11 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
|
||||
ccp_dm_free(&final_wa);
|
||||
|
||||
e_dst:
|
||||
if (aes->src_len && !in_place)
|
||||
if (ilen > 0 && !in_place)
|
||||
ccp_free_data(&dst, cmd_q);
|
||||
|
||||
e_src:
|
||||
if (aes->src_len)
|
||||
if (ilen > 0)
|
||||
ccp_free_data(&src, cmd_q);
|
||||
|
||||
e_aad:
|
||||
|
||||
@@ -198,7 +198,7 @@ config DMI_SCAN_MACHINE_NON_EFI_FALLBACK
|
||||
|
||||
config ISCSI_IBFT_FIND
|
||||
bool "iSCSI Boot Firmware Table Attributes"
|
||||
depends on X86 && ACPI
|
||||
depends on X86 && ISCSI_IBFT
|
||||
default n
|
||||
help
|
||||
This option enables the kernel to find the region of memory
|
||||
@@ -209,7 +209,8 @@ config ISCSI_IBFT_FIND
|
||||
config ISCSI_IBFT
|
||||
tristate "iSCSI Boot Firmware Table Attributes module"
|
||||
select ISCSI_BOOT_SYSFS
|
||||
depends on ISCSI_IBFT_FIND && SCSI && SCSI_LOWLEVEL
|
||||
select ISCSI_IBFT_FIND if X86
|
||||
depends on ACPI && SCSI && SCSI_LOWLEVEL
|
||||
default n
|
||||
help
|
||||
This option enables support for detection and exposing of iSCSI
|
||||
|
||||
@@ -93,6 +93,10 @@ MODULE_DESCRIPTION("sysfs interface to BIOS iBFT information");
|
||||
MODULE_LICENSE("GPL");
|
||||
MODULE_VERSION(IBFT_ISCSI_VERSION);
|
||||
|
||||
#ifndef CONFIG_ISCSI_IBFT_FIND
|
||||
struct acpi_table_ibft *ibft_addr;
|
||||
#endif
|
||||
|
||||
struct ibft_hdr {
|
||||
u8 id;
|
||||
u8 version;
|
||||
|
||||
@@ -462,8 +462,10 @@ void dc_link_set_test_pattern(struct dc_link *link,
|
||||
|
||||
static void destruct(struct dc *dc)
|
||||
{
|
||||
dc_release_state(dc->current_state);
|
||||
dc->current_state = NULL;
|
||||
if (dc->current_state) {
|
||||
dc_release_state(dc->current_state);
|
||||
dc->current_state = NULL;
|
||||
}
|
||||
|
||||
destroy_links(dc);
|
||||
|
||||
|
||||
@@ -222,7 +222,7 @@ bool resource_construct(
|
||||
* PORT_CONNECTIVITY == 1 (as instructed by HW team).
|
||||
*/
|
||||
update_num_audio(&straps, &num_audio, &pool->audio_support);
|
||||
for (i = 0; i < pool->pipe_count && i < num_audio; i++) {
|
||||
for (i = 0; i < caps->num_audio; i++) {
|
||||
struct audio *aud = create_funcs->create_audio(ctx, i);
|
||||
|
||||
if (aud == NULL) {
|
||||
@@ -1713,6 +1713,12 @@ static struct audio *find_first_free_audio(
|
||||
return pool->audios[i];
|
||||
}
|
||||
}
|
||||
|
||||
/* use engine id to find free audio */
|
||||
if ((id < pool->audio_count) && (res_ctx->is_audio_acquired[id] == false)) {
|
||||
return pool->audios[id];
|
||||
}
|
||||
|
||||
/*not found the matching one, first come first serve*/
|
||||
for (i = 0; i < pool->audio_count; i++) {
|
||||
if (res_ctx->is_audio_acquired[i] == false) {
|
||||
@@ -1866,6 +1872,7 @@ static int get_norm_pix_clk(const struct dc_crtc_timing *timing)
|
||||
pix_clk /= 2;
|
||||
if (timing->pixel_encoding != PIXEL_ENCODING_YCBCR422) {
|
||||
switch (timing->display_color_depth) {
|
||||
case COLOR_DEPTH_666:
|
||||
case COLOR_DEPTH_888:
|
||||
normalized_pix_clk = pix_clk;
|
||||
break;
|
||||
@@ -1949,7 +1956,7 @@ enum dc_status resource_map_pool_resources(
|
||||
/* TODO: Add check if ASIC support and EDID audio */
|
||||
if (!stream->sink->converter_disable_audio &&
|
||||
dc_is_audio_capable_signal(pipe_ctx->stream->signal) &&
|
||||
stream->audio_info.mode_count) {
|
||||
stream->audio_info.mode_count && stream->audio_info.flags.all) {
|
||||
pipe_ctx->stream_res.audio = find_first_free_audio(
|
||||
&context->res_ctx, pool, pipe_ctx->stream_res.stream_enc->id);
|
||||
|
||||
|
||||
@@ -242,6 +242,10 @@ static void dmcu_set_backlight_level(
|
||||
s2 |= (level << ATOM_S2_CURRENT_BL_LEVEL_SHIFT);
|
||||
|
||||
REG_WRITE(BIOS_SCRATCH_2, s2);
|
||||
|
||||
/* waitDMCUReadyForCmd */
|
||||
REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT,
|
||||
0, 1, 80000);
|
||||
}
|
||||
|
||||
static void dce_abm_init(struct abm *abm)
|
||||
|
||||
@@ -159,7 +159,7 @@ struct resource_pool {
|
||||
struct clock_source *clock_sources[MAX_CLOCK_SOURCES];
|
||||
unsigned int clk_src_count;
|
||||
|
||||
struct audio *audios[MAX_PIPES];
|
||||
struct audio *audios[MAX_AUDIOS];
|
||||
unsigned int audio_count;
|
||||
struct audio_support audio_support;
|
||||
|
||||
|
||||
@@ -34,6 +34,7 @@
|
||||
* Data types shared between different Virtual HW blocks
|
||||
******************************************************************************/
|
||||
|
||||
#define MAX_AUDIOS 7
|
||||
#define MAX_PIPES 6
|
||||
|
||||
struct gamma_curve {
|
||||
|
||||
@@ -793,7 +793,7 @@ static int atomic_remove_fb(struct drm_framebuffer *fb)
|
||||
struct drm_device *dev = fb->dev;
|
||||
struct drm_atomic_state *state;
|
||||
struct drm_plane *plane;
|
||||
struct drm_connector *conn;
|
||||
struct drm_connector *conn __maybe_unused;
|
||||
struct drm_connector_state *conn_state;
|
||||
int i, ret;
|
||||
unsigned plane_mask;
|
||||
|
||||
@@ -413,8 +413,8 @@ static void glk_dsi_program_esc_clock(struct drm_device *dev,
|
||||
else
|
||||
txesc2_div = 10;
|
||||
|
||||
I915_WRITE(MIPIO_TXESC_CLK_DIV1, txesc1_div & GLK_TX_ESC_CLK_DIV1_MASK);
|
||||
I915_WRITE(MIPIO_TXESC_CLK_DIV2, txesc2_div & GLK_TX_ESC_CLK_DIV2_MASK);
|
||||
I915_WRITE(MIPIO_TXESC_CLK_DIV1, (1 << (txesc1_div - 1)) & GLK_TX_ESC_CLK_DIV1_MASK);
|
||||
I915_WRITE(MIPIO_TXESC_CLK_DIV2, (1 << (txesc2_div - 1)) & GLK_TX_ESC_CLK_DIV2_MASK);
|
||||
}
|
||||
|
||||
/* Program BXT Mipi clocks and dividers */
|
||||
|
||||
@@ -587,10 +587,14 @@ static void sony_set_leds(struct sony_sc *sc);
|
||||
static inline void sony_schedule_work(struct sony_sc *sc,
|
||||
enum sony_worker which)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
switch (which) {
|
||||
case SONY_WORKER_STATE:
|
||||
if (!sc->defer_initialization)
|
||||
spin_lock_irqsave(&sc->lock, flags);
|
||||
if (!sc->defer_initialization && sc->state_worker_initialized)
|
||||
schedule_work(&sc->state_worker);
|
||||
spin_unlock_irqrestore(&sc->lock, flags);
|
||||
break;
|
||||
case SONY_WORKER_HOTPLUG:
|
||||
if (sc->hotplug_worker_initialized)
|
||||
@@ -2553,13 +2557,18 @@ static inline void sony_init_output_report(struct sony_sc *sc,
|
||||
|
||||
static inline void sony_cancel_work_sync(struct sony_sc *sc)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
if (sc->hotplug_worker_initialized)
|
||||
cancel_work_sync(&sc->hotplug_worker);
|
||||
if (sc->state_worker_initialized)
|
||||
if (sc->state_worker_initialized) {
|
||||
spin_lock_irqsave(&sc->lock, flags);
|
||||
sc->state_worker_initialized = 0;
|
||||
spin_unlock_irqrestore(&sc->lock, flags);
|
||||
cancel_work_sync(&sc->state_worker);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
static int sony_input_configured(struct hid_device *hdev,
|
||||
struct hid_input *hidinput)
|
||||
{
|
||||
|
||||
@@ -818,7 +818,7 @@ static const u16 NCT6106_REG_TARGET[] = { 0x111, 0x121, 0x131 };
|
||||
static const u16 NCT6106_REG_WEIGHT_TEMP_SEL[] = { 0x168, 0x178, 0x188 };
|
||||
static const u16 NCT6106_REG_WEIGHT_TEMP_STEP[] = { 0x169, 0x179, 0x189 };
|
||||
static const u16 NCT6106_REG_WEIGHT_TEMP_STEP_TOL[] = { 0x16a, 0x17a, 0x18a };
|
||||
static const u16 NCT6106_REG_WEIGHT_DUTY_STEP[] = { 0x16b, 0x17b, 0x17c };
|
||||
static const u16 NCT6106_REG_WEIGHT_DUTY_STEP[] = { 0x16b, 0x17b, 0x18b };
|
||||
static const u16 NCT6106_REG_WEIGHT_TEMP_BASE[] = { 0x16c, 0x17c, 0x18c };
|
||||
static const u16 NCT6106_REG_WEIGHT_DUTY_BASE[] = { 0x16d, 0x17d, 0x18d };
|
||||
|
||||
@@ -3673,6 +3673,7 @@ static int nct6775_probe(struct platform_device *pdev)
|
||||
data->REG_FAN_TIME[0] = NCT6106_REG_FAN_STOP_TIME;
|
||||
data->REG_FAN_TIME[1] = NCT6106_REG_FAN_STEP_UP_TIME;
|
||||
data->REG_FAN_TIME[2] = NCT6106_REG_FAN_STEP_DOWN_TIME;
|
||||
data->REG_TOLERANCE_H = NCT6106_REG_TOLERANCE_H;
|
||||
data->REG_PWM[0] = NCT6106_REG_PWM;
|
||||
data->REG_PWM[1] = NCT6106_REG_FAN_START_OUTPUT;
|
||||
data->REG_PWM[2] = NCT6106_REG_FAN_STOP_OUTPUT;
|
||||
|
||||
@@ -768,7 +768,7 @@ static struct attribute *nct7802_in_attrs[] = {
|
||||
&sensor_dev_attr_in3_alarm.dev_attr.attr,
|
||||
&sensor_dev_attr_in3_beep.dev_attr.attr,
|
||||
|
||||
&sensor_dev_attr_in4_input.dev_attr.attr, /* 17 */
|
||||
&sensor_dev_attr_in4_input.dev_attr.attr, /* 16 */
|
||||
&sensor_dev_attr_in4_min.dev_attr.attr,
|
||||
&sensor_dev_attr_in4_max.dev_attr.attr,
|
||||
&sensor_dev_attr_in4_alarm.dev_attr.attr,
|
||||
@@ -794,9 +794,9 @@ static umode_t nct7802_in_is_visible(struct kobject *kobj,
|
||||
|
||||
if (index >= 6 && index < 11 && (reg & 0x03) != 0x03) /* VSEN1 */
|
||||
return 0;
|
||||
if (index >= 11 && index < 17 && (reg & 0x0c) != 0x0c) /* VSEN2 */
|
||||
if (index >= 11 && index < 16 && (reg & 0x0c) != 0x0c) /* VSEN2 */
|
||||
return 0;
|
||||
if (index >= 17 && (reg & 0x30) != 0x30) /* VSEN3 */
|
||||
if (index >= 16 && (reg & 0x30) != 0x30) /* VSEN3 */
|
||||
return 0;
|
||||
|
||||
return attr->mode;
|
||||
|
||||
@@ -328,7 +328,6 @@ static const struct iio_chan_spec_ext_info cros_ec_accel_legacy_ext_info[] = {
|
||||
.modified = 1, \
|
||||
.info_mask_separate = \
|
||||
BIT(IIO_CHAN_INFO_RAW) | \
|
||||
BIT(IIO_CHAN_INFO_SCALE) | \
|
||||
BIT(IIO_CHAN_INFO_CALIBBIAS), \
|
||||
.info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SCALE), \
|
||||
.ext_info = cros_ec_accel_legacy_ext_info, \
|
||||
|
||||
@@ -86,7 +86,7 @@
|
||||
#define MAX9611_TEMP_MAX_POS 0x7f80
|
||||
#define MAX9611_TEMP_MAX_NEG 0xff80
|
||||
#define MAX9611_TEMP_MIN_NEG 0xd980
|
||||
#define MAX9611_TEMP_MASK GENMASK(7, 15)
|
||||
#define MAX9611_TEMP_MASK GENMASK(15, 7)
|
||||
#define MAX9611_TEMP_SHIFT 0x07
|
||||
#define MAX9611_TEMP_RAW(_r) ((_r) >> MAX9611_TEMP_SHIFT)
|
||||
#define MAX9611_TEMP_SCALE_NUM 1000000
|
||||
|
||||
@@ -1810,6 +1810,30 @@ static int elantech_create_smbus(struct psmouse *psmouse,
|
||||
leave_breadcrumbs);
|
||||
}
|
||||
|
||||
static bool elantech_use_host_notify(struct psmouse *psmouse,
|
||||
struct elantech_device_info *info)
|
||||
{
|
||||
if (ETP_NEW_IC_SMBUS_HOST_NOTIFY(info->fw_version))
|
||||
return true;
|
||||
|
||||
switch (info->bus) {
|
||||
case ETP_BUS_PS2_ONLY:
|
||||
/* expected case */
|
||||
break;
|
||||
case ETP_BUS_SMB_HST_NTFY_ONLY:
|
||||
case ETP_BUS_PS2_SMB_HST_NTFY:
|
||||
/* SMbus implementation is stable since 2018 */
|
||||
if (dmi_get_bios_year() >= 2018)
|
||||
return true;
|
||||
default:
|
||||
psmouse_dbg(psmouse,
|
||||
"Ignoring SMBus bus provider %d\n", info->bus);
|
||||
break;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* elantech_setup_smbus - called once the PS/2 devices are enumerated
|
||||
* and decides to instantiate a SMBus InterTouch device.
|
||||
@@ -1829,7 +1853,7 @@ static int elantech_setup_smbus(struct psmouse *psmouse,
|
||||
* i2c_blacklist_pnp_ids.
|
||||
* Old ICs are up to the user to decide.
|
||||
*/
|
||||
if (!ETP_NEW_IC_SMBUS_HOST_NOTIFY(info->fw_version) ||
|
||||
if (!elantech_use_host_notify(psmouse, info) ||
|
||||
psmouse_matches_pnp_id(psmouse, i2c_blacklist_pnp_ids))
|
||||
return -ENXIO;
|
||||
}
|
||||
@@ -1849,34 +1873,6 @@ static int elantech_setup_smbus(struct psmouse *psmouse,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static bool elantech_use_host_notify(struct psmouse *psmouse,
|
||||
struct elantech_device_info *info)
|
||||
{
|
||||
if (ETP_NEW_IC_SMBUS_HOST_NOTIFY(info->fw_version))
|
||||
return true;
|
||||
|
||||
switch (info->bus) {
|
||||
case ETP_BUS_PS2_ONLY:
|
||||
/* expected case */
|
||||
break;
|
||||
case ETP_BUS_SMB_ALERT_ONLY:
|
||||
/* fall-through */
|
||||
case ETP_BUS_PS2_SMB_ALERT:
|
||||
psmouse_dbg(psmouse, "Ignoring SMBus provider through alert protocol.\n");
|
||||
break;
|
||||
case ETP_BUS_SMB_HST_NTFY_ONLY:
|
||||
/* fall-through */
|
||||
case ETP_BUS_PS2_SMB_HST_NTFY:
|
||||
return true;
|
||||
default:
|
||||
psmouse_dbg(psmouse,
|
||||
"Ignoring SMBus bus provider %d.\n",
|
||||
info->bus);
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
int elantech_init_smbus(struct psmouse *psmouse)
|
||||
{
|
||||
struct elantech_device_info info;
|
||||
|
||||
@@ -185,6 +185,7 @@ static const char * const smbus_pnp_ids[] = {
|
||||
"LEN2055", /* E580 */
|
||||
"SYN3052", /* HP EliteBook 840 G4 */
|
||||
"SYN3221", /* HP 15-ay000 */
|
||||
"SYN323d", /* HP Spectre X360 13-w013dx */
|
||||
NULL
|
||||
};
|
||||
|
||||
|
||||
@@ -1672,6 +1672,8 @@ static int usbtouch_probe(struct usb_interface *intf,
|
||||
if (!usbtouch || !input_dev)
|
||||
goto out_free;
|
||||
|
||||
mutex_init(&usbtouch->pm_mutex);
|
||||
|
||||
type = &usbtouch_dev_info[id->driver_info];
|
||||
usbtouch->type = type;
|
||||
if (!type->process_pkt)
|
||||
|
||||
@@ -374,6 +374,7 @@ static int finish_dma_single(struct cvm_mmc_host *host, struct mmc_data *data)
|
||||
{
|
||||
data->bytes_xfered = data->blocks * data->blksz;
|
||||
data->error = 0;
|
||||
dma_unmap_sg(host->dev, data->sg, data->sg_len, get_dma_dir(data));
|
||||
return 1;
|
||||
}
|
||||
|
||||
@@ -1046,7 +1047,8 @@ int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host)
|
||||
mmc->max_segs = 1;
|
||||
|
||||
/* DMA size field can address up to 8 MB */
|
||||
mmc->max_seg_size = 8 * 1024 * 1024;
|
||||
mmc->max_seg_size = min_t(unsigned int, 8 * 1024 * 1024,
|
||||
dma_get_max_seg_size(host->dev));
|
||||
mmc->max_req_size = mmc->max_seg_size;
|
||||
/* External DMA is in 512 byte blocks */
|
||||
mmc->max_blk_size = 512;
|
||||
|
||||
@@ -1512,10 +1512,11 @@ static int rcar_canfd_rx_poll(struct napi_struct *napi, int quota)
|
||||
|
||||
/* All packets processed */
|
||||
if (num_pkts < quota) {
|
||||
napi_complete_done(napi, num_pkts);
|
||||
/* Enable Rx FIFO interrupts */
|
||||
rcar_canfd_set_bit(priv->base, RCANFD_RFCC(ridx),
|
||||
RCANFD_RFCC_RFIE);
|
||||
if (napi_complete_done(napi, num_pkts)) {
|
||||
/* Enable Rx FIFO interrupts */
|
||||
rcar_canfd_set_bit(priv->base, RCANFD_RFCC(ridx),
|
||||
RCANFD_RFCC_RFIE);
|
||||
}
|
||||
}
|
||||
return num_pkts;
|
||||
}
|
||||
|
||||
@@ -576,16 +576,16 @@ static int peak_usb_ndo_stop(struct net_device *netdev)
|
||||
dev->state &= ~PCAN_USB_STATE_STARTED;
|
||||
netif_stop_queue(netdev);
|
||||
|
||||
close_candev(netdev);
|
||||
|
||||
dev->can.state = CAN_STATE_STOPPED;
|
||||
|
||||
/* unlink all pending urbs and free used memory */
|
||||
peak_usb_unlink_all_urbs(dev);
|
||||
|
||||
if (dev->adapter->dev_stop)
|
||||
dev->adapter->dev_stop(dev);
|
||||
|
||||
close_candev(netdev);
|
||||
|
||||
dev->can.state = CAN_STATE_STOPPED;
|
||||
|
||||
/* can set bus off now */
|
||||
if (dev->adapter->dev_set_bus) {
|
||||
int err = dev->adapter->dev_set_bus(dev, 0);
|
||||
|
||||
@@ -849,7 +849,7 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
|
||||
goto err_out;
|
||||
|
||||
/* allocate command buffer once for all for the interface */
|
||||
pdev->cmd_buffer_addr = kmalloc(PCAN_UFD_CMD_BUFFER_SIZE,
|
||||
pdev->cmd_buffer_addr = kzalloc(PCAN_UFD_CMD_BUFFER_SIZE,
|
||||
GFP_KERNEL);
|
||||
if (!pdev->cmd_buffer_addr)
|
||||
goto err_out_1;
|
||||
|
||||
@@ -502,7 +502,7 @@ static int pcan_usb_pro_drv_loaded(struct peak_usb_device *dev, int loaded)
|
||||
u8 *buffer;
|
||||
int err;
|
||||
|
||||
buffer = kmalloc(PCAN_USBPRO_FCT_DRVLD_REQ_LEN, GFP_KERNEL);
|
||||
buffer = kzalloc(PCAN_USBPRO_FCT_DRVLD_REQ_LEN, GFP_KERNEL);
|
||||
if (!buffer)
|
||||
return -ENOMEM;
|
||||
|
||||
|
||||
@@ -67,7 +67,8 @@ static struct ch_tc_pedit_fields pedits[] = {
|
||||
static struct ch_tc_flower_entry *allocate_flower_entry(void)
|
||||
{
|
||||
struct ch_tc_flower_entry *new = kzalloc(sizeof(*new), GFP_KERNEL);
|
||||
spin_lock_init(&new->lock);
|
||||
if (new)
|
||||
spin_lock_init(&new->lock);
|
||||
return new;
|
||||
}
|
||||
|
||||
|
||||
@@ -724,7 +724,7 @@ static int iwl_mvm_sar_get_ewrd_table(struct iwl_mvm *mvm)
|
||||
|
||||
for (i = 0; i < n_profiles; i++) {
|
||||
/* the tables start at element 3 */
|
||||
static int pos = 3;
|
||||
int pos = 3;
|
||||
|
||||
/* The EWRD profiles officially go from 2 to 4, but we
|
||||
* save them in sar_profiles[1-3] (because we don't
|
||||
@@ -836,6 +836,22 @@ int iwl_mvm_sar_select_profile(struct iwl_mvm *mvm, int prof_a, int prof_b)
|
||||
return iwl_mvm_send_cmd_pdu(mvm, REDUCE_TX_POWER_CMD, 0, len, &cmd);
|
||||
}
|
||||
|
||||
static bool iwl_mvm_sar_geo_support(struct iwl_mvm *mvm)
|
||||
{
|
||||
/*
|
||||
* The GEO_TX_POWER_LIMIT command is not supported on earlier
|
||||
* firmware versions. Unfortunately, we don't have a TLV API
|
||||
* flag to rely on, so rely on the major version which is in
|
||||
* the first byte of ucode_ver. This was implemented
|
||||
* initially on version 38 and then backported to 36, 29 and
|
||||
* 17.
|
||||
*/
|
||||
return IWL_UCODE_SERIAL(mvm->fw->ucode_ver) >= 38 ||
|
||||
IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 36 ||
|
||||
IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 29 ||
|
||||
IWL_UCODE_SERIAL(mvm->fw->ucode_ver) == 17;
|
||||
}
|
||||
|
||||
int iwl_mvm_get_sar_geo_profile(struct iwl_mvm *mvm)
|
||||
{
|
||||
struct iwl_geo_tx_power_profiles_resp *resp;
|
||||
@@ -851,6 +867,9 @@ int iwl_mvm_get_sar_geo_profile(struct iwl_mvm *mvm)
|
||||
.data = { &geo_cmd },
|
||||
};
|
||||
|
||||
if (!iwl_mvm_sar_geo_support(mvm))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
ret = iwl_mvm_send_cmd(mvm, &cmd);
|
||||
if (ret) {
|
||||
IWL_ERR(mvm, "Failed to get geographic profile info %d\n", ret);
|
||||
@@ -876,13 +895,7 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
|
||||
int ret, i, j;
|
||||
u16 cmd_wide_id = WIDE_ID(PHY_OPS_GROUP, GEO_TX_POWER_LIMIT);
|
||||
|
||||
/*
|
||||
* This command is not supported on earlier firmware versions.
|
||||
* Unfortunately, we don't have a TLV API flag to rely on, so
|
||||
* rely on the major version which is in the first byte of
|
||||
* ucode_ver.
|
||||
*/
|
||||
if (IWL_UCODE_SERIAL(mvm->fw->ucode_ver) < 41)
|
||||
if (!iwl_mvm_sar_geo_support(mvm))
|
||||
return 0;
|
||||
|
||||
ret = iwl_mvm_sar_get_wgds_table(mvm);
|
||||
|
||||
@@ -403,6 +403,8 @@ static void iwl_pcie_tfd_unmap(struct iwl_trans *trans,
|
||||
DMA_TO_DEVICE);
|
||||
}
|
||||
|
||||
meta->tbs = 0;
|
||||
|
||||
if (trans->cfg->use_tfh) {
|
||||
struct iwl_tfh_tfd *tfd_fh = (void *)tfd;
|
||||
|
||||
|
||||
@@ -124,6 +124,7 @@ enum {
|
||||
|
||||
#define MWIFIEX_MAX_TOTAL_SCAN_TIME (MWIFIEX_TIMER_10S - MWIFIEX_TIMER_1S)
|
||||
|
||||
#define WPA_GTK_OUI_OFFSET 2
|
||||
#define RSN_GTK_OUI_OFFSET 2
|
||||
|
||||
#define MWIFIEX_OUI_NOT_PRESENT 0
|
||||
|
||||
@@ -181,7 +181,8 @@ mwifiex_is_wpa_oui_present(struct mwifiex_bssdescriptor *bss_desc, u32 cipher)
|
||||
u8 ret = MWIFIEX_OUI_NOT_PRESENT;
|
||||
|
||||
if (has_vendor_hdr(bss_desc->bcn_wpa_ie, WLAN_EID_VENDOR_SPECIFIC)) {
|
||||
iebody = (struct ie_body *) bss_desc->bcn_wpa_ie->data;
|
||||
iebody = (struct ie_body *)((u8 *)bss_desc->bcn_wpa_ie->data +
|
||||
WPA_GTK_OUI_OFFSET);
|
||||
oui = &mwifiex_wpa_oui[cipher][0];
|
||||
ret = mwifiex_search_oui_in_ie(iebody, oui);
|
||||
if (ret)
|
||||
|
||||
@@ -20,11 +20,6 @@ module_param(multipath, bool, 0444);
|
||||
MODULE_PARM_DESC(multipath,
|
||||
"turn on native support for multiple controllers per subsystem");
|
||||
|
||||
inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
|
||||
{
|
||||
return multipath && ctrl->subsys && (ctrl->subsys->cmic & (1 << 3));
|
||||
}
|
||||
|
||||
/*
|
||||
* If multipathing is enabled we need to always use the subsystem instance
|
||||
* number for numbering our devices to avoid conflicts between subsystems that
|
||||
@@ -516,7 +511,8 @@ int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
|
||||
{
|
||||
int error;
|
||||
|
||||
if (!nvme_ctrl_use_ana(ctrl))
|
||||
/* check if multipath is enabled and we have the capability */
|
||||
if (!multipath || !ctrl->subsys || !(ctrl->subsys->cmic & (1 << 3)))
|
||||
return 0;
|
||||
|
||||
ctrl->anacap = id->anacap;
|
||||
|
||||
@@ -464,7 +464,11 @@ extern const struct attribute_group nvme_ns_id_attr_group;
|
||||
extern const struct block_device_operations nvme_ns_head_ops;
|
||||
|
||||
#ifdef CONFIG_NVME_MULTIPATH
|
||||
bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl);
|
||||
static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
|
||||
{
|
||||
return ctrl->ana_log_buf != NULL;
|
||||
}
|
||||
|
||||
void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns,
|
||||
struct nvme_ctrl *ctrl, int *flags);
|
||||
void nvme_failover_req(struct request *req);
|
||||
|
||||
@@ -1569,13 +1569,13 @@ static int handle_outbound(struct qdio_q *q, unsigned int callflags,
|
||||
rc = qdio_kick_outbound_q(q, phys_aob);
|
||||
} else if (need_siga_sync(q)) {
|
||||
rc = qdio_siga_sync_q(q);
|
||||
} else if (count < QDIO_MAX_BUFFERS_PER_Q &&
|
||||
get_buf_state(q, prev_buf(bufnr), &state, 0) > 0 &&
|
||||
state == SLSB_CU_OUTPUT_PRIMED) {
|
||||
/* The previous buffer is not processed yet, tack on. */
|
||||
qperf_inc(q, fast_requeue);
|
||||
} else {
|
||||
/* try to fast requeue buffers */
|
||||
get_buf_state(q, prev_buf(bufnr), &state, 0);
|
||||
if (state != SLSB_CU_OUTPUT_PRIMED)
|
||||
rc = qdio_kick_outbound_q(q, 0);
|
||||
else
|
||||
qperf_inc(q, fast_requeue);
|
||||
rc = qdio_kick_outbound_q(q, 0);
|
||||
}
|
||||
|
||||
/* in case of SIGA errors we must process the error immediately */
|
||||
|
||||
@@ -89,8 +89,10 @@ static int pfn_array_alloc_pin(struct pfn_array *pa, struct device *mdev,
|
||||
sizeof(*pa->pa_iova_pfn) +
|
||||
sizeof(*pa->pa_pfn),
|
||||
GFP_KERNEL);
|
||||
if (unlikely(!pa->pa_iova_pfn))
|
||||
if (unlikely(!pa->pa_iova_pfn)) {
|
||||
pa->pa_nr = 0;
|
||||
return -ENOMEM;
|
||||
}
|
||||
pa->pa_pfn = pa->pa_iova_pfn + pa->pa_nr;
|
||||
|
||||
pa->pa_iova_pfn[0] = pa->pa_iova >> PAGE_SHIFT;
|
||||
|
||||
@@ -54,6 +54,7 @@
|
||||
#define ALUA_FAILOVER_TIMEOUT 60
|
||||
#define ALUA_FAILOVER_RETRIES 5
|
||||
#define ALUA_RTPG_DELAY_MSECS 5
|
||||
#define ALUA_RTPG_RETRY_DELAY 2
|
||||
|
||||
/* device handler flags */
|
||||
#define ALUA_OPTIMIZE_STPG 0x01
|
||||
@@ -696,7 +697,7 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
|
||||
case SCSI_ACCESS_STATE_TRANSITIONING:
|
||||
if (time_before(jiffies, pg->expiry)) {
|
||||
/* State transition, retry */
|
||||
pg->interval = 2;
|
||||
pg->interval = ALUA_RTPG_RETRY_DELAY;
|
||||
err = SCSI_DH_RETRY;
|
||||
} else {
|
||||
struct alua_dh_data *h;
|
||||
@@ -821,6 +822,8 @@ static void alua_rtpg_work(struct work_struct *work)
|
||||
spin_lock_irqsave(&pg->lock, flags);
|
||||
pg->flags &= ~ALUA_PG_RUNNING;
|
||||
pg->flags |= ALUA_PG_RUN_RTPG;
|
||||
if (!pg->interval)
|
||||
pg->interval = ALUA_RTPG_RETRY_DELAY;
|
||||
spin_unlock_irqrestore(&pg->lock, flags);
|
||||
queue_delayed_work(kaluad_wq, &pg->rtpg_work,
|
||||
pg->interval * HZ);
|
||||
@@ -832,6 +835,8 @@ static void alua_rtpg_work(struct work_struct *work)
|
||||
spin_lock_irqsave(&pg->lock, flags);
|
||||
if (err == SCSI_DH_RETRY || pg->flags & ALUA_PG_RUN_RTPG) {
|
||||
pg->flags &= ~ALUA_PG_RUNNING;
|
||||
if (!pg->interval && !(pg->flags & ALUA_PG_RUN_RTPG))
|
||||
pg->interval = ALUA_RTPG_RETRY_DELAY;
|
||||
pg->flags |= ALUA_PG_RUN_RTPG;
|
||||
spin_unlock_irqrestore(&pg->lock, flags);
|
||||
queue_delayed_work(kaluad_wq, &pg->rtpg_work,
|
||||
|
||||
@@ -4874,8 +4874,8 @@ static int ibmvfc_remove(struct vio_dev *vdev)
|
||||
|
||||
spin_lock_irqsave(vhost->host->host_lock, flags);
|
||||
ibmvfc_purge_requests(vhost, DID_ERROR);
|
||||
ibmvfc_free_event_pool(vhost);
|
||||
spin_unlock_irqrestore(vhost->host->host_lock, flags);
|
||||
ibmvfc_free_event_pool(vhost);
|
||||
|
||||
ibmvfc_free_mem(vhost);
|
||||
spin_lock(&ibmvfc_driver_lock);
|
||||
|
||||
@@ -3025,6 +3025,7 @@ megasas_fw_crash_buffer_show(struct device *cdev,
|
||||
u32 size;
|
||||
unsigned long buff_addr;
|
||||
unsigned long dmachunk = CRASH_DMA_BUF_SIZE;
|
||||
unsigned long chunk_left_bytes;
|
||||
unsigned long src_addr;
|
||||
unsigned long flags;
|
||||
u32 buff_offset;
|
||||
@@ -3050,6 +3051,8 @@ megasas_fw_crash_buffer_show(struct device *cdev,
|
||||
}
|
||||
|
||||
size = (instance->fw_crash_buffer_size * dmachunk) - buff_offset;
|
||||
chunk_left_bytes = dmachunk - (buff_offset % dmachunk);
|
||||
size = (size > chunk_left_bytes) ? chunk_left_bytes : size;
|
||||
size = (size >= PAGE_SIZE) ? (PAGE_SIZE - 1) : size;
|
||||
|
||||
src_addr = (unsigned long)instance->crash_buf[buff_offset / dmachunk] +
|
||||
|
||||
@@ -8,11 +8,14 @@
|
||||
#include <linux/list.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/swap.h>
|
||||
#include <linux/sched/signal.h>
|
||||
|
||||
#include "ion.h"
|
||||
|
||||
static inline struct page *ion_page_pool_alloc_pages(struct ion_page_pool *pool)
|
||||
{
|
||||
if (fatal_signal_pending(current))
|
||||
return NULL;
|
||||
return alloc_pages(pool->gfp_mask, pool->order);
|
||||
}
|
||||
|
||||
|
||||
@@ -538,7 +538,7 @@ static ssize_t sysfs_show(struct device *device, struct device_attribute *attr,
|
||||
break;
|
||||
case ATTR_KERNEL_HIB_SIMPLE_PAGE_TABLE_SIZE:
|
||||
ret = scnprintf(buf, PAGE_SIZE, "%u\n",
|
||||
gasket_page_table_num_entries(
|
||||
gasket_page_table_num_simple_entries(
|
||||
gasket_dev->page_table[0]));
|
||||
break;
|
||||
case ATTR_KERNEL_HIB_NUM_ACTIVE_PAGES:
|
||||
|
||||
@@ -116,8 +116,7 @@ static void __ldsem_wake_readers(struct ld_semaphore *sem)
|
||||
|
||||
list_for_each_entry_safe(waiter, next, &sem->read_wait, list) {
|
||||
tsk = waiter->task;
|
||||
smp_mb();
|
||||
waiter->task = NULL;
|
||||
smp_store_release(&waiter->task, NULL);
|
||||
wake_up_process(tsk);
|
||||
put_task_struct(tsk);
|
||||
}
|
||||
@@ -217,7 +216,7 @@ down_read_failed(struct ld_semaphore *sem, long count, long timeout)
|
||||
for (;;) {
|
||||
set_current_state(TASK_UNINTERRUPTIBLE);
|
||||
|
||||
if (!waiter.task)
|
||||
if (!smp_load_acquire(&waiter.task))
|
||||
break;
|
||||
if (!timeout)
|
||||
break;
|
||||
|
||||
@@ -1792,8 +1792,6 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
|
||||
return 0;
|
||||
|
||||
error:
|
||||
if (as && as->usbm)
|
||||
dec_usb_memory_use_count(as->usbm, &as->usbm->urb_use_count);
|
||||
kfree(isopkt);
|
||||
kfree(dr);
|
||||
if (as)
|
||||
|
||||
@@ -238,10 +238,15 @@ int xhci_rcar_init_quirk(struct usb_hcd *hcd)
|
||||
* pointers. So, this driver clears the AC64 bit of xhci->hcc_params
|
||||
* to call dma_set_coherent_mask(dev, DMA_BIT_MASK(32)) in
|
||||
* xhci_gen_setup().
|
||||
*
|
||||
* And, since the firmware/internal CPU control the USBSTS.STS_HALT
|
||||
* and the process speed is down when the roothub port enters U3,
|
||||
* long delay for the handshake of STS_HALT is neeed in xhci_suspend().
|
||||
*/
|
||||
if (xhci_rcar_is_gen2(hcd->self.controller) ||
|
||||
xhci_rcar_is_gen3(hcd->self.controller))
|
||||
xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
|
||||
xhci_rcar_is_gen3(hcd->self.controller)) {
|
||||
xhci->quirks |= XHCI_NO_64BIT_SUPPORT | XHCI_SLOW_SUSPEND;
|
||||
}
|
||||
|
||||
if (!xhci_rcar_wait_for_pll_active(hcd))
|
||||
return -ETIMEDOUT;
|
||||
|
||||
@@ -866,19 +866,20 @@ static void iowarrior_disconnect(struct usb_interface *interface)
|
||||
dev = usb_get_intfdata(interface);
|
||||
mutex_lock(&iowarrior_open_disc_lock);
|
||||
usb_set_intfdata(interface, NULL);
|
||||
/* prevent device read, write and ioctl */
|
||||
dev->present = 0;
|
||||
|
||||
minor = dev->minor;
|
||||
mutex_unlock(&iowarrior_open_disc_lock);
|
||||
/* give back our minor - this will call close() locks need to be dropped at this point*/
|
||||
|
||||
/* give back our minor */
|
||||
usb_deregister_dev(interface, &iowarrior_class);
|
||||
|
||||
mutex_lock(&dev->mutex);
|
||||
|
||||
/* prevent device read, write and ioctl */
|
||||
dev->present = 0;
|
||||
|
||||
mutex_unlock(&dev->mutex);
|
||||
mutex_unlock(&iowarrior_open_disc_lock);
|
||||
|
||||
if (dev->opened) {
|
||||
/* There is a process that holds a filedescriptor to the device ,
|
||||
|
||||
@@ -92,7 +92,6 @@ static void yurex_delete(struct kref *kref)
|
||||
|
||||
dev_dbg(&dev->interface->dev, "%s\n", __func__);
|
||||
|
||||
usb_put_dev(dev->udev);
|
||||
if (dev->cntl_urb) {
|
||||
usb_kill_urb(dev->cntl_urb);
|
||||
kfree(dev->cntl_req);
|
||||
@@ -108,6 +107,7 @@ static void yurex_delete(struct kref *kref)
|
||||
dev->int_buffer, dev->urb->transfer_dma);
|
||||
usb_free_urb(dev->urb);
|
||||
}
|
||||
usb_put_dev(dev->udev);
|
||||
kfree(dev);
|
||||
}
|
||||
|
||||
|
||||
@@ -378,7 +378,8 @@ static enum tcpm_state tcpm_default_state(struct tcpm_port *port)
|
||||
return SNK_UNATTACHED;
|
||||
else if (port->try_role == TYPEC_SOURCE)
|
||||
return SRC_UNATTACHED;
|
||||
else if (port->tcpc->config->default_role == TYPEC_SINK)
|
||||
else if (port->tcpc->config &&
|
||||
port->tcpc->config->default_role == TYPEC_SINK)
|
||||
return SNK_UNATTACHED;
|
||||
/* Fall through to return SRC_UNATTACHED */
|
||||
} else if (port->port_type == TYPEC_PORT_SNK) {
|
||||
@@ -585,7 +586,20 @@ static void tcpm_debugfs_init(struct tcpm_port *port)
|
||||
|
||||
static void tcpm_debugfs_exit(struct tcpm_port *port)
|
||||
{
|
||||
int i;
|
||||
|
||||
mutex_lock(&port->logbuffer_lock);
|
||||
for (i = 0; i < LOG_BUFFER_ENTRIES; i++) {
|
||||
kfree(port->logbuffer[i]);
|
||||
port->logbuffer[i] = NULL;
|
||||
}
|
||||
mutex_unlock(&port->logbuffer_lock);
|
||||
|
||||
debugfs_remove(port->dentry);
|
||||
if (list_empty(&rootdir->d_subdirs)) {
|
||||
debugfs_remove(rootdir);
|
||||
rootdir = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
#else
|
||||
@@ -1094,7 +1108,8 @@ static int tcpm_pd_svdm(struct tcpm_port *port, const __le32 *payload, int cnt,
|
||||
break;
|
||||
case CMD_ATTENTION:
|
||||
/* Attention command does not have response */
|
||||
typec_altmode_attention(adev, p[1]);
|
||||
if (adev)
|
||||
typec_altmode_attention(adev, p[1]);
|
||||
return 0;
|
||||
default:
|
||||
break;
|
||||
@@ -1146,20 +1161,26 @@ static int tcpm_pd_svdm(struct tcpm_port *port, const __le32 *payload, int cnt,
|
||||
}
|
||||
break;
|
||||
case CMD_ENTER_MODE:
|
||||
typec_altmode_update_active(pdev, true);
|
||||
if (adev && pdev) {
|
||||
typec_altmode_update_active(pdev, true);
|
||||
|
||||
if (typec_altmode_vdm(adev, p[0], &p[1], cnt)) {
|
||||
response[0] = VDO(adev->svid, 1, CMD_EXIT_MODE);
|
||||
response[0] |= VDO_OPOS(adev->mode);
|
||||
return 1;
|
||||
if (typec_altmode_vdm(adev, p[0], &p[1], cnt)) {
|
||||
response[0] = VDO(adev->svid, 1,
|
||||
CMD_EXIT_MODE);
|
||||
response[0] |= VDO_OPOS(adev->mode);
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
case CMD_EXIT_MODE:
|
||||
typec_altmode_update_active(pdev, false);
|
||||
if (adev && pdev) {
|
||||
typec_altmode_update_active(pdev, false);
|
||||
|
||||
/* Back to USB Operation */
|
||||
WARN_ON(typec_altmode_notify(adev, TYPEC_STATE_USB,
|
||||
NULL));
|
||||
/* Back to USB Operation */
|
||||
WARN_ON(typec_altmode_notify(adev,
|
||||
TYPEC_STATE_USB,
|
||||
NULL));
|
||||
}
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
@@ -1169,8 +1190,10 @@ static int tcpm_pd_svdm(struct tcpm_port *port, const __le32 *payload, int cnt,
|
||||
switch (cmd) {
|
||||
case CMD_ENTER_MODE:
|
||||
/* Back to USB Operation */
|
||||
WARN_ON(typec_altmode_notify(adev, TYPEC_STATE_USB,
|
||||
NULL));
|
||||
if (adev)
|
||||
WARN_ON(typec_altmode_notify(adev,
|
||||
TYPEC_STATE_USB,
|
||||
NULL));
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
@@ -1181,7 +1204,8 @@ static int tcpm_pd_svdm(struct tcpm_port *port, const __le32 *payload, int cnt,
|
||||
}
|
||||
|
||||
/* Informing the alternate mode drivers about everything */
|
||||
typec_altmode_vdm(adev, p[0], &p[1], cnt);
|
||||
if (adev)
|
||||
typec_altmode_vdm(adev, p[0], &p[1], cnt);
|
||||
|
||||
return rlen;
|
||||
}
|
||||
@@ -4083,7 +4107,7 @@ static int tcpm_try_role(const struct typec_capability *cap, int role)
|
||||
mutex_lock(&port->lock);
|
||||
if (tcpc->try_role)
|
||||
ret = tcpc->try_role(tcpc, role);
|
||||
if (!ret && !tcpc->config->try_role_hw)
|
||||
if (!ret && (!tcpc->config || !tcpc->config->try_role_hw))
|
||||
port->try_role = role;
|
||||
port->try_src_count = 0;
|
||||
port->try_snk_count = 0;
|
||||
@@ -4730,7 +4754,7 @@ static int tcpm_copy_caps(struct tcpm_port *port,
|
||||
port->typec_caps.prefer_role = tcfg->default_role;
|
||||
port->typec_caps.type = tcfg->type;
|
||||
port->typec_caps.data = tcfg->data;
|
||||
port->self_powered = port->tcpc->config->self_powered;
|
||||
port->self_powered = tcfg->self_powered;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -168,7 +168,7 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon)
|
||||
if (tcon == NULL)
|
||||
return 0;
|
||||
|
||||
if (smb2_command == SMB2_TREE_CONNECT)
|
||||
if (smb2_command == SMB2_TREE_CONNECT || smb2_command == SMB2_IOCTL)
|
||||
return 0;
|
||||
|
||||
if (tcon->tidStatus == CifsExiting) {
|
||||
@@ -1006,7 +1006,12 @@ SMB2_sess_alloc_buffer(struct SMB2_sess_data *sess_data)
|
||||
else
|
||||
req->SecurityMode = 0;
|
||||
|
||||
#ifdef CONFIG_CIFS_DFS_UPCALL
|
||||
req->Capabilities = cpu_to_le32(SMB2_GLOBAL_CAP_DFS);
|
||||
#else
|
||||
req->Capabilities = 0;
|
||||
#endif /* DFS_UPCALL */
|
||||
|
||||
req->Channel = 0; /* MBZ */
|
||||
|
||||
sess_data->iov[0].iov_base = (char *)req;
|
||||
|
||||
 fs/dax.c | 2 +-

--- a/fs/dax.c
+++ b/fs/dax.c
@@ -659,7 +659,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 	 * guaranteed to either see new references or prevent new
 	 * references from being established.
 	 */
-	unmap_mapping_range(mapping, 0, 0, 1);
+	unmap_mapping_range(mapping, 0, 0, 0);
 
 	while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
 				min(end - index, (pgoff_t)PAGEVEC_SIZE),
 fs/gfs2/bmap.c | 164

--- a/fs/gfs2/bmap.c
+++ b/fs/gfs2/bmap.c
@@ -392,6 +392,19 @@ static int fillup_metapath(struct gfs2_inode *ip, struct metapath *mp, int h)
|
||||
return mp->mp_aheight - x - 1;
|
||||
}
|
||||
|
||||
static sector_t metapath_to_block(struct gfs2_sbd *sdp, struct metapath *mp)
|
||||
{
|
||||
sector_t factor = 1, block = 0;
|
||||
int hgt;
|
||||
|
||||
for (hgt = mp->mp_fheight - 1; hgt >= 0; hgt--) {
|
||||
if (hgt < mp->mp_aheight)
|
||||
block += mp->mp_list[hgt] * factor;
|
||||
factor *= sdp->sd_inptrs;
|
||||
}
|
||||
return block;
|
||||
}
|
||||
|
||||
static void release_metapath(struct metapath *mp)
|
||||
{
|
||||
int i;
|
||||
@@ -432,60 +445,84 @@ static inline unsigned int gfs2_extent_length(struct buffer_head *bh, __be64 *pt
|
||||
return ptr - first;
|
||||
}
|
||||
|
||||
typedef const __be64 *(*gfs2_metadata_walker)(
|
||||
struct metapath *mp,
|
||||
const __be64 *start, const __be64 *end,
|
||||
u64 factor, void *data);
|
||||
enum walker_status { WALK_STOP, WALK_FOLLOW, WALK_CONTINUE };
|
||||
|
||||
#define WALK_STOP ((__be64 *)0)
|
||||
#define WALK_NEXT ((__be64 *)1)
|
||||
/*
|
||||
* gfs2_metadata_walker - walk an indirect block
|
||||
* @mp: Metapath to indirect block
|
||||
* @ptrs: Number of pointers to look at
|
||||
*
|
||||
* When returning WALK_FOLLOW, the walker must update @mp to point at the right
|
||||
* indirect block to follow.
|
||||
*/
|
||||
typedef enum walker_status (*gfs2_metadata_walker)(struct metapath *mp,
|
||||
unsigned int ptrs);
|
||||
|
||||
static int gfs2_walk_metadata(struct inode *inode, sector_t lblock,
|
||||
u64 len, struct metapath *mp, gfs2_metadata_walker walker,
|
||||
void *data)
|
||||
/*
|
||||
* gfs2_walk_metadata - walk a tree of indirect blocks
|
||||
* @inode: The inode
|
||||
* @mp: Starting point of walk
|
||||
* @max_len: Maximum number of blocks to walk
|
||||
* @walker: Called during the walk
|
||||
*
|
||||
* Returns 1 if the walk was stopped by @walker, 0 if we went past @max_len or
|
||||
* past the end of metadata, and a negative error code otherwise.
|
||||
*/
|
||||
|
||||
static int gfs2_walk_metadata(struct inode *inode, struct metapath *mp,
|
||||
u64 max_len, gfs2_metadata_walker walker)
|
||||
{
|
||||
struct metapath clone;
|
||||
struct gfs2_inode *ip = GFS2_I(inode);
|
||||
struct gfs2_sbd *sdp = GFS2_SB(inode);
|
||||
const __be64 *start, *end, *ptr;
|
||||
u64 factor = 1;
|
||||
unsigned int hgt;
|
||||
int ret = 0;
|
||||
int ret;
|
||||
|
||||
for (hgt = ip->i_height - 1; hgt >= mp->mp_aheight; hgt--)
|
||||
/*
|
||||
* The walk starts in the lowest allocated indirect block, which may be
|
||||
* before the position indicated by @mp. Adjust @max_len accordingly
|
||||
* to avoid a short walk.
|
||||
*/
|
||||
for (hgt = mp->mp_fheight - 1; hgt >= mp->mp_aheight; hgt--) {
|
||||
max_len += mp->mp_list[hgt] * factor;
|
||||
mp->mp_list[hgt] = 0;
|
||||
factor *= sdp->sd_inptrs;
|
||||
}
|
||||
|
||||
for (;;) {
|
||||
u64 step;
|
||||
u16 start = mp->mp_list[hgt];
|
||||
enum walker_status status;
|
||||
unsigned int ptrs;
|
||||
u64 len;
|
||||
|
||||
/* Walk indirect block. */
|
||||
start = metapointer(hgt, mp);
|
||||
end = metaend(hgt, mp);
|
||||
|
||||
step = (end - start) * factor;
|
||||
if (step > len)
|
||||
end = start + DIV_ROUND_UP_ULL(len, factor);
|
||||
|
||||
ptr = walker(mp, start, end, factor, data);
|
||||
if (ptr == WALK_STOP)
|
||||
ptrs = (hgt >= 1 ? sdp->sd_inptrs : sdp->sd_diptrs) - start;
|
||||
len = ptrs * factor;
|
||||
if (len > max_len)
|
||||
ptrs = DIV_ROUND_UP_ULL(max_len, factor);
|
||||
status = walker(mp, ptrs);
|
||||
switch (status) {
|
||||
case WALK_STOP:
|
||||
return 1;
|
||||
case WALK_FOLLOW:
|
||||
BUG_ON(mp->mp_aheight == mp->mp_fheight);
|
||||
ptrs = mp->mp_list[hgt] - start;
|
||||
len = ptrs * factor;
|
||||
break;
|
||||
if (step >= len)
|
||||
case WALK_CONTINUE:
|
||||
break;
|
||||
len -= step;
|
||||
if (ptr != WALK_NEXT) {
|
||||
BUG_ON(!*ptr);
|
||||
mp->mp_list[hgt] += ptr - start;
|
||||
goto fill_up_metapath;
|
||||
}
|
||||
if (len >= max_len)
|
||||
break;
|
||||
max_len -= len;
|
||||
if (status == WALK_FOLLOW)
|
||||
goto fill_up_metapath;
|
||||
|
||||
lower_metapath:
|
||||
/* Decrease height of metapath. */
|
||||
if (mp != &clone) {
|
||||
clone_metapath(&clone, mp);
|
||||
mp = &clone;
|
||||
}
|
||||
brelse(mp->mp_bh[hgt]);
|
||||
mp->mp_bh[hgt] = NULL;
|
||||
mp->mp_list[hgt] = 0;
|
||||
if (!hgt)
|
||||
break;
|
||||
hgt--;
|
||||
@@ -493,10 +530,7 @@ static int gfs2_walk_metadata(struct inode *inode, sector_t lblock,
|
||||
|
||||
/* Advance in metadata tree. */
|
||||
(mp->mp_list[hgt])++;
|
||||
start = metapointer(hgt, mp);
|
||||
end = metaend(hgt, mp);
|
||||
if (start >= end) {
|
||||
mp->mp_list[hgt] = 0;
|
||||
if (mp->mp_list[hgt] >= sdp->sd_inptrs) {
|
||||
if (!hgt)
|
||||
break;
|
||||
goto lower_metapath;
|
||||
@@ -504,44 +538,36 @@ static int gfs2_walk_metadata(struct inode *inode, sector_t lblock,
|
||||
|
||||
fill_up_metapath:
|
||||
/* Increase height of metapath. */
|
||||
if (mp != &clone) {
|
||||
clone_metapath(&clone, mp);
|
||||
mp = &clone;
|
||||
}
|
||||
ret = fillup_metapath(ip, mp, ip->i_height - 1);
|
||||
if (ret < 0)
|
||||
break;
|
||||
return ret;
|
||||
hgt += ret;
|
||||
for (; ret; ret--)
|
||||
do_div(factor, sdp->sd_inptrs);
|
||||
mp->mp_aheight = hgt + 1;
|
||||
}
|
||||
if (mp == &clone)
|
||||
release_metapath(mp);
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct gfs2_hole_walker_args {
|
||||
u64 blocks;
|
||||
};
|
||||
|
||||
static const __be64 *gfs2_hole_walker(struct metapath *mp,
|
||||
const __be64 *start, const __be64 *end,
|
||||
u64 factor, void *data)
|
||||
static enum walker_status gfs2_hole_walker(struct metapath *mp,
|
||||
unsigned int ptrs)
|
||||
{
|
||||
struct gfs2_hole_walker_args *args = data;
|
||||
const __be64 *ptr;
|
||||
const __be64 *start, *ptr, *end;
|
||||
unsigned int hgt;
|
||||
|
||||
hgt = mp->mp_aheight - 1;
|
||||
start = metapointer(hgt, mp);
|
||||
end = start + ptrs;
|
||||
|
||||
for (ptr = start; ptr < end; ptr++) {
|
||||
if (*ptr) {
|
||||
args->blocks += (ptr - start) * factor;
|
||||
mp->mp_list[hgt] += ptr - start;
|
||||
if (mp->mp_aheight == mp->mp_fheight)
|
||||
return WALK_STOP;
|
||||
return ptr; /* increase height */
|
||||
return WALK_FOLLOW;
|
||||
}
|
||||
}
|
||||
args->blocks += (end - start) * factor;
|
||||
return WALK_NEXT;
|
||||
return WALK_CONTINUE;
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -559,12 +585,24 @@ static const __be64 *gfs2_hole_walker(struct metapath *mp,
|
||||
static int gfs2_hole_size(struct inode *inode, sector_t lblock, u64 len,
|
||||
struct metapath *mp, struct iomap *iomap)
|
||||
{
|
||||
struct gfs2_hole_walker_args args = { };
|
||||
int ret = 0;
|
||||
struct metapath clone;
|
||||
u64 hole_size;
|
||||
int ret;
|
||||
|
||||
ret = gfs2_walk_metadata(inode, lblock, len, mp, gfs2_hole_walker, &args);
|
||||
if (!ret)
|
||||
iomap->length = args.blocks << inode->i_blkbits;
|
||||
clone_metapath(&clone, mp);
|
||||
ret = gfs2_walk_metadata(inode, &clone, len, gfs2_hole_walker);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
if (ret == 1)
|
||||
hole_size = metapath_to_block(GFS2_SB(inode), &clone) - lblock;
|
||||
else
|
||||
hole_size = len;
|
||||
iomap->length = hole_size << inode->i_blkbits;
|
||||
ret = 0;
|
||||
|
||||
out:
|
||||
release_metapath(&clone);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
||||
@@ -3133,7 +3133,7 @@ static int _nfs4_do_setattr(struct inode *inode,

if (nfs4_copy_delegation_stateid(inode, FMODE_WRITE, &arg->stateid, &delegation_cred)) {
/* Use that stateid */
} else if (ctx != NULL) {
} else if (ctx != NULL && ctx->state) {
struct nfs_lock_context *l_ctx;
if (!nfs4_valid_open_stateid(ctx->state))
return -EBADF;

@@ -173,6 +173,8 @@ struct ccp_aes_engine {
enum ccp_aes_mode mode;
enum ccp_aes_action action;

u32 authsize;

struct scatterlist *key;
u32 key_len; /* In bytes */

@@ -818,6 +818,7 @@ void kvm_arch_check_processor_compat(void *rtn);
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu);

#ifndef __KVM_HAVE_ARCH_VM_ALLOC
/*

@@ -171,10 +171,7 @@ static inline void snd_compr_drain_notify(struct snd_compr_stream *stream)
if (snd_BUG_ON(!stream))
return;

if (stream->direction == SND_COMPRESS_PLAYBACK)
stream->runtime->state = SNDRV_PCM_STATE_SETUP;
else
stream->runtime->state = SNDRV_PCM_STATE_PREPARED;
stream->runtime->state = SNDRV_PCM_STATE_SETUP;

wake_up(&stream->runtime->sleep);
}

@@ -2732,7 +2732,7 @@ enum nl80211_attrs {
#define NL80211_HT_CAPABILITY_LEN 26
#define NL80211_VHT_CAPABILITY_LEN 12
#define NL80211_HE_MIN_CAPABILITY_LEN 16
#define NL80211_HE_MAX_CAPABILITY_LEN 51
#define NL80211_HE_MAX_CAPABILITY_LEN 54
#define NL80211_MAX_NR_CIPHER_SUITES 5
#define NL80211_MAX_NR_AKM_SUITES 2

@@ -10965,7 +10965,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
goto err_unlock;
}

perf_install_in_context(ctx, event, cpu);
perf_install_in_context(ctx, event, event->cpu);
perf_unpin_context(ctx);
mutex_unlock(&ctx->mutex);

@@ -894,8 +894,11 @@ static int __init test_firmware_init(void)
return -ENOMEM;

rc = __test_firmware_config_init();
if (rc)
if (rc) {
kfree(test_fw_config);
pr_err("could not init firmware test config: %d\n", rc);
return rc;
}

rc = misc_register(&test_fw_misc_device);
if (rc) {

@@ -1751,6 +1751,12 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!addr)
return NULL;

/*
* First make sure the mappings are removed from all page-tables
* before they are freed.
*/
vmalloc_sync_all();

/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
* flag. It means that vm_struct is not fully initialized.
@@ -2296,6 +2302,9 @@ EXPORT_SYMBOL(remap_vmalloc_range);
/*
* Implement a stub for vmalloc_sync_all() if the architecture chose not to
* have one.
*
* The purpose of this function is to make sure the vmalloc area
* mappings are identical in all page-tables in the system.
*/
void __weak vmalloc_sync_all(void)
{

@@ -96,6 +96,7 @@ static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par)
flow.flowi4_mark = info->flags & XT_RPFILTER_VALID_MARK ? skb->mark : 0;
flow.flowi4_tos = RT_TOS(iph->tos);
flow.flowi4_scope = RT_SCOPE_UNIVERSE;
flow.flowi4_oif = l3mdev_master_ifindex_rcu(xt_in(par));

return rpfilter_lookup_reverse(xt_net(par), &flow, xt_in(par), info->flags) ^ invert;
}

@@ -58,7 +58,9 @@ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
if (rpfilter_addr_linklocal(&iph->saddr)) {
lookup_flags |= RT6_LOOKUP_F_IFACE;
fl6.flowi6_oif = dev->ifindex;
} else if ((flags & XT_RPFILTER_LOOSE) == 0)
/* Set flowi6_oif for vrf devices to lookup route in l3mdev domain. */
} else if (netif_is_l3_master(dev) || netif_is_l3_slave(dev) ||
(flags & XT_RPFILTER_LOOSE) == 0)
fl6.flowi6_oif = dev->ifindex;

rt = (void *)ip6_route_lookup(net, &fl6, skb, lookup_flags);
@@ -73,7 +75,9 @@ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
goto out;
}

if (rt->rt6i_idev->dev == dev || (flags & XT_RPFILTER_LOOSE))
if (rt->rt6i_idev->dev == dev ||
l3mdev_master_ifindex_rcu(rt->rt6i_idev->dev) == dev->ifindex ||
(flags & XT_RPFILTER_LOOSE))
ret = true;
out:
ip6_rt_put(rt);

@@ -169,11 +169,16 @@ int drv_conf_tx(struct ieee80211_local *local,
if (!check_sdata_in_driver(sdata))
return -EIO;

if (WARN_ONCE(params->cw_min == 0 ||
params->cw_min > params->cw_max,
"%s: invalid CW_min/CW_max: %d/%d\n",
sdata->name, params->cw_min, params->cw_max))
if (params->cw_min == 0 || params->cw_min > params->cw_max) {
/*
* If we can't configure hardware anyway, don't warn. We may
* never have initialized the CW parameters.
*/
WARN_ONCE(local->ops->conf_tx,
"%s: invalid CW_min/CW_max: %d/%d\n",
sdata->name, params->cw_min, params->cw_max);
return -EINVAL;
}

trace_drv_conf_tx(local, sdata, ac, params);
if (local->ops->conf_tx)

@@ -1967,6 +1967,16 @@ ieee80211_sta_wmm_params(struct ieee80211_local *local,
ieee80211_regulatory_limit_wmm_params(sdata, &params[ac], ac);
}

/* WMM specification requires all 4 ACIs. */
for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
if (params[ac].cw_min == 0) {
sdata_info(sdata,
"AP has invalid WMM params (missing AC %d), using defaults\n",
ac);
return false;
}
}

for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
mlme_dbg(sdata,
"WMM AC=%d acm=%d aifs=%d cWmin=%d cWmax=%d txop=%d uapsd=%d, downgraded=%d\n",

@@ -480,6 +480,7 @@ static bool tcp_in_window(const struct nf_conn *ct,
struct ip_ct_tcp_state *receiver = &state->seen[!dir];
const struct nf_conntrack_tuple *tuple = &ct->tuplehash[dir].tuple;
__u32 seq, ack, sack, end, win, swin;
u16 win_raw;
s32 receiver_offset;
bool res, in_recv_win;

@@ -488,7 +489,8 @@ static bool tcp_in_window(const struct nf_conn *ct,
*/
seq = ntohl(tcph->seq);
ack = sack = ntohl(tcph->ack_seq);
win = ntohs(tcph->window);
win_raw = ntohs(tcph->window);
win = win_raw;
end = segment_seq_plus_len(seq, skb->len, dataoff, tcph);

if (receiver->flags & IP_CT_TCP_FLAG_SACK_PERM)
@@ -663,14 +665,14 @@ static bool tcp_in_window(const struct nf_conn *ct,
&& state->last_seq == seq
&& state->last_ack == ack
&& state->last_end == end
&& state->last_win == win)
&& state->last_win == win_raw)
state->retrans++;
else {
state->last_dir = dir;
state->last_seq = seq;
state->last_ack = ack;
state->last_end = end;
state->last_win = win;
state->last_win = win_raw;
state->retrans = 0;
}
}

@@ -575,7 +575,7 @@ static int nfnetlink_bind(struct net *net, int group)
ss = nfnetlink_get_subsys(type << 8);
rcu_read_unlock();
if (!ss)
request_module("nfnetlink-subsys-%d", type);
request_module_nowait("nfnetlink-subsys-%d", type);
return 0;
}
#endif

@@ -196,7 +196,7 @@ static int nft_symhash_init(const struct nft_ctx *ctx,
priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]);

priv->modulus = ntohl(nla_get_be32(tb[NFTA_HASH_MODULUS]));
if (priv->modulus <= 1)
if (priv->modulus < 1)
return -ERANGE;

if (priv->offset + priv->modulus - 1 < priv->offset)

@@ -301,7 +301,7 @@ sub give_redhat_hints()
#
# Checks valid for RHEL/CentOS version 7.x.
#
if (! $system_release =~ /Fedora/) {
if (!($system_release =~ /Fedora/)) {
$map{"virtualenv"} = "python-virtualenv";
}

@@ -575,10 +575,7 @@ snd_compr_set_params(struct snd_compr_stream *stream, unsigned long arg)
stream->metadata_set = false;
stream->next_track = false;

if (stream->direction == SND_COMPRESS_PLAYBACK)
stream->runtime->state = SNDRV_PCM_STATE_SETUP;
else
stream->runtime->state = SNDRV_PCM_STATE_PREPARED;
stream->runtime->state = SNDRV_PCM_STATE_SETUP;
} else {
return -EPERM;
}
@@ -694,8 +691,17 @@ static int snd_compr_start(struct snd_compr_stream *stream)
{
int retval;

if (stream->runtime->state != SNDRV_PCM_STATE_PREPARED)
switch (stream->runtime->state) {
case SNDRV_PCM_STATE_SETUP:
if (stream->direction != SND_COMPRESS_CAPTURE)
return -EPERM;
break;
case SNDRV_PCM_STATE_PREPARED:
break;
default:
return -EPERM;
}

retval = stream->ops->trigger(stream, SNDRV_PCM_TRIGGER_START);
if (!retval)
stream->runtime->state = SNDRV_PCM_STATE_RUNNING;
@@ -706,9 +712,15 @@ static int snd_compr_stop(struct snd_compr_stream *stream)
{
int retval;

if (stream->runtime->state == SNDRV_PCM_STATE_PREPARED ||
stream->runtime->state == SNDRV_PCM_STATE_SETUP)
switch (stream->runtime->state) {
case SNDRV_PCM_STATE_OPEN:
case SNDRV_PCM_STATE_SETUP:
case SNDRV_PCM_STATE_PREPARED:
return -EPERM;
default:
break;
}

retval = stream->ops->trigger(stream, SNDRV_PCM_TRIGGER_STOP);
if (!retval) {
snd_compr_drain_notify(stream);
@@ -796,9 +808,17 @@ static int snd_compr_drain(struct snd_compr_stream *stream)
{
int retval;

if (stream->runtime->state == SNDRV_PCM_STATE_PREPARED ||
stream->runtime->state == SNDRV_PCM_STATE_SETUP)
switch (stream->runtime->state) {
case SNDRV_PCM_STATE_OPEN:
case SNDRV_PCM_STATE_SETUP:
case SNDRV_PCM_STATE_PREPARED:
case SNDRV_PCM_STATE_PAUSED:
return -EPERM;
case SNDRV_PCM_STATE_XRUN:
return -EPIPE;
default:
break;
}

retval = stream->ops->trigger(stream, SND_COMPR_TRIGGER_DRAIN);
if (retval) {
@@ -818,6 +838,10 @@ static int snd_compr_next_track(struct snd_compr_stream *stream)
if (stream->runtime->state != SNDRV_PCM_STATE_RUNNING)
return -EPERM;

/* next track doesn't have any meaning for capture streams */
if (stream->direction == SND_COMPRESS_CAPTURE)
return -EPERM;

/* you can signal next track if this is intended to be a gapless stream
* and current track metadata is set
*/
@@ -835,9 +859,23 @@ static int snd_compr_next_track(struct snd_compr_stream *stream)
static int snd_compr_partial_drain(struct snd_compr_stream *stream)
{
int retval;
if (stream->runtime->state == SNDRV_PCM_STATE_PREPARED ||
stream->runtime->state == SNDRV_PCM_STATE_SETUP)

switch (stream->runtime->state) {
case SNDRV_PCM_STATE_OPEN:
case SNDRV_PCM_STATE_SETUP:
case SNDRV_PCM_STATE_PREPARED:
case SNDRV_PCM_STATE_PAUSED:
return -EPERM;
case SNDRV_PCM_STATE_XRUN:
return -EPIPE;
default:
break;
}

/* partial drain doesn't have any meaning for capture streams */
if (stream->direction == SND_COMPRESS_CAPTURE)
return -EPERM;

/* stream can be drained only when next track has been signalled */
if (stream->next_track == false)
return -EPERM;

@@ -37,7 +37,7 @@ int iso_packets_buffer_init(struct iso_packets_buffer *b, struct fw_unit *unit,
packets_per_page = PAGE_SIZE / packet_size;
if (WARN_ON(!packets_per_page)) {
err = -EINVAL;
goto error;
goto err_packets;
}
pages = DIV_ROUND_UP(count, packets_per_page);

@@ -609,11 +609,9 @@ static int azx_pcm_open(struct snd_pcm_substream *substream)
}
runtime->private_data = azx_dev;

if (chip->gts_present)
azx_pcm_hw.info = azx_pcm_hw.info |
SNDRV_PCM_INFO_HAS_LINK_SYNCHRONIZED_ATIME;

runtime->hw = azx_pcm_hw;
if (chip->gts_present)
runtime->hw.info |= SNDRV_PCM_INFO_HAS_LINK_SYNCHRONIZED_ATIME;
runtime->hw.channels_min = hinfo->channels_min;
runtime->hw.channels_max = hinfo->channels_max;
runtime->hw.formats = hinfo->formats;
@@ -626,6 +624,13 @@ static int azx_pcm_open(struct snd_pcm_substream *substream)
20,
178000000);

/* by some reason, the playback stream stalls on PulseAudio with
* tsched=1 when a capture stream triggers. Until we figure out the
* real cause, disable tsched mode by telling the PCM info flag.
*/
if (chip->driver_caps & AZX_DCAPS_AMD_WORKAROUND)
runtime->hw.info |= SNDRV_PCM_INFO_BATCH;

if (chip->align_buffer_size)
/* constrain buffer sizes to be multiple of 128
bytes. This is more efficient in terms of memory

@@ -40,7 +40,7 @@
/* 14 unused */
#define AZX_DCAPS_CTX_WORKAROUND (1 << 15) /* X-Fi workaround */
#define AZX_DCAPS_POSFIX_LPIB (1 << 16) /* Use LPIB as default */
/* 17 unused */
#define AZX_DCAPS_AMD_WORKAROUND (1 << 17) /* AMD-specific workaround */
#define AZX_DCAPS_NO_64BIT (1 << 18) /* No 64bit address */
#define AZX_DCAPS_SYNC_WRITE (1 << 19) /* sync each cmd write */
#define AZX_DCAPS_OLD_SSYNC (1 << 20) /* Old SSYNC reg for ICH */

@@ -78,6 +78,7 @@ enum {
POS_FIX_VIACOMBO,
POS_FIX_COMBO,
POS_FIX_SKL,
POS_FIX_FIFO,
};

/* Defines for ATI HD Audio support in SB450 south bridge */
@@ -149,7 +150,7 @@ module_param_array(model, charp, NULL, 0444);
MODULE_PARM_DESC(model, "Use the given board model.");
module_param_array(position_fix, int, NULL, 0444);
MODULE_PARM_DESC(position_fix, "DMA pointer read method."
"(-1 = system default, 0 = auto, 1 = LPIB, 2 = POSBUF, 3 = VIACOMBO, 4 = COMBO, 5 = SKL+).");
"(-1 = system default, 0 = auto, 1 = LPIB, 2 = POSBUF, 3 = VIACOMBO, 4 = COMBO, 5 = SKL+, 6 = FIFO).");
module_param_array(bdl_pos_adj, int, NULL, 0644);
MODULE_PARM_DESC(bdl_pos_adj, "BDL position adjustment offset.");
module_param_array(probe_mask, int, NULL, 0444);
@@ -350,6 +351,11 @@ enum {
#define AZX_DCAPS_PRESET_ATI_HDMI_NS \
(AZX_DCAPS_PRESET_ATI_HDMI | AZX_DCAPS_SNOOP_OFF)

/* quirks for AMD SB */
#define AZX_DCAPS_PRESET_AMD_SB \
(AZX_DCAPS_NO_TCSEL | AZX_DCAPS_SYNC_WRITE | AZX_DCAPS_AMD_WORKAROUND |\
AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME)

/* quirks for Nvidia */
#define AZX_DCAPS_PRESET_NVIDIA \
(AZX_DCAPS_NO_MSI | AZX_DCAPS_CORBRP_SELF_CLEAR |\
@@ -920,6 +926,49 @@ static unsigned int azx_via_get_position(struct azx *chip,
return bound_pos + mod_dma_pos;
}

#define AMD_FIFO_SIZE 32

/* get the current DMA position with FIFO size correction */
static unsigned int azx_get_pos_fifo(struct azx *chip, struct azx_dev *azx_dev)
{
struct snd_pcm_substream *substream = azx_dev->core.substream;
struct snd_pcm_runtime *runtime = substream->runtime;
unsigned int pos, delay;

pos = snd_hdac_stream_get_pos_lpib(azx_stream(azx_dev));
if (!runtime)
return pos;

runtime->delay = AMD_FIFO_SIZE;
delay = frames_to_bytes(runtime, AMD_FIFO_SIZE);
if (azx_dev->insufficient) {
if (pos < delay) {
delay = pos;
runtime->delay = bytes_to_frames(runtime, pos);
} else {
azx_dev->insufficient = 0;
}
}

/* correct the DMA position for capture stream */
if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) {
if (pos < delay)
pos += azx_dev->core.bufsize;
pos -= delay;
}

return pos;
}

static int azx_get_delay_from_fifo(struct azx *chip, struct azx_dev *azx_dev,
unsigned int pos)
{
struct snd_pcm_substream *substream = azx_dev->core.substream;

/* just read back the calculated value in the above */
return substream->runtime->delay;
}

static unsigned int azx_skl_get_dpib_pos(struct azx *chip,
struct azx_dev *azx_dev)
{
@@ -1528,6 +1577,7 @@ static int check_position_fix(struct azx *chip, int fix)
case POS_FIX_VIACOMBO:
case POS_FIX_COMBO:
case POS_FIX_SKL:
case POS_FIX_FIFO:
return fix;
}

@@ -1544,6 +1594,10 @@ static int check_position_fix(struct azx *chip, int fix)
dev_dbg(chip->card->dev, "Using VIACOMBO position fix\n");
return POS_FIX_VIACOMBO;
}
if (chip->driver_caps & AZX_DCAPS_AMD_WORKAROUND) {
dev_dbg(chip->card->dev, "Using FIFO position fix\n");
return POS_FIX_FIFO;
}
if (chip->driver_caps & AZX_DCAPS_POSFIX_LPIB) {
dev_dbg(chip->card->dev, "Using LPIB position fix\n");
return POS_FIX_LPIB;
@@ -1564,6 +1618,7 @@ static void assign_position_fix(struct azx *chip, int fix)
[POS_FIX_VIACOMBO] = azx_via_get_position,
[POS_FIX_COMBO] = azx_get_pos_lpib,
[POS_FIX_SKL] = azx_get_pos_skl,
[POS_FIX_FIFO] = azx_get_pos_fifo,
};

chip->get_position[0] = chip->get_position[1] = callbacks[fix];
@@ -1578,6 +1633,9 @@ static void assign_position_fix(struct azx *chip, int fix)
azx_get_delay_from_lpib;
}

if (fix == POS_FIX_FIFO)
chip->get_delay[0] = chip->get_delay[1] =
azx_get_delay_from_fifo;
}

/*
@@ -2594,6 +2652,9 @@ static const struct pci_device_id azx_ids[] = {
/* AMD Hudson */
{ PCI_DEVICE(0x1022, 0x780d),
.driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
/* AMD, X370 & co */
{ PCI_DEVICE(0x1022, 0x1457),
.driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_AMD_SB },
/* AMD Stoney */
{ PCI_DEVICE(0x1022, 0x157a),
.driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB |

@@ -280,7 +280,8 @@ static int sound_insert_unit(struct sound_unit **list, const struct file_operati
goto retry;
}
spin_unlock(&sound_loader_lock);
return -EBUSY;
r = -EBUSY;
goto fail;
}
}

@@ -604,14 +604,13 @@ int hiface_pcm_init(struct hiface_chip *chip, u8 extra_freq)
ret = hiface_pcm_init_urb(&rt->out_urbs[i], chip, OUT_EP,
hiface_pcm_out_urb_handler);
if (ret < 0)
return ret;
goto error;
}

ret = snd_pcm_new(chip->card, "USB-SPDIF Audio", 0, 1, 0, &pcm);
if (ret < 0) {
kfree(rt);
dev_err(&chip->dev->dev, "Cannot create pcm instance\n");
return ret;
goto error;
}

pcm->private_data = rt;
@@ -624,4 +623,10 @@ int hiface_pcm_init(struct hiface_chip *chip, u8 extra_freq)

chip->pcm = rt;
return 0;

error:
for (i = 0; i < PCM_N_URBS; i++)
kfree(rt->out_urbs[i].buffer);
kfree(rt);
return ret;
}

@@ -1053,6 +1053,7 @@ snd_usb_get_audioformat_uac3(struct snd_usb_audio *chip,

pd = kzalloc(sizeof(*pd), GFP_KERNEL);
if (!pd) {
kfree(fp->chmap);
kfree(fp->rate_table);
kfree(fp);
return NULL;

@@ -6,8 +6,9 @@
#include "machine.h"
#include "api/fs/fs.h"
#include "debug.h"
#include "symbol.h"

int arch__fix_module_text_start(u64 *start, const char *name)
int arch__fix_module_text_start(u64 *start, u64 *size, const char *name)
{
u64 m_start = *start;
char path[PATH_MAX];
@@ -17,7 +18,35 @@ int arch__fix_module_text_start(u64 *start, const char *name)
if (sysfs__read_ull(path, (unsigned long long *)start) < 0) {
pr_debug2("Using module %s start:%#lx\n", path, m_start);
*start = m_start;
} else {
/* Successful read of the modules segment text start address.
* Calculate difference between module start address
* in memory and module text segment start address.
* For example module load address is 0x3ff8011b000
* (from /proc/modules) and module text segment start
* address is 0x3ff8011b870 (from file above).
*
* Adjust the module size and subtract the GOT table
* size located at the beginning of the module.
*/
*size -= (*start - m_start);
}

return 0;
}

/* On s390 kernel text segment start is located at very low memory addresses,
* for example 0x10000. Modules are located at very high memory addresses,
* for example 0x3ff xxxx xxxx. The gap between end of kernel text segment
* and beginning of first module's text segment is very big.
* Therefore do not fill this gap and do not assign it to the kernel dso map.
*/
void arch__symbols__fixup_end(struct symbol *p, struct symbol *c)
{
if (strchr(p->name, '[') == NULL && strchr(c->name, '['))
/* Last kernel symbol mapped to end of page */
p->end = roundup(p->end, page_size);
else
p->end = c->start;
pr_debug4("%s sym:%s end:%#lx\n", __func__, p->name, p->end);
}

@@ -711,6 +711,16 @@ __cmd_probe(int argc, const char **argv)

ret = perf_add_probe_events(params.events, params.nevents);
if (ret < 0) {

/*
* When perf_add_probe_events() fails it calls
* cleanup_perf_probe_events(pevs, npevs), i.e.
* cleanup_perf_probe_events(params.events, params.nevents), which
* will call clear_perf_probe_event(), so set nevents to zero
* to avoid cleanup_params() to call clear_perf_probe_event() again
* on the same pevs.
*/
params.nevents = 0;
pr_err_with_code(" Error: Failed to add events.", ret);
return ret;
}

@@ -3472,7 +3472,7 @@ int perf_event__process_feature(struct perf_tool *tool,
return 0;

ff.buf = (void *)fe->data;
ff.size = event->header.size - sizeof(event->header);
ff.size = event->header.size - sizeof(*fe);
ff.ph = &session->header;

if (feat_ops[feat].process(&ff, NULL))

@@ -1295,6 +1295,7 @@ static int machine__set_modules_path(struct machine *machine)
return map_groups__set_modules_path_dir(&machine->kmaps, modules_path, 0);
}
int __weak arch__fix_module_text_start(u64 *start __maybe_unused,
u64 *size __maybe_unused,
const char *name __maybe_unused)
{
return 0;
@@ -1306,7 +1307,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
struct machine *machine = arg;
struct map *map;

if (arch__fix_module_text_start(&start, name) < 0)
if (arch__fix_module_text_start(&start, &size, name) < 0)
return -1;

map = machine__findnew_module_map(machine, start, name);

@@ -219,7 +219,7 @@ struct symbol *machine__find_kernel_symbol_by_name(struct machine *machine,

struct map *machine__findnew_module_map(struct machine *machine, u64 start,
const char *filename);
int arch__fix_module_text_start(u64 *start, const char *name);
int arch__fix_module_text_start(u64 *start, u64 *size, const char *name);

int machine__load_kallsyms(struct machine *machine, const char *filename);

@@ -86,6 +86,11 @@ static int prefix_underscores_count(const char *str)
return tail - str;
}

void __weak arch__symbols__fixup_end(struct symbol *p, struct symbol *c)
{
p->end = c->start;
}

const char * __weak arch__normalize_symbol_name(const char *name)
{
return name;
@@ -212,7 +217,7 @@ void symbols__fixup_end(struct rb_root *symbols)
curr = rb_entry(nd, struct symbol, rb_node);

if (prev->end == prev->start && prev->end != curr->start)
prev->end = curr->start;
arch__symbols__fixup_end(prev, curr);
}

/* Last entry */

@@ -349,6 +349,7 @@ const char *arch__normalize_symbol_name(const char *name);
#define SYMBOL_A 0
#define SYMBOL_B 1

void arch__symbols__fixup_end(struct symbol *p, struct symbol *c);
int arch__compare_symbol_names(const char *namea, const char *nameb);
int arch__compare_symbol_names_n(const char *namea, const char *nameb,
unsigned int n);

@@ -192,14 +192,24 @@ struct comm *thread__comm(const struct thread *thread)

struct comm *thread__exec_comm(const struct thread *thread)
{
struct comm *comm, *last = NULL;
struct comm *comm, *last = NULL, *second_last = NULL;

list_for_each_entry(comm, &thread->comm_list, list) {
if (comm->exec)
return comm;
second_last = last;
last = comm;
}

/*
* 'last' with no start time might be the parent's comm of a synthesized
* thread (created by processing a synthesized fork event). For a main
* thread, that is very probably wrong. Prefer a later comm to avoid
* that case.
*/
if (second_last && !last->start && thread->pid_ == thread->tid)
return second_last;

return last;
}

@@ -2317,6 +2317,29 @@ static bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
#endif
}

/*
* Unlike kvm_arch_vcpu_runnable, this function is called outside
* a vcpu_load/vcpu_put pair. However, for most architectures
* kvm_arch_vcpu_runnable does not require vcpu_load.
*/
bool __weak kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
{
return kvm_arch_vcpu_runnable(vcpu);
}

static bool vcpu_dy_runnable(struct kvm_vcpu *vcpu)
{
if (kvm_arch_dy_runnable(vcpu))
return true;

#ifdef CONFIG_KVM_ASYNC_PF
if (!list_empty_careful(&vcpu->async_pf.done))
return true;
#endif

return false;
}

void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
{
struct kvm *kvm = me->kvm;
@@ -2346,7 +2369,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
continue;
if (vcpu == me)
continue;
if (swait_active(&vcpu->wq) && !kvm_arch_vcpu_runnable(vcpu))
if (swait_active(&vcpu->wq) && !vcpu_dy_runnable(vcpu))
continue;
if (yield_to_kernel_mode && !kvm_arch_vcpu_in_kernel(vcpu))
continue;