Merge 4.19.325 into android-4.19-stable
Changes in 4.19.325
netlink: terminate outstanding dump on socket close
ocfs2: uncache inode which has failed entering the group
nilfs2: fix null-ptr-deref in block_touch_buffer tracepoint
ocfs2: fix UBSAN warning in ocfs2_verify_volume()
nilfs2: fix null-ptr-deref in block_dirty_buffer tracepoint
Revert "mmc: dw_mmc: Fix IDMAC operation with pages bigger than 4K"
media: dvbdev: fix the logic when DVB_DYNAMIC_MINORS is not set
kbuild: Use uname for LINUX_COMPILE_HOST detection
mm: revert "mm: shmem: fix data-race in shmem_getattr()"
ASoC: Intel: bytcr_rt5640: Add DMI quirk for Vexia Edu Atla 10 tablet
mac80211: fix user-power when emulating chanctx
selftests/watchdog-test: Fix system accidentally reset after watchdog-test
x86/amd_nb: Fix compile-testing without CONFIG_AMD_NB
net: usb: qmi_wwan: add Quectel RG650V
proc/softirqs: replace seq_printf with seq_put_decimal_ull_width
nvme: fix metadata handling in nvme-passthrough
initramfs: avoid filename buffer overrun
m68k: mvme147: Fix SCSI controller IRQ numbers
m68k: mvme16x: Add and use "mvme16x.h"
m68k: mvme147: Reinstate early console
acpi/arm64: Adjust error handling procedure in gtdt_parse_timer_block()
s390/syscalls: Avoid creation of arch/arch/ directory
hfsplus: don't query the device logical block size multiple times
EDAC/fsl_ddr: Fix bad bit shift operations
crypto: pcrypt - Call crypto layer directly when padata_do_parallel() return -EBUSY
crypto: cavium - Fix the if condition to exit loop after timeout
crypto: bcm - add error check in the ahash_hmac_init function
crypto: cavium - Fix an error handling path in cpt_ucode_load_fw()
time: Fix references to _msecs_to_jiffies() handling of values
soc: qcom: geni-se: fix array underflow in geni_se_clk_tbl_get()
mmc: mmc_spi: drop buggy snprintf()
ARM: dts: cubieboard4: Fix DCDC5 regulator constraints
regmap: irq: Set lockdep class for hierarchical IRQ domains
firmware: arm_scpi: Check the DVFS OPP count returned by the firmware
drm/mm: Mark drm_mm_interval_tree*() functions with __maybe_unused
wifi: ath9k: add range check for conn_rsp_epid in htc_connect_service()
drm/omap: Fix locking in omap_gem_new_dmabuf()
bpf: Fix the xdp_adjust_tail sample prog issue
wifi: mwifiex: Fix memcpy() field-spanning write warning in mwifiex_config_scan()
drm/etnaviv: consolidate hardware fence handling in etnaviv_gpu
drm/etnaviv: dump: fix sparse warnings
drm/etnaviv: fix power register offset on GC300
drm/etnaviv: hold GPU lock across perfmon sampling
net: rfkill: gpio: Add check for clk_enable()
ALSA: us122l: Use snd_card_free_when_closed() at disconnection
ALSA: caiaq: Use snd_card_free_when_closed() at disconnection
ALSA: 6fire: Release resources at card release
netpoll: Use rcu_access_pointer() in netpoll_poll_lock
trace/trace_event_perf: remove duplicate samples on the first tracepoint event
powerpc/vdso: Flag VDSO64 entry points as functions
mfd: da9052-spi: Change read-mask to write-mask
cpufreq: loongson2: Unregister platform_driver on failure
mtd: rawnand: atmel: Fix possible memory leak
RDMA/bnxt_re: Check cqe flags to know imm_data vs inv_irkey
mfd: rt5033: Fix missing regmap_del_irq_chip()
scsi: bfa: Fix use-after-free in bfad_im_module_exit()
scsi: fusion: Remove unused variable 'rc'
scsi: qedi: Fix a possible memory leak in qedi_alloc_and_init_sb()
ocfs2: fix uninitialized value in ocfs2_file_read_iter()
powerpc/sstep: make emulate_vsx_load and emulate_vsx_store static
fbdev/sh7760fb: Alloc DMA memory from hardware device
fbdev: sh7760fb: Fix a possible memory leak in sh7760fb_alloc_mem()
dt-bindings: clock: adi,axi-clkgen: convert old binding to yaml format
dt-bindings: clock: axi-clkgen: include AXI clk
clk: axi-clkgen: use devm_platform_ioremap_resource() short-hand
clk: clk-axi-clkgen: make sure to enable the AXI bus clock
perf probe: Correct demangled symbols in C++ program
PCI: cpqphp: Use PCI_POSSIBLE_ERROR() to check config reads
PCI: cpqphp: Fix PCIBIOS_* return value confusion
m68k: mcfgpio: Fix incorrect register offset for CONFIG_M5441x
m68k: coldfire/device.c: only build FEC when HW macros are defined
rpmsg: glink: Add TX_DATA_CONT command while sending
rpmsg: glink: Send READ_NOTIFY command in FIFO full case
rpmsg: glink: Fix GLINK command prefix
rpmsg: glink: use only lower 16-bits of param2 for CMD_OPEN name length
NFSD: Prevent NULL dereference in nfsd4_process_cb_update()
NFSD: Cap the number of bytes copied by nfs4_reset_recoverydir()
vfio/pci: Properly hide first-in-list PCIe extended capability
power: supply: core: Remove might_sleep() from power_supply_put()
net: usb: lan78xx: Fix memory leak on device unplug by freeing PHY device
tg3: Set coherent DMA mask bits to 31 for BCM57766 chipsets
net: usb: lan78xx: Fix refcounting and autosuspend on invalid WoL configuration
marvell: pxa168_eth: fix call balance of pep->clk handling routines
net: stmmac: dwmac-socfpga: Set RX watchdog interrupt as broken
usb: using mutex lock and supporting O_NONBLOCK flag in iowarrior_read()
USB: chaoskey: fail open after removal
USB: chaoskey: Fix possible deadlock chaoskey_list_lock
misc: apds990x: Fix missing pm_runtime_disable()
apparmor: fix 'Do simple duplicate message elimination'
usb: ehci-spear: fix call balance of sehci clk handling routines
ext4: supress data-race warnings in ext4_free_inodes_{count,set}()
ext4: fix FS_IOC_GETFSMAP handling
jfs: xattr: check invalid xattr size more strictly
ASoC: codecs: Fix atomicity violation in snd_soc_component_get_drvdata()
PCI: Fix use-after-free of slot->bus on hot remove
tty: ldsic: fix tty_ldisc_autoload sysctl's proc_handler
Bluetooth: Fix type of len in rfcomm_sock_getsockopt{,_old}()
ALSA: usb-audio: Fix potential out-of-bound accesses for Extigy and Mbox devices
Revert "usb: gadget: composite: fix OS descriptors w_value logic"
serial: sh-sci: Clean sci_ports[0] after at earlycon exit
Revert "serial: sh-sci: Clean sci_ports[0] after at earlycon exit"
netfilter: ipset: add missing range check in bitmap_ip_uadt
spi: Fix acpi deferred irq probe
ubi: wl: Put source PEB into correct list if trying locking LEB failed
um: ubd: Do not use drvdata in release
um: net: Do not use drvdata in release
serial: 8250: omap: Move pm_runtime_get_sync
um: vector: Do not use drvdata in release
sh: cpuinfo: Fix a warning for CONFIG_CPUMASK_OFFSTACK
arm64: tls: Fix context-switching of tpidrro_el0 when kpti is enabled
block: fix ordering between checking BLK_MQ_S_STOPPED request adding
HID: wacom: Interpret tilt data from Intuos Pro BT as signed values
media: wl128x: Fix atomicity violation in fmc_send_cmd()
usb: dwc3: gadget: Fix checking for number of TRBs left
lib: string_helpers: silence snprintf() output truncation warning
NFSD: Prevent a potential integer overflow
rpmsg: glink: Propagate TX failures in intentless mode as well
um: Fix the return value of elf_core_copy_task_fpregs
NFSv4.0: Fix a use-after-free problem in the asynchronous open()
rtc: check if __rtc_read_time was successful in rtc_timer_do_work()
ubifs: Correct the total block count by deducting journal reservation
ubi: fastmap: Fix duplicate slab cache names while attaching
jffs2: fix use of uninitialized variable
block: return unsigned int from bdev_io_min
9p/xen: fix init sequence
9p/xen: fix release of IRQ
modpost: remove incorrect code in do_eisa_entry()
sh: intc: Fix use-after-free bug in register_intc_controller()
Linux 4.19.325
Change-Id: I50250c8bd11f9ff4b40da75225c1cfb060e0c258
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
new file mode 100644
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
@@ -0,0 +1,67 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/clock/adi,axi-clkgen.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Binding for Analog Devices AXI clkgen pcore clock generator
+
+maintainers:
+  - Lars-Peter Clausen <lars@metafoo.de>
+  - Michael Hennerich <michael.hennerich@analog.com>
+
+description: |
+  The axi_clkgen IP core is a software programmable clock generator,
+  that can be synthesized on various FPGA platforms.
+
+  Link: https://wiki.analog.com/resources/fpga/docs/axi_clkgen
+
+properties:
+  compatible:
+    enum:
+      - adi,axi-clkgen-2.00.a
+
+  clocks:
+    description:
+      Specifies the reference clock(s) from which the output frequency is
+      derived. This must either reference one clock if only the first clock
+      input is connected or two if both clock inputs are connected. The last
+      clock is the AXI bus clock that needs to be enabled so we can access the
+      core registers.
+    minItems: 2
+    maxItems: 3
+
+  clock-names:
+    oneOf:
+      - items:
+          - const: clkin1
+          - const: s_axi_aclk
+      - items:
+          - const: clkin1
+          - const: clkin2
+          - const: s_axi_aclk
+
+  '#clock-cells':
+    const: 0
+
+  reg:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - '#clock-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    clock-controller@ff000000 {
+      compatible = "adi,axi-clkgen-2.00.a";
+      #clock-cells = <0>;
+      reg = <0xff000000 0x1000>;
+      clocks = <&osc 1>, <&clkc 15>;
+      clock-names = "clkin1", "s_axi_aclk";
+    };
--- a/Documentation/devicetree/bindings/clock/axi-clkgen.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-Binding for the axi-clkgen clock generator
-
-This binding uses the common clock binding[1].
-
-[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
-
-Required properties:
-- compatible : shall be "adi,axi-clkgen-1.00.a" or "adi,axi-clkgen-2.00.a".
-- #clock-cells : from common clock binding; Should always be set to 0.
-- reg : Address and length of the axi-clkgen register set.
-- clocks : Phandle and clock specifier for the parent clock(s). This must
-  either reference one clock if only the first clock input is connected or two
-  if both clock inputs are connected. For the later case the clock connected
-  to the first input must be specified first.
-
-Optional properties:
-- clock-output-names : From common clock binding.
-
-Example:
-	clock@ff000000 {
-		compatible = "adi,axi-clkgen";
-		#clock-cells = <0>;
-		reg = <0xff000000 0x1000>;
-		clocks = <&osc 1>;
-	};
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 324
+SUBLEVEL = 325
 EXTRAVERSION =
 NAME = "People's Front"
 
--- a/arch/arm/boot/dts/sun9i-a80-cubieboard4.dts
+++ b/arch/arm/boot/dts/sun9i-a80-cubieboard4.dts
@@ -253,8 +253,8 @@
 
 		reg_dcdc5: dcdc5 {
 			regulator-always-on;
-			regulator-min-microvolt = <1425000>;
-			regulator-max-microvolt = <1575000>;
+			regulator-min-microvolt = <1450000>;
+			regulator-max-microvolt = <1550000>;
 			regulator-name = "vcc-dram";
 		};
 
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -393,7 +393,7 @@ static void tls_thread_switch(struct task_struct *next)
 
 	if (is_compat_thread(task_thread_info(next)))
 		write_sysreg(next->thread.uw.tp_value, tpidrro_el0);
-	else if (!arm64_kernel_unmapped_at_el0())
+	else
 		write_sysreg(0, tpidrro_el0);
 
 	write_sysreg(*task_user_tls(next), tpidr_el0);
--- a/arch/m68k/coldfire/device.c
+++ b/arch/m68k/coldfire/device.c
@@ -89,7 +89,7 @@ static struct platform_device mcf_uart = {
 	.dev.platform_data	= mcf_uart_platform_data,
 };
 
-#if IS_ENABLED(CONFIG_FEC)
+#ifdef MCFFEC_BASE0
 
 #ifdef CONFIG_M5441x
 #define FEC_NAME	"enet-fec"
@@ -141,6 +141,7 @@ static struct platform_device mcf_fec0 = {
 		.platform_data		= FEC_PDATA,
 	}
 };
+#endif /* MCFFEC_BASE0 */
 
 #ifdef MCFFEC_BASE1
 static struct resource mcf_fec1_resources[] = {
@@ -178,7 +179,6 @@ static struct platform_device mcf_fec1 = {
 	}
 };
 #endif /* MCFFEC_BASE1 */
-#endif /* CONFIG_FEC */
 
 #if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI)
 /*
@@ -478,12 +478,12 @@ static struct platform_device mcf_i2c5 = {
 
 static struct platform_device *mcf_devices[] __initdata = {
 	&mcf_uart,
-#if IS_ENABLED(CONFIG_FEC)
+#ifdef MCFFEC_BASE0
 	&mcf_fec0,
+#endif
 #ifdef MCFFEC_BASE1
 	&mcf_fec1,
 #endif
-#endif
 #if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI)
 	&mcf_qspi,
 #endif
--- a/arch/m68k/include/asm/mcfgpio.h
+++ b/arch/m68k/include/asm/mcfgpio.h
@@ -152,7 +152,7 @@ static inline void gpio_free(unsigned gpio)
  * read-modify-write as well as those controlled by the EPORT and GPIO modules.
  */
 #define MCFGPIO_SCR_START	40
-#elif defined(CONFIGM5441x)
+#elif defined(CONFIG_M5441x)
 /* The m5441x EPORT doesn't have its own GPIO port, uses PORT C */
 #define MCFGPIO_SCR_START	0
 #else
--- a/arch/m68k/include/asm/mvme147hw.h
+++ b/arch/m68k/include/asm/mvme147hw.h
@@ -90,8 +90,8 @@ struct pcc_regs {
 #define M147_SCC_B_ADDR	0xfffe3000
 #define M147_SCC_PCLK	5000000
 
-#define MVME147_IRQ_SCSI_PORT	(IRQ_USER+0x45)
-#define MVME147_IRQ_SCSI_DMA	(IRQ_USER+0x46)
+#define MVME147_IRQ_SCSI_PORT	(IRQ_USER + 5)
+#define MVME147_IRQ_SCSI_DMA	(IRQ_USER + 6)
 
 /* SCC interrupts, for MVME147 */
 
--- a/arch/m68k/kernel/early_printk.c
+++ b/arch/m68k/kernel/early_printk.c
@@ -12,8 +12,9 @@
 #include <linux/string.h>
 #include <asm/setup.h>
 
-extern void mvme16x_cons_write(struct console *co,
-			       const char *str, unsigned count);
+#include "../mvme147/mvme147.h"
+#include "../mvme16x/mvme16x.h"
+
 
 asmlinkage void __init debug_cons_nputs(const char *s, unsigned n);
 
@@ -22,7 +23,9 @@ static void __ref debug_cons_write(struct console *c,
 {
 #if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
       defined(CONFIG_COLDFIRE))
-	if (MACH_IS_MVME16x)
+	if (MACH_IS_MVME147)
+		mvme147_scc_write(c, s, n);
+	else if (MACH_IS_MVME16x)
 		mvme16x_cons_write(c, s, n);
 	else
 		debug_cons_nputs(s, n);
--- a/arch/m68k/mvme147/config.c
+++ b/arch/m68k/mvme147/config.c
@@ -35,6 +35,7 @@
 #include <asm/machdep.h>
 #include <asm/mvme147hw.h>
 
+#include "mvme147.h"
 
 static void mvme147_get_model(char *model);
 extern void mvme147_sched_init(irq_handler_t handler);
@@ -164,3 +165,32 @@ int mvme147_hwclk(int op, struct rtc_time *t)
 	}
 	return 0;
 }
+
+static void scc_delay(void)
+{
+	__asm__ __volatile__ ("nop; nop;");
+}
+
+static void scc_write(char ch)
+{
+	do {
+		scc_delay();
+	} while (!(in_8(M147_SCC_A_ADDR) & BIT(2)));
+	scc_delay();
+	out_8(M147_SCC_A_ADDR, 8);
+	scc_delay();
+	out_8(M147_SCC_A_ADDR, ch);
+}
+
+void mvme147_scc_write(struct console *co, const char *str, unsigned int count)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	while (count--) {
+		if (*str == '\n')
+			scc_write('\r');
+		scc_write(*str++);
+	}
+	local_irq_restore(flags);
+}
new file mode 100644
--- /dev/null
+++ b/arch/m68k/mvme147/mvme147.h
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+struct console;
+
+/* config.c */
+void mvme147_scc_write(struct console *co, const char *str, unsigned int count);
--- a/arch/m68k/mvme16x/config.c
+++ b/arch/m68k/mvme16x/config.c
@@ -38,6 +38,8 @@
 #include <asm/machdep.h>
 #include <asm/mvme16xhw.h>
 
+#include "mvme16x.h"
+
 extern t_bdid mvme_bdid;
 
 static MK48T08ptr_t volatile rtc = (MK48T08ptr_t)MVME_RTC_BASE;
new file mode 100644
--- /dev/null
+++ b/arch/m68k/mvme16x/mvme16x.h
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+struct console;
+
+/* config.c */
+void mvme16x_cons_write(struct console *co, const char *str, unsigned count);
--- a/arch/powerpc/include/asm/sstep.h
+++ b/arch/powerpc/include/asm/sstep.h
@@ -164,9 +164,4 @@ extern int emulate_step(struct pt_regs *regs, unsigned int instr);
  */
 extern int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op);
 
-extern void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
-			     const void *mem, bool cross_endian);
-extern void emulate_vsx_store(struct instruction_op *op,
-			      const union vsx_reg *reg, void *mem,
-			      bool cross_endian);
 extern int emulate_dcbz(unsigned long ea, struct pt_regs *regs);
--- a/arch/powerpc/include/asm/vdso.h
+++ b/arch/powerpc/include/asm/vdso.h
@@ -49,6 +49,7 @@ int vdso_getcpu_init(void);
 
 #define V_FUNCTION_BEGIN(name)		\
 	.globl name;			\
+	.type name,@function;		\
 	name:				\
 
 #define V_FUNCTION_END(name)		\
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -667,7 +667,7 @@ static nokprobe_inline int emulate_stq(struct pt_regs *regs, unsigned long ea,
 #endif /* __powerpc64 */
 
 #ifdef CONFIG_VSX
-void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+static nokprobe_inline void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
 		      const void *mem, bool rev)
 {
 	int size, read_size;
@@ -748,10 +748,8 @@ void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
 		break;
 	}
 }
-EXPORT_SYMBOL_GPL(emulate_vsx_load);
-NOKPROBE_SYMBOL(emulate_vsx_load);
 
-void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
+static nokprobe_inline void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
 		       void *mem, bool rev)
 {
 	int size, write_size;
@@ -824,8 +822,6 @@ void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
 		break;
 	}
 }
-EXPORT_SYMBOL_GPL(emulate_vsx_store);
-NOKPROBE_SYMBOL(emulate_vsx_store);
 
 static nokprobe_inline int do_vsx_load(struct instruction_op *op,
 				       unsigned long ea, struct pt_regs *regs,
--- a/arch/s390/kernel/syscalls/Makefile
+++ b/arch/s390/kernel/syscalls/Makefile
@@ -12,7 +12,7 @@ kapi-hdrs-y := $(kapi)/unistd_nr.h
 uapi-hdrs-y := $(uapi)/unistd_32.h
 uapi-hdrs-y += $(uapi)/unistd_64.h
 
-targets += $(addprefix ../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
+targets += $(addprefix ../../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
 
 PHONY += kapi uapi
 
--- a/arch/sh/kernel/cpu/proc.c
+++ b/arch/sh/kernel/cpu/proc.c
@@ -133,7 +133,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 
 static void *c_start(struct seq_file *m, loff_t *pos)
 {
-	return *pos < NR_CPUS ? cpu_data + *pos : NULL;
+	return *pos < nr_cpu_ids ? cpu_data + *pos : NULL;
 }
 static void *c_next(struct seq_file *m, void *v, loff_t *pos)
 {
--- a/arch/um/drivers/net_kern.c
+++ b/arch/um/drivers/net_kern.c
@@ -349,7 +349,7 @@ static struct platform_driver uml_net_driver = {
 
 static void net_device_release(struct device *dev)
 {
-	struct uml_net *device = dev_get_drvdata(dev);
+	struct uml_net *device = container_of(dev, struct uml_net, pdev.dev);
 	struct net_device *netdev = device->dev;
 	struct uml_net_private *lp = netdev_priv(netdev);
 
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -854,7 +854,7 @@ static int ubd_open_dev(struct ubd *ubd_dev)
 
 static void ubd_device_release(struct device *dev)
 {
-	struct ubd *ubd_dev = dev_get_drvdata(dev);
+	struct ubd *ubd_dev = container_of(dev, struct ubd, pdev.dev);
 
 	blk_cleanup_queue(ubd_dev->queue);
 	*ubd_dev = ((struct ubd) DEFAULT_UBD);
--- a/arch/um/drivers/vector_kern.c
+++ b/arch/um/drivers/vector_kern.c
@@ -797,7 +797,8 @@ static struct platform_driver uml_net_driver = {
 
 static void vector_device_release(struct device *dev)
 {
-	struct vector_device *device = dev_get_drvdata(dev);
+	struct vector_device *device =
+		container_of(dev, struct vector_device, pdev.dev);
 	struct net_device *netdev = device->dev;
 
 	list_del(&device->list);
@@ -396,6 +396,6 @@ int elf_core_copy_fpregs(struct task_struct *t, elf_fpregset_t *fpu)
 {
 	int cpu = current_thread_info()->cpu;
 
-	return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu);
+	return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu) == 0;
 }
 
--- a/arch/x86/include/asm/amd_nb.h
+++ b/arch/x86/include/asm/amd_nb.h
@@ -115,7 +115,10 @@ static inline bool amd_gart_present(void)
 
 #define amd_nb_num(x)		0
 #define amd_nb_has_feature(x)	false
-#define node_to_amd_nb(x)	NULL
+static inline struct amd_northbridge *node_to_amd_nb(int node)
+{
+	return NULL;
+}
 #define amd_gart_present(x)	false
 
 #endif
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1544,6 +1544,12 @@ void blk_mq_start_stopped_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 		return;
 
 	clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
+	/*
+	 * Pairs with the smp_mb() in blk_mq_hctx_stopped() to order the
+	 * clearing of BLK_MQ_S_STOPPED above and the checking of dispatch
+	 * list in the subsequent routine.
+	 */
+	smp_mb__after_atomic();
 	blk_mq_run_hw_queue(hctx, async);
 }
 EXPORT_SYMBOL_GPL(blk_mq_start_stopped_hw_queue);
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -142,6 +142,19 @@ static inline struct blk_mq_tags *blk_mq_tags_from_data(struct blk_mq_alloc_data
 
 static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
 {
+	/* Fast path: hardware queue is not stopped most of the time. */
+	if (likely(!test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
+		return false;
+
+	/*
+	 * This barrier is used to order adding of dispatch list before and
+	 * the test of BLK_MQ_S_STOPPED below. Pairs with the memory barrier
+	 * in blk_mq_start_stopped_hw_queue() so that dispatch code could
+	 * either see BLK_MQ_S_STOPPED is cleared or dispatch list is not
+	 * empty to avoid missing dispatching requests.
+	 */
+	smp_mb();
+
 	return test_bit(BLK_MQ_S_STOPPED, &hctx->state);
 }
 
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -174,8 +174,10 @@ static int pcrypt_aead_encrypt(struct aead_request *req)
 	err = pcrypt_do_parallel(padata, &ctx->cb_cpu, &pencrypt);
 	if (!err)
 		return -EINPROGRESS;
-	if (err == -EBUSY)
-		return -EAGAIN;
+	if (err == -EBUSY) {
+		/* try non-parallel mode */
+		return crypto_aead_encrypt(creq);
+	}
 
 	return err;
 }
@@ -220,8 +222,10 @@ static int pcrypt_aead_decrypt(struct aead_request *req)
 	err = pcrypt_do_parallel(padata, &ctx->cb_cpu, &pdecrypt);
 	if (!err)
 		return -EINPROGRESS;
-	if (err == -EBUSY)
-		return -EAGAIN;
+	if (err == -EBUSY) {
+		/* try non-parallel mode */
+		return crypto_aead_decrypt(creq);
+	}
 
 	return err;
 }
--- a/drivers/acpi/arm64/gtdt.c
+++ b/drivers/acpi/arm64/gtdt.c
@@ -286,7 +286,7 @@ static int __init gtdt_parse_timer_block(struct acpi_gtdt_timer_block *block,
 		if (frame->virt_irq > 0)
 			acpi_unregister_gsi(gtdt_frame->virtual_timer_interrupt);
 		frame->virt_irq = 0;
-	} while (i-- >= 0 && gtdt_frame--);
+	} while (i-- > 0 && gtdt_frame--);
 
 	return -EINVAL;
 }
--- a/drivers/base/regmap/regmap-irq.c
+++ b/drivers/base/regmap/regmap-irq.c
@@ -397,12 +397,16 @@ static irqreturn_t regmap_irq_thread(int irq, void *d)
 	return IRQ_NONE;
 }
 
+static struct lock_class_key regmap_irq_lock_class;
+static struct lock_class_key regmap_irq_request_class;
+
 static int regmap_irq_map(struct irq_domain *h, unsigned int virq,
 			  irq_hw_number_t hw)
 {
 	struct regmap_irq_chip_data *data = h->host_data;
 
 	irq_set_chip_data(virq, data);
+	irq_set_lockdep_class(virq, &regmap_irq_lock_class, &regmap_irq_request_class);
 	irq_set_chip(virq, &data->irq_chip);
 	irq_set_nested_thread(virq, 1);
 	irq_set_parent(virq, data->irq);
--- a/drivers/clk/clk-axi-clkgen.c
+++ b/drivers/clk/clk-axi-clkgen.c
@@ -9,6 +9,7 @@
  */
 
 #include <linux/platform_device.h>
+#include <linux/clk.h>
 #include <linux/clk-provider.h>
 #include <linux/slab.h>
 #include <linux/io.h>
@@ -414,7 +415,7 @@ static int axi_clkgen_probe(struct platform_device *pdev)
 	struct clk_init_data init = {};
 	const char *parent_names[2];
 	const char *clk_name;
-	struct resource *mem;
+	struct clk *axi_clk;
 	unsigned int i;
 	int ret;
 
@@ -429,14 +430,29 @@ static int axi_clkgen_probe(struct platform_device *pdev)
 	if (!axi_clkgen)
 		return -ENOMEM;
 
-	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	axi_clkgen->base = devm_ioremap_resource(&pdev->dev, mem);
+	axi_clkgen->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(axi_clkgen->base))
 		return PTR_ERR(axi_clkgen->base);
 
 	init.num_parents = of_clk_get_parent_count(pdev->dev.of_node);
 
+	axi_clk = devm_clk_get_enabled(&pdev->dev, "s_axi_aclk");
+	if (!IS_ERR(axi_clk)) {
+		if (init.num_parents < 2 || init.num_parents > 3)
+			return -EINVAL;
+
+		init.num_parents -= 1;
+	} else {
+		/*
+		 * Legacy... So that old DTs which do not have clock-names still
+		 * work. In this case we don't explicitly enable the AXI bus
+		 * clock.
+		 */
+		if (PTR_ERR(axi_clk) != -ENOENT)
+			return PTR_ERR(axi_clk);
 	if (init.num_parents < 1 || init.num_parents > 2)
 		return -EINVAL;
+	}
 
 	for (i = 0; i < init.num_parents; i++) {
 		parent_names[i] = of_clk_get_parent_name(pdev->dev.of_node, i);
--- a/drivers/cpufreq/loongson2_cpufreq.c
+++ b/drivers/cpufreq/loongson2_cpufreq.c
@@ -166,7 +166,9 @@ static int __init cpufreq_init(void)
 
 	ret = cpufreq_register_driver(&loongson2_cpufreq_driver);
 
-	if (!ret && !nowait) {
+	if (ret) {
+		platform_driver_unregister(&platform_driver);
+	} else if (!nowait) {
 		saved_cpu_wait = cpu_wait;
 		cpu_wait = loongson2_cpu_wait;
 	}
@@ -2510,6 +2510,7 @@ static int ahash_hmac_setkey(struct crypto_ahash *ahash, const u8 *key,
 
 static int ahash_hmac_init(struct ahash_request *req)
 {
+	int ret;
 	struct iproc_reqctx_s *rctx = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct iproc_ctx_s *ctx = crypto_ahash_ctx(tfm);
@@ -2519,7 +2520,9 @@ static int ahash_hmac_init(struct ahash_request *req)
 	flow_log("ahash_hmac_init()\n");
 
 	/* init the context as a hash */
-	ahash_init(req);
+	ret = ahash_init(req);
+	if (ret)
+		return ret;
 
 	if (!spu_no_incr_hash(ctx)) {
 		/* SPU-M can do incr hashing but needs sw for outer HMAC */
@@ -48,7 +48,7 @@ static void cpt_disable_cores(struct cpt_device *cpt, u64 coremask,
 			dev_err(dev, "Cores still busy %llx", coremask);
 			grp = cpt_read_csr64(cpt->reg_base,
 					     CPTX_PF_EXEC_BUSY(0));
-			if (timeout--)
+			if (!timeout--)
 				break;
 
 			udelay(CSR_DELAY);
@@ -306,6 +306,8 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
 
 	ret = do_cpt_init(cpt, mcode);
 	if (ret) {
+		dma_free_coherent(&cpt->pdev->dev, mcode->code_size,
+				  mcode->code, mcode->phys_base);
 		dev_err(dev, "do_cpt_init failed with ret: %d\n", ret);
 		goto fw_release;
 	}
@@ -398,7 +400,7 @@ static void cpt_disable_all_cores(struct cpt_device *cpt)
 			dev_err(dev, "Cores still busy");
 			grp = cpt_read_csr64(cpt->reg_base,
 					     CPTX_PF_EXEC_BUSY(0));
-			if (timeout--)
+			if (!timeout--)
 				break;
 
 			udelay(CSR_DELAY);
@@ -327,21 +327,25 @@ static void fsl_mc_check(struct mem_ctl_info *mci)
 	 * TODO: Add support for 32-bit wide buses
 	 */
 	if ((err_detect & DDR_EDE_SBE) && (bus_width == 64)) {
+		u64 cap = (u64)cap_high << 32 | cap_low;
+		u32 s = syndrome;
+
 		sbe_ecc_decode(cap_high, cap_low, syndrome,
 			       &bad_data_bit, &bad_ecc_bit);
 
-		if (bad_data_bit != -1)
-			fsl_mc_printk(mci, KERN_ERR,
-				      "Faulty Data bit: %d\n", bad_data_bit);
-		if (bad_ecc_bit != -1)
-			fsl_mc_printk(mci, KERN_ERR,
-				      "Faulty ECC bit: %d\n", bad_ecc_bit);
+		if (bad_data_bit >= 0) {
+			fsl_mc_printk(mci, KERN_ERR, "Faulty Data bit: %d\n", bad_data_bit);
+			cap ^= 1ULL << bad_data_bit;
+		}
+
+		if (bad_ecc_bit >= 0) {
+			fsl_mc_printk(mci, KERN_ERR, "Faulty ECC bit: %d\n", bad_ecc_bit);
+			s ^= 1 << bad_ecc_bit;
+		}
 
 		fsl_mc_printk(mci, KERN_ERR,
 			      "Expected Data / ECC:\t%#8.8x_%08x / %#2.2x\n",
-			      cap_high ^ (1 << (bad_data_bit - 32)),
-			      cap_low ^ (1 << bad_data_bit),
-			      syndrome ^ (1 << bad_ecc_bit));
+			      upper_32_bits(cap), lower_32_bits(cap), s);
 	}
 
 	fsl_mc_printk(mci, KERN_ERR,
@@ -638,6 +638,9 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (!buf.opp_count)
+		return ERR_PTR(-ENOENT);
+
 	info = kmalloc(sizeof(*info), GFP_KERNEL);
 	if (!info)
 		return ERR_PTR(-ENOMEM);
@@ -164,7 +164,7 @@ static void show_leaks(struct drm_mm *mm) { }
 
 INTERVAL_TREE_DEFINE(struct drm_mm_node, rb,
 		     u64, __subtree_last,
-		     START, LAST, static inline, drm_mm_interval_tree)
+		     START, LAST, static inline __maybe_unused, drm_mm_interval_tree)
 
 struct drm_mm_node *
 __drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last)
@@ -108,17 +108,6 @@ static inline size_t size_vstruct(size_t nelem, size_t elem_size, size_t base)
 	return base + nelem * elem_size;
 }
 
-/* returns true if fence a comes after fence b */
-static inline bool fence_after(u32 a, u32 b)
-{
-	return (s32)(a - b) > 0;
-}
-
-static inline bool fence_after_eq(u32 a, u32 b)
-{
-	return (s32)(a - b) >= 0;
-}
-
 /*
  * Etnaviv timeouts are specified wrt CLOCK_MONOTONIC, not jiffies.
  * We need to calculate the timeout in terms of number of jiffies
@@ -73,7 +73,7 @@ static void etnaviv_core_dump_header(struct core_dump_iterator *iter,
 	hdr->file_size = cpu_to_le32(data_end - iter->data);
 
 	iter->hdr++;
-	iter->data += hdr->file_size;
+	iter->data += le32_to_cpu(hdr->file_size);
 }
 
 static void etnaviv_core_dump_registers(struct core_dump_iterator *iter,
@@ -81,10 +81,15 @@ static void etnaviv_core_dump_registers(struct core_dump_iterator *iter,
 {
 	struct etnaviv_dump_registers *reg = iter->data;
 	unsigned int i;
+	u32 read_addr;
 
 	for (i = 0; i < ARRAY_SIZE(etnaviv_dump_registers); i++, reg++) {
-		reg->reg = etnaviv_dump_registers[i];
-		reg->value = gpu_read(gpu, etnaviv_dump_registers[i]);
+		read_addr = etnaviv_dump_registers[i];
+		if (read_addr >= VIVS_PM_POWER_CONTROLS &&
+		    read_addr <= VIVS_PM_PULSE_EATER)
+			read_addr = gpu_fix_power_address(gpu, read_addr);
+		reg->reg = cpu_to_le32(etnaviv_dump_registers[i]);
+		reg->value = cpu_to_le32(gpu_read(gpu, read_addr));
 	}
 
 	etnaviv_core_dump_header(iter, ETDUMP_BUF_REG, reg);
@@ -220,7 +225,7 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
 		if (!IS_ERR(pages)) {
 			int j;
 
-			iter.hdr->data[0] = bomap - bomap_start;
+			iter.hdr->data[0] = cpu_to_le32((bomap - bomap_start));
 
 			for (j = 0; j < obj->base.size >> PAGE_SHIFT; j++)
 				*bomap++ = cpu_to_le64(page_to_phys(*pages++));
@@ -540,7 +540,7 @@ static void etnaviv_gpu_enable_mlcg(struct etnaviv_gpu *gpu)
 	u32 pmc, ppc;
 
 	/* enable clock gating */
-	ppc = gpu_read(gpu, VIVS_PM_POWER_CONTROLS);
+	ppc = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
 	ppc |= VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
 
 	/* Disable stall module clock gating for 4.3.0.1 and 4.3.0.2 revs */
@@ -548,9 +548,9 @@ static void etnaviv_gpu_enable_mlcg(struct etnaviv_gpu *gpu)
 	    gpu->identity.revision == 0x4302)
 		ppc |= VIVS_PM_POWER_CONTROLS_DISABLE_STALL_MODULE_CLOCK_GATING;
 
-	gpu_write(gpu, VIVS_PM_POWER_CONTROLS, ppc);
+	gpu_write_power(gpu, VIVS_PM_POWER_CONTROLS, ppc);
 
-	pmc = gpu_read(gpu, VIVS_PM_MODULE_CONTROLS);
+	pmc = gpu_read_power(gpu, VIVS_PM_MODULE_CONTROLS);
 
 	/* Disable PA clock gating for GC400+ without bugfix except for GC420 */
 	if (gpu->identity.model >= chipModel_GC400 &&
@@ -579,7 +579,7 @@ static void etnaviv_gpu_enable_mlcg(struct etnaviv_gpu *gpu)
 	pmc |= VIVS_PM_MODULE_CONTROLS_DISABLE_MODULE_CLOCK_GATING_RA_HZ;
 	pmc |= VIVS_PM_MODULE_CONTROLS_DISABLE_MODULE_CLOCK_GATING_RA_EZ;
 
-	gpu_write(gpu, VIVS_PM_MODULE_CONTROLS, pmc);
+	gpu_write_power(gpu, VIVS_PM_MODULE_CONTROLS, pmc);
 }
 
 void etnaviv_gpu_start_fe(struct etnaviv_gpu *gpu, u32 address, u16 prefetch)
@@ -620,11 +620,11 @@ static void etnaviv_gpu_setup_pulse_eater(struct etnaviv_gpu *gpu)
 	    (gpu->identity.features & chipFeatures_PIPE_3D))
 	{
 		/* Performance fix: disable internal DFS */
-		pulse_eater = gpu_read(gpu, VIVS_PM_PULSE_EATER);
+		pulse_eater = gpu_read_power(gpu, VIVS_PM_PULSE_EATER);
 		pulse_eater |= BIT(18);
 	}
 
-	gpu_write(gpu, VIVS_PM_PULSE_EATER, pulse_eater);
+	gpu_write_power(gpu, VIVS_PM_PULSE_EATER, pulse_eater);
 }
 
 static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu)
@@ -1038,7 +1038,7 @@ static bool etnaviv_fence_signaled(struct dma_fence *fence)
 {
 	struct etnaviv_fence *f = to_etnaviv_fence(fence);
 
-	return fence_completed(f->gpu, f->base.seqno);
+	return (s32)(f->gpu->completed_fence - f->base.seqno) >= 0;
 }
 
 static void etnaviv_fence_release(struct dma_fence *fence)
@@ -1077,6 +1077,12 @@ static struct dma_fence *etnaviv_gpu_fence_alloc(struct etnaviv_gpu *gpu)
 	return &f->base;
 }
 
+/* returns true if fence a comes after fence b */
+static inline bool fence_after(u32 a, u32 b)
+{
+	return (s32)(a - b) > 0;
+}
+
 /*
  * event management:
 */
@@ -1231,10 +1237,12 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu,
 {
 	u32 val;
 
+	mutex_lock(&gpu->lock);
+
 	/* disable clock gating */
-	val = gpu_read(gpu, VIVS_PM_POWER_CONTROLS);
+	val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
 	val &= ~VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
-	gpu_write(gpu, VIVS_PM_POWER_CONTROLS, val);
+	gpu_write_power(gpu, VIVS_PM_POWER_CONTROLS, val);
 
 	/* enable debug register */
 	val = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL);
@@ -1242,6 +1250,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu,
 	gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, val);
 
 	sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_PRE);
 
+	mutex_unlock(&gpu->lock);
 }
 
 static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
@@ -1251,23 +1261,27 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
 	unsigned int i;
 	u32 val;
 
+	mutex_lock(&gpu->lock);
+
 	sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST);
 
-	for (i = 0; i < submit->nr_pmrs; i++) {
-		const struct etnaviv_perfmon_request *pmr = submit->pmrs + i;
-
-		*pmr->bo_vma = pmr->sequence;
-	}
-
 	/* disable debug register */
 	val = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL);
 	val |= VIVS_HI_CLOCK_CONTROL_DISABLE_DEBUG_REGISTERS;
 	gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, val);
 
 	/* enable clock gating */
-	val = gpu_read(gpu, VIVS_PM_POWER_CONTROLS);
+	val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
 	val |= VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
-	gpu_write(gpu, VIVS_PM_POWER_CONTROLS, val);
+	gpu_write_power(gpu, VIVS_PM_POWER_CONTROLS, val);
 
+	mutex_unlock(&gpu->lock);
+
+	for (i = 0; i < submit->nr_pmrs; i++) {
+		const struct etnaviv_perfmon_request *pmr = submit->pmrs + i;
+
+		*pmr->bo_vma = pmr->sequence;
+	}
 }
 
@@ -11,6 +11,7 @@
 
 #include "etnaviv_cmdbuf.h"
 #include "etnaviv_drv.h"
+#include "common.xml.h"
 
 struct etnaviv_gem_submit;
 struct etnaviv_vram_mapping;
@@ -162,9 +163,24 @@ static inline u32 gpu_read(struct etnaviv_gpu *gpu, u32 reg)
 	return readl(gpu->mmio + reg);
 }
 
-static inline bool fence_completed(struct etnaviv_gpu *gpu, u32 fence)
+static inline u32 gpu_fix_power_address(struct etnaviv_gpu *gpu, u32 reg)
 {
-	return fence_after_eq(gpu->completed_fence, fence);
+	/* Power registers in GC300 < 2.0 are offset by 0x100 */
+	if (gpu->identity.model == chipModel_GC300 &&
+	    gpu->identity.revision < 0x2000)
+		reg += 0x100;
+
+	return reg;
+}
+
+static inline void gpu_write_power(struct etnaviv_gpu *gpu, u32 reg, u32 data)
+{
+	writel(data, gpu->mmio + gpu_fix_power_address(gpu, reg));
+}
+
+static inline u32 gpu_read_power(struct etnaviv_gpu *gpu, u32 reg)
+{
+	return readl(gpu->mmio + gpu_fix_power_address(gpu, reg));
 }
 
 int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, u32 param, u64 *value);
@@ -1253,8 +1253,6 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
 
 	omap_obj = to_omap_bo(obj);
 
-	mutex_lock(&omap_obj->lock);
-
 	omap_obj->sgt = sgt;
 
 	if (sgt->orig_nents == 1) {
@@ -1270,8 +1268,7 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
 		pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
 		if (!pages) {
 			omap_gem_free_object(obj);
-			obj = ERR_PTR(-ENOMEM);
-			goto done;
+			return ERR_PTR(-ENOMEM);
 		}
 
 		omap_obj->pages = pages;
@@ -1284,13 +1281,10 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
 
 		if (WARN_ON(i != npages)) {
 			omap_gem_free_object(obj);
-			obj = ERR_PTR(-ENOMEM);
-			goto done;
+			return ERR_PTR(-ENOMEM);
 		}
 	}
 
-done:
-	mutex_unlock(&omap_obj->lock);
 	return obj;
 }
 
@@ -1321,9 +1321,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
 			rotation -= 1800;
 
 		input_report_abs(pen_input, ABS_TILT_X,
-				 (char)frame[7]);
+				 (signed char)frame[7]);
 		input_report_abs(pen_input, ABS_TILT_Y,
-				 (char)frame[8]);
+				 (signed char)frame[8]);
 		input_report_abs(pen_input, ABS_Z, rotation);
 		input_report_abs(pen_input, ABS_WHEEL,
 				 get_unaligned_le16(&frame[11]));
@@ -3110,7 +3110,7 @@ static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *qp,
 	wc->byte_len = orig_cqe->length;
 	wc->qp = &qp1_qp->ib_qp;
 
-	wc->ex.imm_data = cpu_to_be32(le32_to_cpu(orig_cqe->immdata));
+	wc->ex.imm_data = cpu_to_be32(orig_cqe->immdata);
 	wc->src_qp = orig_cqe->src_qp;
 	memcpy(wc->smac, orig_cqe->smac, ETH_ALEN);
 	if (bnxt_re_is_vlan_pkt(orig_cqe, &vlan_id, &sl)) {
@@ -3231,7 +3231,10 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
 				continue;
 			}
 			wc->qp = &qp->ib_qp;
-			wc->ex.imm_data = cpu_to_be32(le32_to_cpu(cqe->immdata));
+			if (cqe->flags & CQ_RES_RC_FLAGS_IMM)
+				wc->ex.imm_data = cpu_to_be32(cqe->immdata);
+			else
+				wc->ex.invalidate_rkey = cqe->invrkey;
 			wc->src_qp = cqe->src_qp;
 			memcpy(wc->smac, cqe->smac, ETH_ALEN);
 			wc->port_num = 1;
@@ -349,7 +349,7 @@ struct bnxt_qplib_cqe {
 	u32 length;
 	u64 wr_id;
 	union {
-		__le32 immdata;
+		u32 immdata;
 		u32 invrkey;
 	};
 	u64 qp_handle;
@@ -544,6 +544,9 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
 	for (minor = 0; minor < MAX_DVB_MINORS; minor++)
 		if (dvb_minors[minor] == NULL)
 			break;
+#else
+	minor = nums2minor(adap->num, type, id);
+#endif
 	if (minor >= MAX_DVB_MINORS) {
 		if (new_node) {
 			list_del (&new_node->list_head);
@@ -557,17 +560,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
 		mutex_unlock(&dvbdev_register_lock);
 		return -EINVAL;
 	}
-#else
-	minor = nums2minor(adap->num, type, id);
-	if (minor >= MAX_DVB_MINORS) {
-		dvb_media_device_free(dvbdev);
-		list_del(&dvbdev->list_head);
-		kfree(dvbdev);
-		*pdvbdev = NULL;
-		mutex_unlock(&dvbdev_register_lock);
-		return ret;
-	}
-#endif
 	dvbdev->minor = minor;
 	dvb_minors[minor] = dvb_device_get(dvbdev);
 	up_write(&minor_rwsem);
@@ -472,11 +472,12 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
 				jiffies_to_msecs(FM_DRV_TX_TIMEOUT) / 1000);
 		return -ETIMEDOUT;
 	}
+	spin_lock_irqsave(&fmdev->resp_skb_lock, flags);
 	if (!fmdev->resp_skb) {
+		spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags);
 		fmerr("Response SKB is missing\n");
 		return -EFAULT;
 	}
-	spin_lock_irqsave(&fmdev->resp_skb_lock, flags);
 	skb = fmdev->resp_skb;
 	fmdev->resp_skb = NULL;
 	spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags);
@@ -4206,10 +4206,8 @@ mptsas_find_phyinfo_by_phys_disk_num(MPT_ADAPTER *ioc, u8 phys_disk_num,
 static void
 mptsas_reprobe_lun(struct scsi_device *sdev, void *data)
 {
-	int rc;
-
 	sdev->no_uld_attach = data ? 1 : 0;
-	rc = scsi_device_reprobe(sdev);
+	WARN_ON(scsi_device_reprobe(sdev));
 }
 
 static void
@@ -42,7 +42,7 @@ static int da9052_spi_probe(struct spi_device *spi)
 	spi_set_drvdata(spi, da9052);
 
 	config = da9052_regmap_config;
-	config.read_flag_mask = 1;
+	config.write_flag_mask = 1;
 	config.reg_bits = 7;
 	config.pad_bits = 1;
 	config.val_bits = 8;
@@ -85,8 +85,8 @@ static int rt5033_i2c_probe(struct i2c_client *i2c,
 	}
 	dev_info(&i2c->dev, "Device found Device ID: %04x\n", dev_id);
 
-	ret = regmap_add_irq_chip(rt5033->regmap, rt5033->irq,
-				  IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+	ret = devm_regmap_add_irq_chip(rt5033->dev, rt5033->regmap,
+				       rt5033->irq, IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
 				  0, &rt5033_irq_chip, &rt5033->irq_data);
 	if (ret) {
 		dev_err(&i2c->dev, "Failed to request IRQ %d: %d\n",
@@ -1163,7 +1163,7 @@ static int apds990x_probe(struct i2c_client *client,
 		err = chip->pdata->setup_resources();
 		if (err) {
 			err = -EINVAL;
-			goto fail3;
+			goto fail4;
 		}
 	}
 
@@ -1171,7 +1171,7 @@ static int apds990x_probe(struct i2c_client *client,
 				 apds990x_attribute_group);
 	if (err < 0) {
 		dev_err(&chip->client->dev, "Sysfs registration failed\n");
-		goto fail4;
+		goto fail5;
 	}
 
 	err = request_threaded_irq(client->irq, NULL,
@@ -1182,15 +1182,17 @@ static int apds990x_probe(struct i2c_client *client,
 	if (err) {
 		dev_err(&client->dev, "could not get IRQ %d\n",
 			client->irq);
-		goto fail5;
+		goto fail6;
 	}
 	return err;
-fail5:
+fail6:
 	sysfs_remove_group(&chip->client->dev.kobj,
 			   &apds990x_attribute_group[0]);
-fail4:
+fail5:
 	if (chip->pdata && chip->pdata->release_resources)
 		chip->pdata->release_resources();
+fail4:
+	pm_runtime_disable(&client->dev);
 fail3:
 	regulator_bulk_disable(ARRAY_SIZE(chip->regs), chip->regs);
 fail2:
@@ -2857,8 +2857,8 @@ static int dw_mci_init_slot(struct dw_mci *host)
 	if (host->use_dma == TRANS_MODE_IDMAC) {
 		mmc->max_segs = host->ring_size;
 		mmc->max_blk_size = 65535;
-		mmc->max_req_size = DW_MCI_DESC_DATA_LENGTH * host->ring_size;
-		mmc->max_seg_size = mmc->max_req_size;
+		mmc->max_seg_size = 0x1000;
+		mmc->max_req_size = mmc->max_seg_size * host->ring_size;
 		mmc->max_blk_count = mmc->max_req_size / 512;
 	} else if (host->use_dma == TRANS_MODE_EDMAC) {
 		mmc->max_segs = 64;
@@ -269,10 +269,6 @@ static int mmc_spi_response_get(struct mmc_spi_host *host,
 	u8 leftover = 0;
 	unsigned short rotator;
 	int i;
-	char tag[32];
-
-	snprintf(tag, sizeof(tag), " ... CMD%d response SPI_%s",
-		cmd->opcode, maptype(cmd));
 
 	/* Except for data block reads, the whole response will already
 	 * be stored in the scratch buffer. It's somewhere after the
@@ -422,8 +418,9 @@ static int mmc_spi_response_get(struct mmc_spi_host *host,
 	}
 
 	if (value < 0)
-		dev_dbg(&host->spi->dev, "%s: resp %04x %08x\n",
-			tag, cmd->resp[0], cmd->resp[1]);
+		dev_dbg(&host->spi->dev,
+			" ... CMD%d response SPI_%s: resp %04x %08x\n",
+			cmd->opcode, maptype(cmd), cmd->resp[0], cmd->resp[1]);
 
 	/* disable chipselect on errors and some success cases */
 	if (value >= 0 && cs_on)
@@ -365,7 +365,7 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
 	size = ALIGN(size, sizeof(s32));
 	size += (req->ecc.strength + 1) * sizeof(s32) * 3;
 
-	user = kzalloc(size, GFP_KERNEL);
+	user = devm_kzalloc(pmecc->dev, size, GFP_KERNEL);
 	if (!user)
 		return ERR_PTR(-ENOMEM);
 
@@ -411,12 +411,6 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
 }
 EXPORT_SYMBOL_GPL(atmel_pmecc_create_user);
 
-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user)
-{
-	kfree(user);
-}
-EXPORT_SYMBOL_GPL(atmel_pmecc_destroy_user);
-
 static int get_strength(struct atmel_pmecc_user *user)
 {
 	const int *strengths = user->pmecc->caps->strengths;
@@ -59,8 +59,6 @@ struct atmel_pmecc *devm_atmel_pmecc_get(struct device *dev);
 struct atmel_pmecc_user *
 atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
 			struct atmel_pmecc_user_req *req);
-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user);
-
 void atmel_pmecc_reset(struct atmel_pmecc *pmecc);
 int atmel_pmecc_enable(struct atmel_pmecc_user *user, int op);
 void atmel_pmecc_disable(struct atmel_pmecc_user *user);
@@ -1459,7 +1459,7 @@ static int scan_all(struct ubi_device *ubi, struct ubi_attach_info *ai,
|
|||||||
return err;
|
return err;
|
||||||
}
|
}
|
||||||
|
|
||||||
static struct ubi_attach_info *alloc_ai(void)
|
static struct ubi_attach_info *alloc_ai(const char *slab_name)
|
||||||
{
|
{
|
||||||
struct ubi_attach_info *ai;
|
struct ubi_attach_info *ai;
|
||||||
|
|
||||||
@@ -1473,7 +1473,7 @@ static struct ubi_attach_info *alloc_ai(void)
|
|||||||
INIT_LIST_HEAD(&ai->alien);
|
INIT_LIST_HEAD(&ai->alien);
|
||||||
INIT_LIST_HEAD(&ai->fastmap);
|
INIT_LIST_HEAD(&ai->fastmap);
|
||||||
ai->volumes = RB_ROOT;
|
ai->volumes = RB_ROOT;
|
||||||
ai->aeb_slab_cache = kmem_cache_create("ubi_aeb_slab_cache",
|
ai->aeb_slab_cache = kmem_cache_create(slab_name,
|
||||||
sizeof(struct ubi_ainf_peb),
|
sizeof(struct ubi_ainf_peb),
|
||||||
0, 0, NULL);
|
0, 0, NULL);
|
||||||
if (!ai->aeb_slab_cache) {
|
if (!ai->aeb_slab_cache) {
|
||||||
@@ -1503,7 +1503,7 @@ static int scan_fast(struct ubi_device *ubi, struct ubi_attach_info **ai)
|
|||||||
|
|
||||||
err = -ENOMEM;
|
err = -ENOMEM;
|
||||||
|
|
||||||
scan_ai = alloc_ai();
|
scan_ai = alloc_ai("ubi_aeb_slab_cache_fastmap");
|
||||||
if (!scan_ai)
|
if (!scan_ai)
|
||||||
goto out;
|
goto out;
|
||||||
|
|
||||||
@@ -1569,7 +1569,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
|
|||||||
int err;
|
int err;
|
||||||
struct ubi_attach_info *ai;
|
struct ubi_attach_info *ai;
|
||||||
|
|
||||||
ai = alloc_ai();
|
ai = alloc_ai("ubi_aeb_slab_cache");
|
||||||
if (!ai)
|
if (!ai)
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
|
|
||||||
@@ -1587,7 +1587,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
|
|||||||
if (err > 0 || mtd_is_eccerr(err)) {
|
if (err > 0 || mtd_is_eccerr(err)) {
|
||||||
if (err != UBI_NO_FASTMAP) {
|
if (err != UBI_NO_FASTMAP) {
|
||||||
destroy_ai(ai);
|
destroy_ai(ai);
|
||||||
ai = alloc_ai();
|
ai = alloc_ai("ubi_aeb_slab_cache");
|
||||||
if (!ai)
|
if (!ai)
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
|
|
||||||
@@ -1626,7 +1626,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
|
|||||||
if (ubi->fm && ubi_dbg_chk_fastmap(ubi)) {
|
if (ubi->fm && ubi_dbg_chk_fastmap(ubi)) {
|
||||||
struct ubi_attach_info *scan_ai;
|
struct ubi_attach_info *scan_ai;
|
||||||
|
|
||||||
scan_ai = alloc_ai();
|
scan_ai = alloc_ai("ubi_aeb_slab_cache_dbg_chk_fastmap");
|
||||||
if (!scan_ai) {
|
if (!scan_ai) {
|
||||||
err = -ENOMEM;
|
err = -ENOMEM;
|
||||||
goto out_wl;
|
goto out_wl;
|
||||||
@@ -810,7 +810,14 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
 			goto out_not_moved;
 		}
 		if (err == MOVE_RETRY) {
-			scrubbing = 1;
+			/*
+			 * For source PEB:
+			 * 1. The scrubbing is set for scrub type PEB, it will
+			 *    be put back into ubi->scrub list.
+			 * 2. Non-scrub type PEB will be put back into ubi->used
+			 *    list.
+			 */
+			keep = 1;
 			dst_leb_clean = 1;
 			goto out_not_moved;
 		}
@@ -17866,6 +17866,9 @@ static int tg3_init_one(struct pci_dev *pdev,
 	} else
 		persist_dma_mask = dma_mask = DMA_BIT_MASK(64);
 
+	if (tg3_asic_rev(tp) == ASIC_REV_57766)
+		persist_dma_mask = DMA_BIT_MASK(31);
+
 	/* Configure DMA attributes. */
 	if (dma_mask > DMA_BIT_MASK(32)) {
 		err = pci_set_dma_mask(pdev, dma_mask);
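The tg3 hunk above caps the persistent DMA mask at 31 bits on the 57766 ASIC. As a minimal user-space sketch of the mask arithmetic only: the `DMA_BIT_MASK` macro mirrors the kernel's definition, while `persist_mask_for_57766()` is a hypothetical helper, not a kernel function.

```c
#include <assert.h>
#include <stdint.h>

/* Same arithmetic as the kernel's DMA_BIT_MASK(n): lowest n bits set,
 * with the n == 64 case special-cased to avoid undefined shift. */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* Hypothetical helper: a 31-bit persistent mask keeps persistent
 * (coherent) allocations below the 2 GB boundary. */
static inline uint64_t persist_mask_for_57766(void)
{
	return DMA_BIT_MASK(31);
}
```

A 31-bit mask is 0x7fffffff, i.e. addresses strictly below 2 GB, which is tighter than the usual 32-bit streaming mask.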
@@ -1417,18 +1417,15 @@ static int pxa168_eth_probe(struct platform_device *pdev)
 
 	printk(KERN_NOTICE "PXA168 10/100 Ethernet Driver\n");
 
-	clk = devm_clk_get(&pdev->dev, NULL);
+	clk = devm_clk_get_enabled(&pdev->dev, NULL);
 	if (IS_ERR(clk)) {
-		dev_err(&pdev->dev, "Fast Ethernet failed to get clock\n");
+		dev_err(&pdev->dev, "Fast Ethernet failed to get and enable clock\n");
 		return -ENODEV;
 	}
-	clk_prepare_enable(clk);
 
 	dev = alloc_etherdev(sizeof(struct pxa168_eth_private));
-	if (!dev) {
-		err = -ENOMEM;
-		goto err_clk;
-	}
+	if (!dev)
+		return -ENOMEM;
 
 	platform_set_drvdata(pdev, dev);
 	pep = netdev_priv(dev);
@@ -1541,8 +1538,6 @@ static int pxa168_eth_probe(struct platform_device *pdev)
 	mdiobus_free(pep->smi_bus);
 err_netdev:
 	free_netdev(dev);
-err_clk:
-	clk_disable_unprepare(clk);
 	return err;
 }
 
@@ -346,6 +346,8 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
 	plat_dat->bsp_priv = dwmac;
 	plat_dat->fix_mac_speed = socfpga_dwmac_fix_mac_speed;
 
+	plat_dat->riwt_off = 1;
+
 	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
 	if (ret)
 		goto err_remove_config_dt;
@@ -1440,13 +1440,13 @@ static int lan78xx_set_wol(struct net_device *netdev,
 	struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
 	int ret;
 
+	if (wol->wolopts & ~WAKE_ALL)
+		return -EINVAL;
+
 	ret = usb_autopm_get_interface(dev->intf);
 	if (ret < 0)
 		return ret;
 
-	if (wol->wolopts & ~WAKE_ALL)
-		return -EINVAL;
-
 	pdata->wol = wol->wolopts;
 
 	device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts);
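The lan78xx_set_wol hunk moves the wolopts validation before usb_autopm_get_interface(), so an invalid request can no longer return while holding the autopm reference. A toy sketch of that validate-first ordering, assuming mock `autopm_get`/`autopm_put` counters in place of the real USB autosuspend API:

```c
#include <assert.h>

/* Hypothetical stand-ins for usb_autopm_get/put_interface(). */
static int pm_refs;
static int autopm_get(void) { pm_refs++; return 0; }
static void autopm_put(void) { pm_refs--; }

#define WAKE_ALL 0x3f

/* Validate-first ordering: an invalid request never touches pm_refs,
 * so there is no reference to leak on the error path. */
static int set_wol(unsigned int wolopts)
{
	if (wolopts & ~WAKE_ALL)
		return -22; /* -EINVAL */
	if (autopm_get() < 0)
		return -1;
	/* ... program wake-up bits ... */
	autopm_put();
	return 0;
}
```

With the original ordering, the `-EINVAL` return would leave `pm_refs` permanently incremented, which is exactly the leak the patch removes.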
@@ -2204,6 +2204,7 @@ static int lan78xx_phy_init(struct lan78xx_net *dev)
 	if (dev->chipid == ID_REV_CHIP_ID_7801_) {
 		if (phy_is_pseudo_fixed_link(phydev)) {
 			fixed_phy_unregister(phydev);
+			phy_device_free(phydev);
 		} else {
 			phy_unregister_fixup_for_uid(PHY_KSZ9031RNX,
 						     0xfffffff0);
@@ -3884,8 +3885,10 @@ static void lan78xx_disconnect(struct usb_interface *intf)
 
 	phy_disconnect(net->phydev);
 
-	if (phy_is_pseudo_fixed_link(phydev))
+	if (phy_is_pseudo_fixed_link(phydev)) {
 		fixed_phy_unregister(phydev);
+		phy_device_free(phydev);
+	}
 
 	unregister_netdev(net);
 
@@ -1045,6 +1045,7 @@ static const struct usb_device_id products[] = {
 		USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x581d, USB_CLASS_VENDOR_SPEC, 1, 7),
 		.driver_info = (unsigned long)&qmi_wwan_info,
 	},
+	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0122)},	/* Quectel RG650V */
 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0125)},	/* Quectel EC25, EC20 R2.0 Mini PCIe */
 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0306)},	/* Quectel EP06/EG06/EM06 */
 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0512)},	/* Quectel EG12/EM12 */
@@ -297,6 +297,9 @@ int htc_connect_service(struct htc_target *target,
 		return -ETIMEDOUT;
 	}
 
+	if (target->conn_rsp_epid < 0 || target->conn_rsp_epid >= ENDPOINT_MAX)
+		return -EINVAL;
+
 	*conn_rsp_epid = target->conn_rsp_epid;
 	return 0;
 err:
@@ -853,7 +853,7 @@ struct mwifiex_ietypes_chanstats {
 struct mwifiex_ie_types_wildcard_ssid_params {
 	struct mwifiex_ie_types_header header;
 	u8 max_ssid_length;
-	u8 ssid[1];
+	u8 ssid[];
 } __packed;
 
 #define TSF_DATA_SIZE            8
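The mwifiex hunk converts a one-element array to a C99 flexible array member, which changes what `sizeof` reports and therefore the allocation math. A simplified illustration (these two structs are cut-down stand-ins, not the real mwifiex layout, which also carries a header field):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct ssid_params_old {
	uint8_t max_ssid_length;
	uint8_t ssid[1];	/* old one-element-array idiom */
};

struct ssid_params_new {
	uint8_t max_ssid_length;
	uint8_t ssid[];		/* C99 flexible array member */
};

/* With a flexible array the trailing member contributes nothing to
 * sizeof, so the allocation is exactly header + payload. */
static size_t alloc_size_new(size_t ssid_len)
{
	return sizeof(struct ssid_params_new) + ssid_len;
}
```

The flexible-array form also lets compilers and runtime checkers (e.g. FORTIFY/UBSAN) reason correctly about the trailing buffer's bounds, which the `[1]` idiom defeats.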
@@ -802,11 +802,16 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 	bool write = nvme_is_write(cmd);
 	struct nvme_ns *ns = q->queuedata;
 	struct gendisk *disk = ns ? ns->disk : NULL;
+	bool supports_metadata = disk && blk_get_integrity(disk);
+	bool has_metadata = meta_buffer && meta_len;
 	struct request *req;
 	struct bio *bio = NULL;
 	void *meta = NULL;
 	int ret;
 
+	if (has_metadata && !supports_metadata)
+		return -EINVAL;
+
 	req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
@@ -821,7 +826,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 			goto out;
 		bio = req->bio;
 		bio->bi_disk = disk;
-		if (disk && meta_buffer && meta_len) {
+		if (has_metadata) {
 			meta = nvme_add_user_metadata(bio, meta_buffer, meta_len,
 					meta_seed, write);
 			if (IS_ERR(meta)) {
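The nvme-passthrough fix rejects a user-supplied metadata buffer up front when the target disk has no integrity profile, instead of silently ignoring it mid-request. A sketch of just the two-flag guard; `check_metadata` and its parameters are illustrative names, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mirrors the patch's guard: metadata may only be attached when the
 * target actually supports it, and rejecting early avoids tearing
 * down a half-built request later. */
static int check_metadata(bool disk_has_integrity, const void *meta_buffer,
			  unsigned int meta_len)
{
	bool supports_metadata = disk_has_integrity;
	bool has_metadata = meta_buffer && meta_len;

	if (has_metadata && !supports_metadata)
		return -22; /* -EINVAL */
	return 0;
}
```

Note that both the buffer pointer and a non-zero length are required before the request counts as "has metadata", matching the `meta_buffer && meta_len` test in the diff.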
@@ -135,11 +135,13 @@ int cpqhp_unconfigure_device(struct pci_func *func)
 static int PCI_RefinedAccessConfig(struct pci_bus *bus, unsigned int devfn, u8 offset, u32 *value)
 {
 	u32 vendID = 0;
+	int ret;
 
-	if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID) == -1)
-		return -1;
-	if (vendID == 0xffffffff)
-		return -1;
+	ret = pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID);
+	if (ret != PCIBIOS_SUCCESSFUL)
+		return PCIBIOS_DEVICE_NOT_FOUND;
+	if (PCI_POSSIBLE_ERROR(vendID))
+		return PCIBIOS_DEVICE_NOT_FOUND;
 	return pci_bus_read_config_dword(bus, devfn, offset, value);
 }
 
@@ -200,13 +202,15 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
 {
 	u16 tdevice;
 	u32 work;
+	int ret;
 	u8 tbus;
 
 	ctrl->pci_bus->number = bus_num;
 
 	for (tdevice = 0; tdevice < 0xFF; tdevice++) {
 		/* Scan for access first */
-		if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
+		ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work);
+		if (ret)
 			continue;
 		dbg("Looking for nonbridge bus_num %d dev_num %d\n", bus_num, tdevice);
 		/* Yep we got one. Not a bridge ? */
@@ -218,7 +222,8 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
 	}
 	for (tdevice = 0; tdevice < 0xFF; tdevice++) {
 		/* Scan for access first */
-		if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
+		ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work);
+		if (ret)
 			continue;
 		dbg("Looking for bridge bus_num %d dev_num %d\n", bus_num, tdevice);
 		/* Yep we got one. bridge ? */
@@ -251,7 +256,7 @@ static int PCI_GetBusDevHelper(struct controller *ctrl, u8 *bus_num, u8 *dev_num
 			*dev_num = tdevice;
 			ctrl->pci_bus->number = tbus;
 			pci_bus_read_config_dword(ctrl->pci_bus, *dev_num, PCI_VENDOR_ID, &work);
-			if (!nobridge || (work == 0xffffffff))
+			if (!nobridge || PCI_POSSIBLE_ERROR(work))
 				return 0;
 
 			dbg("bus_num %d devfn %d\n", *bus_num, *dev_num);
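The cpqhp hunks replace open-coded `== 0xffffffff` tests with `PCI_POSSIBLE_ERROR()`. The idea, sketched in user space (the macro mirrors the spirit of the kernel's definition; `vendor_id_valid` is a hypothetical helper):

```c
#include <assert.h>
#include <stdint.h>

/* A PCI config read that nothing answers completes with all bits set
 * (master abort), so an all-ones value is a possible error, never a
 * real vendor ID. */
#define PCI_ERROR_RESPONSE	(~0U)
#define PCI_POSSIBLE_ERROR(val)	((val) == PCI_ERROR_RESPONSE)

/* Hypothetical helper showing the patched check in isolation. */
static int vendor_id_valid(uint32_t vend_id)
{
	return !PCI_POSSIBLE_ERROR(vend_id);
}
```

Using the named predicate (and `PCIBIOS_DEVICE_NOT_FOUND` instead of a bare `-1`) makes the intent explicit and keeps the return values in the `PCIBIOS_*` error space the config accessors already use.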
@@ -115,6 +115,7 @@ static void pci_slot_release(struct kobject *kobj)
 	up_read(&pci_bus_sem);
 
 	list_del(&slot->list);
+	pci_bus_put(slot->bus);
 
 	kfree(slot);
 }
@@ -296,7 +297,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
 		goto err;
 	}
 
-	slot->bus = parent;
+	slot->bus = pci_bus_get(parent);
 	slot->number = slot_nr;
 
 	slot->kobj.kset = pci_slots_kset;
@@ -304,6 +305,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
 	slot_name = make_slot_name(name);
 	if (!slot_name) {
 		err = -ENOMEM;
+		pci_bus_put(slot->bus);
 		kfree(slot);
 		goto err;
 	}
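The pci_slot hunks make the slot hold a real reference on its parent bus: `pci_bus_get()` at creation, `pci_bus_put()` both on the allocation-failure path and in the final release. A toy model of that pairing, assuming a mock refcount in place of the real `pci_bus_get`/`pci_bus_put`:

```c
#include <assert.h>

/* Toy refcount standing in for the pci_bus_get()/pci_bus_put() pair. */
static int bus_refs;
static void bus_get(void) { bus_refs++; }
static void bus_put(void) { bus_refs--; }

/* Mirrors the patched flow: take the reference when the slot is
 * created, and drop it on the error path as well as on release,
 * so every get is balanced by exactly one put. */
static int create_slot(int fail_name_alloc)
{
	bus_get();
	if (fail_name_alloc) {
		bus_put();	/* error path must balance the get */
		return -12;	/* -ENOMEM */
	}
	return 0;
}

static void release_slot(void)
{
	bus_put();	/* as in pci_slot_release() */
}
```

The invariant to check is that `bus_refs` returns to zero on both the error and the normal create/release paths; a missed put on either path would keep the bus object alive forever.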
@@ -482,8 +482,6 @@ EXPORT_SYMBOL_GPL(power_supply_get_by_name);
  */
 void power_supply_put(struct power_supply *psy)
 {
-	might_sleep();
-
 	atomic_dec(&psy->use_cnt);
 	put_device(&psy->dev);
 }
@@ -92,6 +92,8 @@ struct glink_core_rx_intent {
  * @rcids:	idr of all channels with a known remote channel id
  * @features:	remote features
  * @intentless:	flag to indicate that there is no intent
+ * @tx_avail_notify: Waitqueue for pending tx tasks
+ * @sent_read_notify: flag to check cmd sent or not
  */
 struct qcom_glink {
 	struct device *dev;
@@ -118,6 +120,8 @@ struct qcom_glink {
 	unsigned long features;
 
 	bool intentless;
+	wait_queue_head_t tx_avail_notify;
+	bool sent_read_notify;
 };
 
 enum {
@@ -187,20 +191,20 @@ struct glink_channel {
 
 static const struct rpmsg_endpoint_ops glink_endpoint_ops;
 
-#define RPM_CMD_VERSION			0
-#define RPM_CMD_VERSION_ACK		1
-#define RPM_CMD_OPEN			2
-#define RPM_CMD_CLOSE			3
-#define RPM_CMD_OPEN_ACK		4
-#define RPM_CMD_INTENT			5
-#define RPM_CMD_RX_DONE			6
-#define RPM_CMD_RX_INTENT_REQ		7
-#define RPM_CMD_RX_INTENT_REQ_ACK	8
-#define RPM_CMD_TX_DATA			9
-#define RPM_CMD_CLOSE_ACK		11
-#define RPM_CMD_TX_DATA_CONT		12
-#define RPM_CMD_READ_NOTIF		13
-#define RPM_CMD_RX_DONE_W_REUSE		14
+#define GLINK_CMD_VERSION		0
+#define GLINK_CMD_VERSION_ACK		1
+#define GLINK_CMD_OPEN			2
+#define GLINK_CMD_CLOSE			3
+#define GLINK_CMD_OPEN_ACK		4
+#define GLINK_CMD_INTENT		5
+#define GLINK_CMD_RX_DONE		6
+#define GLINK_CMD_RX_INTENT_REQ		7
+#define GLINK_CMD_RX_INTENT_REQ_ACK	8
+#define GLINK_CMD_TX_DATA		9
+#define GLINK_CMD_CLOSE_ACK		11
+#define GLINK_CMD_TX_DATA_CONT		12
+#define GLINK_CMD_READ_NOTIF		13
+#define GLINK_CMD_RX_DONE_W_REUSE	14
 
 #define GLINK_FEATURE_INTENTLESS	BIT(1)
 
@@ -305,6 +309,20 @@ static void qcom_glink_tx_write(struct qcom_glink *glink,
 	glink->tx_pipe->write(glink->tx_pipe, hdr, hlen, data, dlen);
 }
 
+static void qcom_glink_send_read_notify(struct qcom_glink *glink)
+{
+	struct glink_msg msg;
+
+	msg.cmd = cpu_to_le16(GLINK_CMD_READ_NOTIF);
+	msg.param1 = 0;
+	msg.param2 = 0;
+
+	qcom_glink_tx_write(glink, &msg, sizeof(msg), NULL, 0);
+
+	mbox_send_message(glink->mbox_chan, NULL);
+	mbox_client_txdone(glink->mbox_chan, 0);
+}
+
 static int qcom_glink_tx(struct qcom_glink *glink,
 			 const void *hdr, size_t hlen,
 			 const void *data, size_t dlen, bool wait)
@@ -325,12 +343,21 @@ static int qcom_glink_tx(struct qcom_glink *glink,
 			goto out;
 		}
 
+		if (!glink->sent_read_notify) {
+			glink->sent_read_notify = true;
+			qcom_glink_send_read_notify(glink);
+		}
+
 		/* Wait without holding the tx_lock */
 		spin_unlock_irqrestore(&glink->tx_lock, flags);
 
-		usleep_range(10000, 15000);
+		wait_event_timeout(glink->tx_avail_notify,
+				   qcom_glink_tx_avail(glink) >= tlen, 10 * HZ);
 
 		spin_lock_irqsave(&glink->tx_lock, flags);
+
+		if (qcom_glink_tx_avail(glink) >= tlen)
+			glink->sent_read_notify = false;
 	}
 
 	qcom_glink_tx_write(glink, hdr, hlen, data, dlen);
@@ -348,7 +375,7 @@ static int qcom_glink_send_version(struct qcom_glink *glink)
 {
 	struct glink_msg msg;
 
-	msg.cmd = cpu_to_le16(RPM_CMD_VERSION);
+	msg.cmd = cpu_to_le16(GLINK_CMD_VERSION);
 	msg.param1 = cpu_to_le16(GLINK_VERSION_1);
 	msg.param2 = cpu_to_le32(glink->features);
 
@@ -359,7 +386,7 @@ static void qcom_glink_send_version_ack(struct qcom_glink *glink)
 {
 	struct glink_msg msg;
 
-	msg.cmd = cpu_to_le16(RPM_CMD_VERSION_ACK);
+	msg.cmd = cpu_to_le16(GLINK_CMD_VERSION_ACK);
 	msg.param1 = cpu_to_le16(GLINK_VERSION_1);
 	msg.param2 = cpu_to_le32(glink->features);
 
@@ -371,7 +398,7 @@ static void qcom_glink_send_open_ack(struct qcom_glink *glink,
 {
 	struct glink_msg msg;
 
-	msg.cmd = cpu_to_le16(RPM_CMD_OPEN_ACK);
+	msg.cmd = cpu_to_le16(GLINK_CMD_OPEN_ACK);
 	msg.param1 = cpu_to_le16(channel->rcid);
 	msg.param2 = cpu_to_le32(0);
 
@@ -397,11 +424,11 @@ static void qcom_glink_handle_intent_req_ack(struct qcom_glink *glink,
 }
 
 /**
- * qcom_glink_send_open_req() - send a RPM_CMD_OPEN request to the remote
+ * qcom_glink_send_open_req() - send a GLINK_CMD_OPEN request to the remote
  * @glink: Ptr to the glink edge
  * @channel: Ptr to the channel that the open req is sent
  *
- * Allocates a local channel id and sends a RPM_CMD_OPEN message to the remote.
+ * Allocates a local channel id and sends a GLINK_CMD_OPEN message to the remote.
  * Will return with refcount held, regardless of outcome.
  *
  * Returns 0 on success, negative errno otherwise.
@@ -430,7 +457,7 @@ static int qcom_glink_send_open_req(struct qcom_glink *glink,
 
 	channel->lcid = ret;
 
-	req.msg.cmd = cpu_to_le16(RPM_CMD_OPEN);
+	req.msg.cmd = cpu_to_le16(GLINK_CMD_OPEN);
 	req.msg.param1 = cpu_to_le16(channel->lcid);
 	req.msg.param2 = cpu_to_le32(name_len);
 	strcpy(req.name, channel->name);
@@ -455,7 +482,7 @@ static void qcom_glink_send_close_req(struct qcom_glink *glink,
 {
 	struct glink_msg req;
 
-	req.cmd = cpu_to_le16(RPM_CMD_CLOSE);
+	req.cmd = cpu_to_le16(GLINK_CMD_CLOSE);
 	req.param1 = cpu_to_le16(channel->lcid);
 	req.param2 = 0;
 
@@ -467,7 +494,7 @@ static void qcom_glink_send_close_ack(struct qcom_glink *glink,
 {
 	struct glink_msg req;
 
-	req.cmd = cpu_to_le16(RPM_CMD_CLOSE_ACK);
+	req.cmd = cpu_to_le16(GLINK_CMD_CLOSE_ACK);
 	req.param1 = cpu_to_le16(rcid);
 	req.param2 = 0;
 
@@ -498,7 +525,7 @@ static void qcom_glink_rx_done_work(struct work_struct *work)
 	iid = intent->id;
 	reuse = intent->reuse;
 
-	cmd.id = reuse ? RPM_CMD_RX_DONE_W_REUSE : RPM_CMD_RX_DONE;
+	cmd.id = reuse ? GLINK_CMD_RX_DONE_W_REUSE : GLINK_CMD_RX_DONE;
 	cmd.lcid = cid;
 	cmd.liid = iid;
 
@@ -610,7 +637,7 @@ static int qcom_glink_send_intent_req_ack(struct qcom_glink *glink,
 {
 	struct glink_msg msg;
 
-	msg.cmd = cpu_to_le16(RPM_CMD_RX_INTENT_REQ_ACK);
+	msg.cmd = cpu_to_le16(GLINK_CMD_RX_INTENT_REQ_ACK);
 	msg.param1 = cpu_to_le16(channel->lcid);
 	msg.param2 = cpu_to_le32(granted);
 
@@ -641,7 +668,7 @@ static int qcom_glink_advertise_intent(struct qcom_glink *glink,
 	} __packed;
 	struct command cmd;
 
-	cmd.id = cpu_to_le16(RPM_CMD_INTENT);
+	cmd.id = cpu_to_le16(GLINK_CMD_INTENT);
 	cmd.lcid = cpu_to_le16(channel->lcid);
 	cmd.count = cpu_to_le32(1);
 	cmd.size = cpu_to_le32(intent->size);
@@ -991,6 +1018,9 @@ static irqreturn_t qcom_glink_native_intr(int irq, void *data)
 	unsigned int cmd;
 	int ret = 0;
 
+	/* To wakeup any blocking writers */
+	wake_up_all(&glink->tx_avail_notify);
+
 	for (;;) {
 		avail = qcom_glink_rx_avail(glink);
 		if (avail < sizeof(msg))
@@ -1003,42 +1033,43 @@ static irqreturn_t qcom_glink_native_intr(int irq, void *data)
 		param2 = le32_to_cpu(msg.param2);
 
 		switch (cmd) {
-		case RPM_CMD_VERSION:
-		case RPM_CMD_VERSION_ACK:
-		case RPM_CMD_CLOSE:
-		case RPM_CMD_CLOSE_ACK:
-		case RPM_CMD_RX_INTENT_REQ:
+		case GLINK_CMD_VERSION:
+		case GLINK_CMD_VERSION_ACK:
+		case GLINK_CMD_CLOSE:
+		case GLINK_CMD_CLOSE_ACK:
+		case GLINK_CMD_RX_INTENT_REQ:
 			ret = qcom_glink_rx_defer(glink, 0);
 			break;
-		case RPM_CMD_OPEN_ACK:
+		case GLINK_CMD_OPEN_ACK:
 			ret = qcom_glink_rx_open_ack(glink, param1);
 			qcom_glink_rx_advance(glink, ALIGN(sizeof(msg), 8));
 			break;
-		case RPM_CMD_OPEN:
-			ret = qcom_glink_rx_defer(glink, param2);
+		case GLINK_CMD_OPEN:
+			/* upper 16 bits of param2 are the "prio" field */
+			ret = qcom_glink_rx_defer(glink, param2 & 0xffff);
 			break;
-		case RPM_CMD_TX_DATA:
-		case RPM_CMD_TX_DATA_CONT:
+		case GLINK_CMD_TX_DATA:
+		case GLINK_CMD_TX_DATA_CONT:
 			ret = qcom_glink_rx_data(glink, avail);
 			break;
-		case RPM_CMD_READ_NOTIF:
+		case GLINK_CMD_READ_NOTIF:
 			qcom_glink_rx_advance(glink, ALIGN(sizeof(msg), 8));
 
 			mbox_send_message(glink->mbox_chan, NULL);
 			mbox_client_txdone(glink->mbox_chan, 0);
 			break;
-		case RPM_CMD_INTENT:
+		case GLINK_CMD_INTENT:
 			qcom_glink_handle_intent(glink, param1, param2, avail);
 			break;
-		case RPM_CMD_RX_DONE:
+		case GLINK_CMD_RX_DONE:
 			qcom_glink_handle_rx_done(glink, param1, param2, false);
 			qcom_glink_rx_advance(glink, ALIGN(sizeof(msg), 8));
 			break;
-		case RPM_CMD_RX_DONE_W_REUSE:
+		case GLINK_CMD_RX_DONE_W_REUSE:
 			qcom_glink_handle_rx_done(glink, param1, param2, true);
 			qcom_glink_rx_advance(glink, ALIGN(sizeof(msg), 8));
 			break;
-		case RPM_CMD_RX_INTENT_REQ_ACK:
+		case GLINK_CMD_RX_INTENT_REQ_ACK:
 			qcom_glink_handle_intent_req_ack(glink, param1, param2);
 			qcom_glink_rx_advance(glink, ALIGN(sizeof(msg), 8));
 			break;
@@ -1241,7 +1272,7 @@ static int qcom_glink_request_intent(struct qcom_glink *glink,
 
 	reinit_completion(&channel->intent_req_comp);
 
-	cmd.id = RPM_CMD_RX_INTENT_REQ;
+	cmd.id = GLINK_CMD_RX_INTENT_REQ;
 	cmd.cid = channel->lcid;
 	cmd.size = size;
 
@@ -1276,6 +1307,8 @@ static int __qcom_glink_send(struct glink_channel *channel,
 	} __packed req;
 	int ret;
 	unsigned long flags;
+	int chunk_size = len;
+	int left_size = 0;
 
 	if (!glink->intentless) {
 		while (!intent) {
@@ -1309,18 +1342,48 @@ static int __qcom_glink_send(struct glink_channel *channel,
 		iid = intent->id;
 	}
 
-	req.msg.cmd = cpu_to_le16(RPM_CMD_TX_DATA);
+	if (wait && chunk_size > SZ_8K) {
+		chunk_size = SZ_8K;
+		left_size = len - chunk_size;
+	}
+	req.msg.cmd = cpu_to_le16(GLINK_CMD_TX_DATA);
 	req.msg.param1 = cpu_to_le16(channel->lcid);
 	req.msg.param2 = cpu_to_le32(iid);
-	req.chunk_size = cpu_to_le32(len);
-	req.left_size = cpu_to_le32(0);
+	req.chunk_size = cpu_to_le32(chunk_size);
+	req.left_size = cpu_to_le32(left_size);
 
-	ret = qcom_glink_tx(glink, &req, sizeof(req), data, len, wait);
+	ret = qcom_glink_tx(glink, &req, sizeof(req), data, chunk_size, wait);
 
 	/* Mark intent available if we failed */
-	if (ret && intent)
+	if (ret) {
+		if (intent)
 			intent->in_use = false;
+		return ret;
+	}
+
+	while (left_size > 0) {
+		data = (void *)((char *)data + chunk_size);
+		chunk_size = left_size;
+		if (chunk_size > SZ_8K)
+			chunk_size = SZ_8K;
+		left_size -= chunk_size;
+
+		req.msg.cmd = cpu_to_le16(GLINK_CMD_TX_DATA_CONT);
+		req.msg.param1 = cpu_to_le16(channel->lcid);
+		req.msg.param2 = cpu_to_le32(iid);
+		req.chunk_size = cpu_to_le32(chunk_size);
+		req.left_size = cpu_to_le32(left_size);
+
+		ret = qcom_glink_tx(glink, &req, sizeof(req), data,
+				    chunk_size, wait);
+
+		/* Mark intent available if we failed */
+		if (ret) {
+			if (intent)
+				intent->in_use = false;
+			break;
+		}
+	}
 	return ret;
 }
 
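The __qcom_glink_send change splits large blocking sends into a first TX_DATA chunk of at most 8 KiB followed by TX_DATA_CONT chunks, walking the buffer with `chunk_size`/`left_size` until `left_size` reaches zero. A user-space sketch of just that walk (`count_chunks` is an illustrative helper; the real function also gates chunking on the `wait` flag and fills in the message headers):

```c
#include <assert.h>
#include <stddef.h>

#define SZ_8K 8192

/* Walks a payload the way the patched __qcom_glink_send() does:
 * one initial chunk capped at 8 KiB, then continuation chunks
 * until nothing is left.  Returns the number of chunks sent. */
static int count_chunks(size_t len)
{
	size_t chunk_size = len, left_size = 0;
	int chunks = 1;	/* the initial TX_DATA chunk */

	if (chunk_size > SZ_8K) {
		chunk_size = SZ_8K;
		left_size = len - chunk_size;
	}
	while (left_size > 0) {	/* TX_DATA_CONT chunks */
		chunk_size = left_size > SZ_8K ? SZ_8K : left_size;
		left_size -= chunk_size;
		chunks++;
	}
	return chunks;
}
```

Because every message carries both `chunk_size` and the remaining `left_size`, the receiver can reassemble the payload and knows when the final continuation has arrived.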
@@ -1500,6 +1563,9 @@ static void qcom_glink_rx_close_ack(struct qcom_glink *glink, unsigned int lcid)
 	struct glink_channel *channel;
 	unsigned long flags;
 
+	/* To wakeup any blocking writers */
+	wake_up_all(&glink->tx_avail_notify);
+
 	spin_lock_irqsave(&glink->idr_lock, flags);
 	channel = idr_find(&glink->lcids, lcid);
 	if (WARN(!channel, "close ack on unknown channel\n")) {
@@ -1542,22 +1608,22 @@ static void qcom_glink_work(struct work_struct *work)
 		param2 = le32_to_cpu(msg->param2);
 
 		switch (cmd) {
-		case RPM_CMD_VERSION:
+		case GLINK_CMD_VERSION:
 			qcom_glink_receive_version(glink, param1, param2);
 			break;
-		case RPM_CMD_VERSION_ACK:
+		case GLINK_CMD_VERSION_ACK:
 			qcom_glink_receive_version_ack(glink, param1, param2);
 			break;
-		case RPM_CMD_OPEN:
+		case GLINK_CMD_OPEN:
 			qcom_glink_rx_open(glink, param1, msg->data);
 			break;
-		case RPM_CMD_CLOSE:
+		case GLINK_CMD_CLOSE:
 			qcom_glink_rx_close(glink, param1);
 			break;
-		case RPM_CMD_CLOSE_ACK:
+		case GLINK_CMD_CLOSE_ACK:
 			qcom_glink_rx_close_ack(glink, param1);
 			break;
-		case RPM_CMD_RX_INTENT_REQ:
+		case GLINK_CMD_RX_INTENT_REQ:
 			qcom_glink_handle_intent_req(glink, param1, param2);
 			break;
 		default:
@@ -1606,6 +1672,7 @@ struct qcom_glink *qcom_glink_native_probe(struct device *dev,
 	spin_lock_init(&glink->rx_lock);
 	INIT_LIST_HEAD(&glink->rx_queue);
 	INIT_WORK(&glink->rx_work, qcom_glink_work);
+	init_waitqueue_head(&glink->tx_avail_notify);
 
 	spin_lock_init(&glink->idr_lock);
 	idr_init(&glink->lcids);
|
|||||||
@@ -914,13 +914,18 @@ void rtc_timer_do_work(struct work_struct *work)
 	struct timerqueue_node *next;
 	ktime_t now;
 	struct rtc_time tm;
+	int err;
 
 	struct rtc_device *rtc =
 		container_of(work, struct rtc_device, irqwork);
 
 	mutex_lock(&rtc->ops_lock);
 again:
-	__rtc_read_time(rtc, &tm);
+	err = __rtc_read_time(rtc, &tm);
+	if (err) {
+		mutex_unlock(&rtc->ops_lock);
+		return;
+	}
 	now = rtc_tm_to_ktime(tm);
 	while ((next = timerqueue_getnext(&rtc->timerqueue))) {
 		if (next->expires > now)
@@ -1711,9 +1711,8 @@ bfad_init(void)
 
 	error = bfad_im_module_init();
 	if (error) {
-		error = -ENOMEM;
 		printk(KERN_WARNING "bfad_im_module_init failure\n");
-		goto ext;
+		return -ENOMEM;
 	}
 
 	if (strcmp(FCPI_NAME, " fcpim") == 0)
@@ -357,6 +357,7 @@ static int qedi_alloc_and_init_sb(struct qedi_ctx *qedi,
 	ret = qedi_ops->common->sb_init(qedi->cdev, sb_info, sb_virt, sb_phys,
 				       sb_id, QED_SB_TYPE_STORAGE);
 	if (ret) {
+		dma_free_coherent(&qedi->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys);
 		QEDI_ERR(&qedi->dbg_ctx,
 			 "Status block initialization failed for id = %d.\n",
 			 sb_id);
@@ -194,7 +194,6 @@ int __init register_intc_controller(struct intc_desc *desc)
 		goto err0;
 
 	INIT_LIST_HEAD(&d->list);
-	list_add_tail(&d->list, &intc_list);
 
 	raw_spin_lock_init(&d->lock);
 	INIT_RADIX_TREE(&d->tree, GFP_ATOMIC);
@@ -380,6 +379,7 @@ int __init register_intc_controller(struct intc_desc *desc)
 
 	d->skip_suspend = desc->skip_syscore_suspend;
 
+	list_add_tail(&d->list, &intc_list);
 	nr_intc_controllers++;
 
 	return 0;
@@ -542,7 +542,8 @@ int geni_se_clk_tbl_get(struct geni_se *se, unsigned long **tbl)
 
 	for (i = 0; i < MAX_CLK_PERF_LEVEL; i++) {
 		freq = clk_round_rate(se->clk, freq + 1);
-		if (freq <= 0 || freq == se->clk_perf_tbl[i - 1])
+		if (freq <= 0 ||
+		    (i > 0 && freq == se->clk_perf_tbl[i - 1]))
 			break;
 		se->clk_perf_tbl[i] = freq;
 	}
@@ -358,6 +358,16 @@ static int spi_drv_probe(struct device *dev)
 			spi->irq = 0;
 	}
 
+	if (has_acpi_companion(dev) && spi->irq < 0) {
+		struct acpi_device *adev = to_acpi_device_node(dev->fwnode);
+
+		spi->irq = acpi_dev_gpio_irq_get(adev, 0);
+		if (spi->irq == -EPROBE_DEFER)
+			return -EPROBE_DEFER;
+		if (spi->irq < 0)
+			spi->irq = 0;
+	}
+
 	ret = dev_pm_domain_attach(dev, true);
 	if (ret)
 		return ret;
@@ -1843,9 +1853,6 @@ static acpi_status acpi_register_spi_device(struct spi_controller *ctlr,
 	acpi_set_modalias(adev, acpi_device_hid(adev), spi->modalias,
 			  sizeof(spi->modalias));
 
-	if (spi->irq < 0)
-		spi->irq = acpi_dev_gpio_irq_get(adev, 0);
-
 	acpi_device_set_enumerated(adev);
 
 	adev->power.flags.ignore_parent = true;
@@ -643,12 +643,12 @@ static void omap_8250_shutdown(struct uart_port *port)
 	struct uart_8250_port *up = up_to_u8250p(port);
 	struct omap8250_priv *priv = port->private_data;
 
+	pm_runtime_get_sync(port->dev);
+
 	flush_work(&priv->qos_work);
 	if (up->dma)
 		omap_8250_rx_dma_flush(up);
 
-	pm_runtime_get_sync(port->dev);
-
 	serial_out(up, UART_OMAP_WER, 0);
 
 	up->ier = 0;
@@ -854,7 +854,7 @@ static struct ctl_table tty_table[] = {
 		.data		= &tty_ldisc_autoload,
 		.maxlen		= sizeof(tty_ldisc_autoload),
 		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= proc_dointvec_minmax,
 		.extra1		= &zero,
 		.extra2		= &one,
 	},
@@ -903,11 +903,14 @@ static u32 dwc3_calc_trbs_left(struct dwc3_ep *dep)
 	 * pending to be processed by the driver.
 	 */
 	if (dep->trb_enqueue == dep->trb_dequeue) {
+		struct dwc3_request *req;
+
 		/*
-		 * If there is any request remained in the started_list at
-		 * this point, that means there is no TRB available.
+		 * If there is any request remained in the started_list with
+		 * active TRBs at this point, then there is no TRB available.
 		 */
-		if (!list_empty(&dep->started_list))
+		req = next_request(&dep->started_list);
+		if (req && req->num_trbs)
 			return 0;
 
 		return DWC3_TRB_NUM - 1;
@@ -2026,8 +2026,20 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
 			memset(buf, 0, w_length);
 			buf[5] = 0x01;
 			switch (ctrl->bRequestType & USB_RECIP_MASK) {
+			/*
+			 * The Microsoft CompatID OS Descriptor Spec(w_index = 0x4) and
+			 * Extended Prop OS Desc Spec(w_index = 0x5) state that the
+			 * HighByte of wValue is the InterfaceNumber and the LowByte is
+			 * the PageNumber. This high/low byte ordering is incorrectly
+			 * documented in the Spec. USB analyzer output on the below
+			 * request packets show the high/low byte inverted i.e LowByte
+			 * is the InterfaceNumber and the HighByte is the PageNumber.
+			 * Since we dont support >64KB CompatID/ExtendedProp descriptors,
+			 * PageNumber is set to 0. Hence verify that the HighByte is 0
+			 * for below two cases.
+			 */
 			case USB_RECIP_DEVICE:
-				if (w_index != 0x4 || (w_value & 0xff))
+				if (w_index != 0x4 || (w_value >> 8))
 					break;
 				buf[6] = w_index;
 				/* Number of ext compat interfaces */
@@ -2043,9 +2055,9 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
 				}
 				break;
 			case USB_RECIP_INTERFACE:
-				if (w_index != 0x5 || (w_value & 0xff))
+				if (w_index != 0x5 || (w_value >> 8))
 					break;
-				interface = w_value >> 8;
+				interface = w_value & 0xFF;
 				if (interface >= MAX_CONFIG_INTERFACES ||
 				    !os_desc_cfg->interface[interface])
 					break;
@@ -110,7 +110,9 @@ static int spear_ehci_hcd_drv_probe(struct platform_device *pdev)
 	/* registers start at offset 0x0 */
 	hcd_to_ehci(hcd)->caps = hcd->regs;
 
-	clk_prepare_enable(sehci->clk);
+	retval = clk_prepare_enable(sehci->clk);
+	if (retval)
+		goto err_put_hcd;
 	retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
 	if (retval)
 		goto err_stop_ehci;
@@ -135,7 +137,6 @@ static int spear_ehci_hcd_drv_remove(struct platform_device *pdev)
 
 	usb_remove_hcd(hcd);
 
-	if (sehci->clk)
 	clk_disable_unprepare(sehci->clk);
 	usb_put_hcd(hcd);
 
@@ -27,6 +27,8 @@ static struct usb_class_driver chaoskey_class;
 static int chaoskey_rng_read(struct hwrng *rng, void *data,
 			     size_t max, bool wait);
 
+static DEFINE_MUTEX(chaoskey_list_lock);
+
 #define usb_dbg(usb_if, format, arg...) \
 	dev_dbg(&(usb_if)->dev, format, ## arg)
 
@@ -234,6 +236,7 @@ static void chaoskey_disconnect(struct usb_interface *interface)
 		usb_deregister_dev(interface, &chaoskey_class);
 
 	usb_set_intfdata(interface, NULL);
+	mutex_lock(&chaoskey_list_lock);
 	mutex_lock(&dev->lock);
 
 	dev->present = false;
@@ -245,6 +248,7 @@ static void chaoskey_disconnect(struct usb_interface *interface)
 	} else
 		mutex_unlock(&dev->lock);
 
+	mutex_unlock(&chaoskey_list_lock);
 	usb_dbg(interface, "disconnect done");
 }
 
@@ -252,6 +256,7 @@ static int chaoskey_open(struct inode *inode, struct file *file)
 {
 	struct chaoskey *dev;
 	struct usb_interface *interface;
+	int rv = 0;
 
 	/* get the interface from minor number and driver information */
 	interface = usb_find_interface(&chaoskey_driver, iminor(inode));
@@ -267,18 +272,23 @@ static int chaoskey_open(struct inode *inode, struct file *file)
 	}
 
 	file->private_data = dev;
+	mutex_lock(&chaoskey_list_lock);
 	mutex_lock(&dev->lock);
+	if (dev->present)
 		++dev->open;
+	else
+		rv = -ENODEV;
 	mutex_unlock(&dev->lock);
+	mutex_unlock(&chaoskey_list_lock);
 
-	usb_dbg(interface, "open success");
-	return 0;
+	return rv;
 }
 
 static int chaoskey_release(struct inode *inode, struct file *file)
 {
 	struct chaoskey *dev = file->private_data;
 	struct usb_interface *interface;
+	int rv = 0;
 
 	if (dev == NULL)
 		return -ENODEV;
@@ -287,14 +297,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
 
 	usb_dbg(interface, "release");
 
+	mutex_lock(&chaoskey_list_lock);
 	mutex_lock(&dev->lock);
 
 	usb_dbg(interface, "open count at release is %d", dev->open);
 
 	if (dev->open <= 0) {
 		usb_dbg(interface, "invalid open count (%d)", dev->open);
-		mutex_unlock(&dev->lock);
-		return -ENODEV;
+		rv = -ENODEV;
+		goto bail;
 	}
 
 	--dev->open;
@@ -303,13 +314,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
 		if (dev->open == 0) {
 			mutex_unlock(&dev->lock);
 			chaoskey_free(dev);
-		} else
+			goto destruction;
+		}
+	}
+bail:
 	mutex_unlock(&dev->lock);
-	} else
-		mutex_unlock(&dev->lock);
+destruction:
+	mutex_unlock(&chaoskey_list_lock);
 
 	usb_dbg(interface, "release success");
-	return 0;
+	return rv;
 }
 
 static void chaos_read_callback(struct urb *urb)
@@ -281,28 +281,45 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer,
 	struct iowarrior *dev;
 	int read_idx;
 	int offset;
+	int retval;
 
 	dev = file->private_data;
 
+	if (file->f_flags & O_NONBLOCK) {
+		retval = mutex_trylock(&dev->mutex);
+		if (!retval)
+			return -EAGAIN;
+	} else {
+		retval = mutex_lock_interruptible(&dev->mutex);
+		if (retval)
+			return -ERESTARTSYS;
+	}
+
 	/* verify that the device wasn't unplugged */
-	if (!dev || !dev->present)
-		return -ENODEV;
+	if (!dev->present) {
+		retval = -ENODEV;
+		goto exit;
+	}
 
 	dev_dbg(&dev->interface->dev, "minor %d, count = %zd\n",
 		dev->minor, count);
 
 	/* read count must be packet size (+ time stamp) */
 	if ((count != dev->report_size)
-	    && (count != (dev->report_size + 1)))
-		return -EINVAL;
+	    && (count != (dev->report_size + 1))) {
+		retval = -EINVAL;
+		goto exit;
+	}
 
 	/* repeat until no buffer overrun in callback handler occur */
 	do {
 		atomic_set(&dev->overflow_flag, 0);
 		if ((read_idx = read_index(dev)) == -1) {
 			/* queue empty */
-			if (file->f_flags & O_NONBLOCK)
-				return -EAGAIN;
+			if (file->f_flags & O_NONBLOCK) {
+				retval = -EAGAIN;
+				goto exit;
+			}
 			else {
 				//next line will return when there is either new data, or the device is unplugged
 				int r = wait_event_interruptible(dev->read_wait,
@@ -313,28 +330,37 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer,
 							 -1));
 				if (r) {
 					//we were interrupted by a signal
-					return -ERESTART;
+					retval = -ERESTART;
+					goto exit;
 				}
 				if (!dev->present) {
 					//The device was unplugged
-					return -ENODEV;
+					retval = -ENODEV;
+					goto exit;
 				}
 				if (read_idx == -1) {
 					// Can this happen ???
-					return 0;
+					retval = 0;
+					goto exit;
 				}
 			}
 		}
 
 		offset = read_idx * (dev->report_size + 1);
 		if (copy_to_user(buffer, dev->read_queue + offset, count)) {
-			return -EFAULT;
+			retval = -EFAULT;
+			goto exit;
 		}
 	} while (atomic_read(&dev->overflow_flag));
 
 	read_idx = ++read_idx == MAX_INTERRUPT_BUFFER ? 0 : read_idx;
 	atomic_set(&dev->read_idx, read_idx);
+	mutex_unlock(&dev->mutex);
 	return count;
 
+exit:
+	mutex_unlock(&dev->mutex);
+	return retval;
 }
 
 /*
@@ -315,6 +315,10 @@ static int vfio_virt_config_read(struct vfio_pci_device *vdev, int pos,
 	return count;
 }
 
+static struct perm_bits direct_ro_perms = {
+	.readfn = vfio_direct_config_read,
+};
+
 /* Default capability regions to read-only, no-virtualization */
 static struct perm_bits cap_perms[PCI_CAP_ID_MAX + 1] = {
 	[0 ... PCI_CAP_ID_MAX] = { .readfn = vfio_direct_config_read }
@@ -1837,9 +1841,17 @@ static ssize_t vfio_config_do_rw(struct vfio_pci_device *vdev, char __user *buf,
 		cap_start = *ppos;
 	} else {
 		if (*ppos >= PCI_CFG_SPACE_SIZE) {
-			WARN_ON(cap_id > PCI_EXT_CAP_ID_MAX);
+			/*
+			 * We can get a cap_id that exceeds PCI_EXT_CAP_ID_MAX
+			 * if we're hiding an unknown capability at the start
+			 * of the extended capability list. Use default, ro
+			 * access, which will virtualize the id and next values.
+			 */
+			if (cap_id > PCI_EXT_CAP_ID_MAX)
+				perm = &direct_ro_perms;
+			else
 				perm = &ecap_perms[cap_id];
 
 			cap_start = vfio_find_cap_start(vdev, *ppos);
 		} else {
 			WARN_ON(cap_id > PCI_CAP_ID_MAX);
@@ -362,7 +362,7 @@ static void sh7760fb_free_mem(struct fb_info *info)
 	if (!info->screen_base)
 		return;
 
-	dma_free_coherent(info->dev, info->screen_size,
+	dma_free_coherent(info->device, info->screen_size,
 			  info->screen_base, par->fbdma);
 
 	par->fbdma = 0;
@@ -411,14 +411,13 @@ static int sh7760fb_alloc_mem(struct fb_info *info)
 	if (vram < PAGE_SIZE)
 		vram = PAGE_SIZE;
 
-	fbmem = dma_alloc_coherent(info->dev, vram, &par->fbdma, GFP_KERNEL);
+	fbmem = dma_alloc_coherent(info->device, vram, &par->fbdma, GFP_KERNEL);
 
 	if (!fbmem)
 		return -ENOMEM;
 
 	if ((par->fbdma & SH7760FB_DMA_MASK) != SH7760FB_DMA_MASK) {
-		sh7760fb_free_mem(info);
-		dev_err(info->dev, "kernel gave me memory at 0x%08lx, which is"
+		dma_free_coherent(info->device, vram, fbmem, par->fbdma);
+		dev_err(info->device, "kernel gave me memory at 0x%08lx, which is"
 			"unusable for the LCDC\n", (unsigned long)par->fbdma);
 		return -ENOMEM;
 	}
@@ -489,7 +488,7 @@ static int sh7760fb_probe(struct platform_device *pdev)
 
 	ret = sh7760fb_alloc_mem(info);
 	if (ret) {
-		dev_dbg(info->dev, "framebuffer memory allocation failed!\n");
+		dev_dbg(info->device, "framebuffer memory allocation failed!\n");
 		goto out_unmap;
 	}
 
@@ -185,6 +185,56 @@ static inline ext4_fsblk_t ext4_fsmap_next_pblk(struct ext4_fsmap *fmr)
 	return fmr->fmr_physical + fmr->fmr_length;
 }
 
+static int ext4_getfsmap_meta_helper(struct super_block *sb,
+				     ext4_group_t agno, ext4_grpblk_t start,
+				     ext4_grpblk_t len, void *priv)
+{
+	struct ext4_getfsmap_info *info = priv;
+	struct ext4_fsmap *p;
+	struct ext4_fsmap *tmp;
+	struct ext4_sb_info *sbi = EXT4_SB(sb);
+	ext4_fsblk_t fsb, fs_start, fs_end;
+	int error;
+
+	fs_start = fsb = (EXT4_C2B(sbi, start) +
+			  ext4_group_first_block_no(sb, agno));
+	fs_end = fs_start + EXT4_C2B(sbi, len);
+
+	/* Return relevant extents from the meta_list */
+	list_for_each_entry_safe(p, tmp, &info->gfi_meta_list, fmr_list) {
+		if (p->fmr_physical < info->gfi_next_fsblk) {
+			list_del(&p->fmr_list);
+			kfree(p);
+			continue;
+		}
+		if (p->fmr_physical <= fs_start ||
+		    p->fmr_physical + p->fmr_length <= fs_end) {
+			/* Emit the retained free extent record if present */
+			if (info->gfi_lastfree.fmr_owner) {
+				error = ext4_getfsmap_helper(sb, info,
+							&info->gfi_lastfree);
+				if (error)
+					return error;
+				info->gfi_lastfree.fmr_owner = 0;
+			}
+			error = ext4_getfsmap_helper(sb, info, p);
+			if (error)
+				return error;
+			fsb = p->fmr_physical + p->fmr_length;
+			if (info->gfi_next_fsblk < fsb)
+				info->gfi_next_fsblk = fsb;
+			list_del(&p->fmr_list);
+			kfree(p);
+			continue;
+		}
+	}
+	if (info->gfi_next_fsblk < fsb)
+		info->gfi_next_fsblk = fsb;
+
+	return 0;
+}
+
+
 /* Transform a blockgroup's free record into a fsmap */
 static int ext4_getfsmap_datadev_helper(struct super_block *sb,
 					ext4_group_t agno, ext4_grpblk_t start,
@@ -539,6 +589,7 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
 		error = ext4_mballoc_query_range(sb, info->gfi_agno,
 				EXT4_B2C(sbi, info->gfi_low.fmr_physical),
 				EXT4_B2C(sbi, info->gfi_high.fmr_physical),
+				ext4_getfsmap_meta_helper,
 				ext4_getfsmap_datadev_helper, info);
 		if (error)
 			goto err;
@@ -560,7 +611,8 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
 
 	/* Report any gaps at the end of the bg */
 	info->gfi_last = true;
-	error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster, 0, info);
+	error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1,
+					     0, info);
 	if (error)
 		goto err;
 
@@ -5424,13 +5424,14 @@ int
 ext4_mballoc_query_range(
 	struct super_block		*sb,
 	ext4_group_t			group,
-	ext4_grpblk_t			start,
+	ext4_grpblk_t			first,
 	ext4_grpblk_t			end,
+	ext4_mballoc_query_range_fn	meta_formatter,
 	ext4_mballoc_query_range_fn	formatter,
 	void				*priv)
 {
 	void				*bitmap;
-	ext4_grpblk_t			next;
+	ext4_grpblk_t			start, next;
 	struct ext4_buddy		e4b;
 	int				error;
 
@@ -5441,10 +5442,19 @@ ext4_mballoc_query_range(
 
 	ext4_lock_group(sb, group);
 
-	start = max(e4b.bd_info->bb_first_free, start);
+	start = max(e4b.bd_info->bb_first_free, first);
 	if (end >= EXT4_CLUSTERS_PER_GROUP(sb))
 		end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+	if (meta_formatter && start != first) {
+		if (start > end)
+			start = end;
+		ext4_unlock_group(sb, group);
+		error = meta_formatter(sb, group, first, start - first,
+				       priv);
+		if (error)
+			goto out_unload;
+		ext4_lock_group(sb, group);
+	}
 	while (start <= end) {
 		start = mb_find_next_zero_bit(bitmap, end + 1, start);
 		if (start > end)
@@ -212,6 +212,7 @@ ext4_mballoc_query_range(
 	ext4_group_t			agno,
 	ext4_grpblk_t			start,
 	ext4_grpblk_t			end,
+	ext4_mballoc_query_range_fn	meta_formatter,
 	ext4_mballoc_query_range_fn	formatter,
 	void				*priv);
 
@@ -259,9 +259,9 @@ __u32 ext4_free_group_clusters(struct super_block *sb,
 __u32 ext4_free_inodes_count(struct super_block *sb,
 			      struct ext4_group_desc *bg)
 {
-	return le16_to_cpu(bg->bg_free_inodes_count_lo) |
+	return le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_lo)) |
 		(EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT ?
-		 (__u32)le16_to_cpu(bg->bg_free_inodes_count_hi) << 16 : 0);
+		 (__u32)le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_hi)) << 16 : 0);
 }
 
 __u32 ext4_used_dirs_count(struct super_block *sb,
@@ -315,9 +315,9 @@ void ext4_free_group_clusters_set(struct super_block *sb,
 void ext4_free_inodes_set(struct super_block *sb,
 			  struct ext4_group_desc *bg, __u32 count)
 {
-	bg->bg_free_inodes_count_lo = cpu_to_le16((__u16)count);
+	WRITE_ONCE(bg->bg_free_inodes_count_lo, cpu_to_le16((__u16)count));
 	if (EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT)
-		bg->bg_free_inodes_count_hi = cpu_to_le16(count >> 16);
+		WRITE_ONCE(bg->bg_free_inodes_count_hi, cpu_to_le16(count >> 16));
 }
 
 void ext4_used_dirs_set(struct super_block *sb,
@@ -156,6 +156,7 @@ struct hfsplus_sb_info {
 
 	/* Runtime variables */
 	u32 blockoffset;
+	u32 min_io_size;
 	sector_t part_start;
 	sector_t sect_count;
 	int fs_shift;
@@ -306,7 +307,7 @@ struct hfsplus_readdir_data {
  */
 static inline unsigned short hfsplus_min_io_size(struct super_block *sb)
 {
-	return max_t(unsigned short, bdev_logical_block_size(sb->s_bdev),
+	return max_t(unsigned short, HFSPLUS_SB(sb)->min_io_size,
 		     HFSPLUS_SECTOR_SIZE);
 }
 
@@ -170,6 +170,8 @@ int hfsplus_read_wrapper(struct super_block *sb)
 	if (!blocksize)
 		goto out;
 
+	sbi->min_io_size = blocksize;
+
 	if (hfsplus_get_last_session(sb, &part_start, &part_size))
 		goto out;
 
@@ -340,10 +340,9 @@ static int jffs2_block_check_erase(struct jffs2_sb_info *c, struct jffs2_erasebl
 	} while(--retlen);
 	mtd_unpoint(c->mtd, jeb->offset, c->sector_size);
 	if (retlen) {
-		pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08tx\n",
-			*wordebuf,
-			jeb->offset +
-			c->sector_size-retlen * sizeof(*wordebuf));
+		*bad_offset = jeb->offset + c->sector_size - retlen * sizeof(*wordebuf);
+		pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08x\n",
+			*wordebuf, *bad_offset);
 		return -EIO;
 	}
 	return 0;
@@ -572,7 +572,7 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
 
       size_check:
 	if (EALIST_SIZE(ea_buf->xattr) != ea_size) {
-		int size = min_t(int, EALIST_SIZE(ea_buf->xattr), ea_size);
+		int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr));
 
 		printk(KERN_ERR "ea_get: invalid extended attribute\n");
 		print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1,
@@ -2379,12 +2379,14 @@ static void nfs4_open_release(void *calldata)
 	struct nfs4_opendata *data = calldata;
 	struct nfs4_state *state = NULL;
 
+	/* In case of error, no cleanup! */
+	if (data->rpc_status != 0 || !data->rpc_done) {
+		nfs_release_seqid(data->o_arg.seqid);
+		goto out_free;
+	}
 	/* If this request hasn't been cancelled, do nothing */
 	if (!data->cancelled)
 		goto out_free;
-	/* In case of error, no cleanup! */
-	if (data->rpc_status != 0 || !data->rpc_done)
-		goto out_free;
 	/* In case we need an open_confirm, no cleanup! */
 	if (data->o_res.rflags & NFS4_OPEN_RESULT_CONFIRM)
 		goto out_free;
@@ -283,17 +283,17 @@ static int decode_cb_compound4res(struct xdr_stream *xdr,
 	u32 length;
 	__be32 *p;
 
-	p = xdr_inline_decode(xdr, 4 + 4);
+	p = xdr_inline_decode(xdr, XDR_UNIT);
 	if (unlikely(p == NULL))
 		goto out_overflow;
-	hdr->status = be32_to_cpup(p++);
+	hdr->status = be32_to_cpup(p);
 	/* Ignore the tag */
-	length = be32_to_cpup(p++);
-	p = xdr_inline_decode(xdr, length + 4);
-	if (unlikely(p == NULL))
+	if (xdr_stream_decode_u32(xdr, &length) < 0)
+		goto out_overflow;
+	if (xdr_inline_decode(xdr, length) == NULL)
 		goto out_overflow;
-	p += XDR_QUADLEN(length);
-	hdr->nops = be32_to_cpup(p);
+	if (xdr_stream_decode_u32(xdr, &hdr->nops) < 0)
+		goto out_overflow;
 	return 0;
out_overflow:
 	return -EIO;
@@ -1134,6 +1134,8 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
 		ses = c->cn_session;
 	}
 	spin_unlock(&clp->cl_lock);
+	if (!c)
+		return;
 
 	err = setup_callback_client(clp, &conn, ses);
 	if (err) {
@@ -596,7 +596,8 @@ nfs4_reset_recoverydir(char *recdir)
 		return status;
 	status = -ENOTDIR;
 	if (d_is_dir(path.dentry)) {
-		strcpy(user_recovery_dirname, recdir);
+		strscpy(user_recovery_dirname, recdir,
+			sizeof(user_recovery_dirname));
 		status = 0;
 	}
 	path_put(&path);
@@ -68,7 +68,6 @@ nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr)
 		goto failed;
 	}
 	memset(bh->b_data, 0, i_blocksize(inode));
-	bh->b_bdev = inode->i_sb->s_bdev;
 	bh->b_blocknr = blocknr;
 	set_buffer_mapped(bh);
 	set_buffer_uptodate(bh);
@@ -133,7 +132,6 @@ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
 		goto found;
 	}
 	set_buffer_mapped(bh);
-	bh->b_bdev = inode->i_sb->s_bdev;
 	bh->b_blocknr = pblocknr; /* set block address for read */
 	bh->b_end_io = end_buffer_read_sync;
 	get_bh(bh);
@@ -83,10 +83,8 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff,
 		goto out;
 	}
 
-	if (!buffer_mapped(bh)) {
-		bh->b_bdev = inode->i_sb->s_bdev;
+	if (!buffer_mapped(bh))
 		set_buffer_mapped(bh);
-	}
 	bh->b_blocknr = pbn;
 	bh->b_end_io = end_buffer_read_sync;
 	get_bh(bh);
@@ -89,7 +89,6 @@ static int nilfs_mdt_create_block(struct inode *inode, unsigned long block,
 	if (buffer_uptodate(bh))
 		goto failed_bh;
 
-	bh->b_bdev = sb->s_bdev;
 	err = nilfs_mdt_insert_new_block(inode, block, bh, init_block);
 	if (likely(!err)) {
 		get_bh(bh);
@@ -39,7 +39,6 @@ __nilfs_get_page_block(struct page *page, unsigned long block, pgoff_t index,
 	first_block = (unsigned long)index << (PAGE_SHIFT - blkbits);
 	bh = nilfs_page_get_nth_block(page, block - first_block);
 
-	touch_buffer(bh);
 	wait_on_buffer(bh);
 	return bh;
 }
@@ -64,6 +63,7 @@ struct buffer_head *nilfs_grab_buffer(struct inode *inode,
 		put_page(page);
 		return NULL;
 	}
+	bh->b_bdev = inode->i_sb->s_bdev;
 	return bh;
 }
 
@@ -86,6 +86,8 @@ enum ocfs2_iocb_lock_bits {
 	OCFS2_IOCB_NUM_LOCKS
 };
 
+#define ocfs2_iocb_init_rw_locked(iocb) \
+	(iocb->private = NULL)
 #define ocfs2_iocb_clear_rw_locked(iocb) \
 	clear_bit(OCFS2_IOCB_RW_LOCK, (unsigned long *)&iocb->private)
 #define ocfs2_iocb_rw_locked_level(iocb) \
@@ -2412,6 +2412,8 @@ static ssize_t ocfs2_file_write_iter(struct kiocb *iocb,
 	} else
 		inode_lock(inode);
 
+	ocfs2_iocb_init_rw_locked(iocb);
+
 	/*
 	 * Concurrent O_DIRECT writes are allowed with
 	 * mount_option "coherency=buffered".
@@ -2558,6 +2560,8 @@ static ssize_t ocfs2_file_read_iter(struct kiocb *iocb,
 	if (!direct_io && nowait)
 		return -EOPNOTSUPP;
 
+	ocfs2_iocb_init_rw_locked(iocb);
+
 	/*
 	 * buffered reads protect themselves in ->readpage(). O_DIRECT reads
 	 * need locks to protect pending reads from racing with truncate.
@@ -582,6 +582,8 @@ int ocfs2_group_add(struct inode *inode, struct ocfs2_new_group_input *input)
 	ocfs2_commit_trans(osb, handle);
 
 out_free_group_bh:
+	if (ret < 0)
+		ocfs2_remove_from_cache(INODE_CACHE(inode), group_bh);
 	brelse(group_bh);
 
 out_unlock: