Merge 4.19.234 into android-4.19-stable

Changes in 4.19.234
	x86/speculation: Merge one test in spectre_v2_user_select_mitigation()
	x86,bugs: Unconditionally allow spectre_v2=retpoline,amd
	x86/speculation: Rename RETPOLINE_AMD to RETPOLINE_LFENCE
	x86/speculation: Add eIBRS + Retpoline options
	Documentation/hw-vuln: Update spectre doc
	x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting
	x86/speculation: Use generic retpoline by default on AMD
	x86/speculation: Update link to AMD speculation whitepaper
	x86/speculation: Warn about Spectre v2 LFENCE mitigation
	x86/speculation: Warn about eIBRS + LFENCE + Unprivileged eBPF + SMT
	arm/arm64: Provide a wrapper for SMCCC 1.1 calls
	arm/arm64: smccc/psci: add arm_smccc_1_1_get_conduit()
	ARM: report Spectre v2 status through sysfs
	ARM: early traps initialisation
	ARM: use LOADADDR() to get load address of sections
	ARM: Spectre-BHB workaround
	ARM: include unprivileged BPF status in Spectre V2 reporting
	ARM: fix build error when BPF_SYSCALL is disabled
	kbuild: add CONFIG_LD_IS_LLD
	ARM: fix co-processor register typo
	ARM: Do not use NOCROSSREFS directive with ld.lld
	ARM: fix build warning in proc-v7-bugs.c
	xen/xenbus: don't let xenbus_grant_ring() remove grants in error case
	xen/grant-table: add gnttab_try_end_foreign_access()
	xen/blkfront: don't use gnttab_query_foreign_access() for mapped status
	xen/netfront: don't use gnttab_query_foreign_access() for mapped status
	xen/scsifront: don't use gnttab_query_foreign_access() for mapped status
	xen/gntalloc: don't use gnttab_query_foreign_access()
	xen: remove gnttab_query_foreign_access()
	xen/9p: use alloc/free_pages_exact()
	xen/pvcalls: use alloc/free_pages_exact()
	xen/gnttab: fix gnttab_end_foreign_access() without page specified
	xen/netfront: react properly to failing gnttab_end_foreign_access_ref()
	Linux 4.19.234

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I0ba31b3c9c84dcebbafa96ab5735505712d83185
Committer: Greg Kroah-Hartman
Date: 2022-03-11 11:13:57 +01:00
30 changed files with 960 additions and 261 deletions


@@ -60,8 +60,8 @@ privileged data touched during the speculative execution.
 Spectre variant 1 attacks take advantage of speculative execution of
 conditional branches, while Spectre variant 2 attacks use speculative
 execution of indirect branches to leak privileged memory.
-See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
-:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
+See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[6] <spec_ref6>`
+:ref:`[7] <spec_ref7>` :ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.

 Spectre variant 1 (Bounds Check Bypass)
 ---------------------------------------
@@ -131,6 +131,19 @@ steer its indirect branch speculations to gadget code, and measure the
speculative execution's side effects left in level 1 cache to infer the
victim's data.
Yet another variant 2 attack vector is for the attacker to poison the
Branch History Buffer (BHB) to speculatively steer an indirect branch
to a specific Branch Target Buffer (BTB) entry, even if the entry isn't
associated with the source address of the indirect branch. Specifically,
the BHB might be shared across privilege levels even in the presence of
Enhanced IBRS.
Currently the only known real-world BHB attack vector is via
unprivileged eBPF. Therefore, it's highly recommended to not enable
unprivileged eBPF, especially when eIBRS is used (without retpolines).
For a full mitigation against BHB attacks, it's recommended to use
retpolines (or eIBRS combined with retpolines).
Attack scenarios
----------------
@@ -364,13 +377,15 @@ The possible values in this file are:
  - Kernel status:

-  ====================================  =================================
-  'Not affected'                        The processor is not vulnerable
-  'Vulnerable'                          Vulnerable, no mitigation
-  'Mitigation: Full generic retpoline'  Software-focused mitigation
-  'Mitigation: Full AMD retpoline'      AMD-specific software mitigation
-  'Mitigation: Enhanced IBRS'           Hardware-focused mitigation
-  ====================================  =================================
+  ========================================  =================================
+  'Not affected'                            The processor is not vulnerable
+  'Mitigation: None'                        Vulnerable, no mitigation
+  'Mitigation: Retpolines'                  Use Retpoline thunks
+  'Mitigation: LFENCE'                      Use LFENCE instructions
+  'Mitigation: Enhanced IBRS'               Hardware-focused mitigation
+  'Mitigation: Enhanced IBRS + Retpolines'  Hardware-focused + Retpolines
+  'Mitigation: Enhanced IBRS + LFENCE'      Hardware-focused + LFENCE
+  ========================================  =================================

  - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
    used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
@@ -584,12 +599,13 @@ kernel command line.
        Specific mitigations can also be selected manually:

-               retpoline                   replace indirect branches
-               retpoline,generic           google's original retpoline
-               retpoline,amd               AMD-specific minimal thunk
+               retpoline                   auto pick between generic,lfence
+               retpoline,generic           Retpolines
+               retpoline,lfence            LFENCE; indirect branch
+               retpoline,amd               alias for retpoline,lfence
+               eibrs                       enhanced IBRS
+               eibrs,retpoline             enhanced IBRS + Retpolines
+               eibrs,lfence                enhanced IBRS + LFENCE

        Not specifying this option is equivalent to
        spectre_v2=auto.
@@ -730,7 +746,7 @@ AMD white papers:
.. _spec_ref6:

-[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
+[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf>`_.

ARM white papers:
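
The sysfs table and the new eBPF caveat above can be checked directly from userspace. The following minimal sketch is not part of the patch series; it simply reads the two interfaces the documentation refers to (the spectre_v2 vulnerability file and the unprivileged_bpf_disabled sysctl), with error handling kept terse for illustration.

	#include <stdio.h>

	static void dump(const char *path)
	{
		char line[256];
		FILE *f = fopen(path, "r");

		if (!f) {
			perror(path);
			return;
		}
		if (fgets(line, sizeof(line), f))
			printf("%s: %s", path, line);
		fclose(f);
	}

	int main(void)
	{
		dump("/sys/devices/system/cpu/vulnerabilities/spectre_v2");
		dump("/proc/sys/kernel/unprivileged_bpf_disabled");
		return 0;
	}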


@@ -4359,8 +4359,12 @@
			Specific mitigations can also be selected manually:

			retpoline         - replace indirect branches
-			retpoline,generic - google's original retpoline
-			retpoline,amd     - AMD-specific minimal thunk
+			retpoline,generic - Retpolines
+			retpoline,lfence  - LFENCE; indirect branch
+			retpoline,amd     - alias for retpoline,lfence
+			eibrs             - enhanced IBRS
+			eibrs,retpoline   - enhanced IBRS + Retpolines
+			eibrs,lfence      - enhanced IBRS + LFENCE

			Not specifying this option is equivalent to
			spectre_v2=auto.


@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 233
+SUBLEVEL = 234
 EXTRAVERSION =
 NAME = "People's Front"


@@ -110,6 +110,16 @@
	.endm
#endif
#if __LINUX_ARM_ARCH__ < 7
.macro dsb, args
mcr p15, 0, r0, c7, c10, 4
.endm
.macro isb, args
mcr p15, 0, r0, c7, c5, 4
.endm
#endif
	.macro	asm_trace_hardirqs_off, save=1
#if defined(CONFIG_TRACE_IRQFLAGS)
	.if \save


@@ -0,0 +1,32 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef __ASM_SPECTRE_H
#define __ASM_SPECTRE_H
enum {
SPECTRE_UNAFFECTED,
SPECTRE_MITIGATED,
SPECTRE_VULNERABLE,
};
enum {
__SPECTRE_V2_METHOD_BPIALL,
__SPECTRE_V2_METHOD_ICIALLU,
__SPECTRE_V2_METHOD_SMC,
__SPECTRE_V2_METHOD_HVC,
__SPECTRE_V2_METHOD_LOOP8,
};
enum {
SPECTRE_V2_METHOD_BPIALL = BIT(__SPECTRE_V2_METHOD_BPIALL),
SPECTRE_V2_METHOD_ICIALLU = BIT(__SPECTRE_V2_METHOD_ICIALLU),
SPECTRE_V2_METHOD_SMC = BIT(__SPECTRE_V2_METHOD_SMC),
SPECTRE_V2_METHOD_HVC = BIT(__SPECTRE_V2_METHOD_HVC),
SPECTRE_V2_METHOD_LOOP8 = BIT(__SPECTRE_V2_METHOD_LOOP8),
};
void spectre_v2_update_state(unsigned int state, unsigned int methods);
int spectre_bhb_update_vectors(unsigned int method);
#endif
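
A hedged sketch of how this interface is meant to be used by the per-CPU bug-detection code (the real callers are in proc-v7-bugs.c later in this series). The CPU-part check and the function name here are illustrative only; the ICIALLU choice for Cortex-A15 mirrors what the series itself installs.

	#include <asm/cputype.h>
	#include <asm/spectre.h>

	static void example_spectre_v2_report(void)
	{
		if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15)
			/* Mitigated via I-cache invalidation on this part. */
			spectre_v2_update_state(SPECTRE_MITIGATED,
						SPECTRE_V2_METHOD_ICIALLU);
		else
			/* No method bit recorded for unaffected parts. */
			spectre_v2_update_state(SPECTRE_UNAFFECTED, 0);
	}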


@@ -106,4 +106,6 @@ endif
obj-$(CONFIG_HAVE_ARM_SMCCC)	+= smccc-call.o
obj-$(CONFIG_GENERIC_CPU_VULNERABILITIES) += spectre.o

extra-y := $(head-y) vmlinux.lds


@@ -1029,12 +1029,11 @@ vector_\name:
sub lr, lr, #\correction sub lr, lr, #\correction
.endif .endif
@ @ Save r0, lr_<exception> (parent PC)
@ Save r0, lr_<exception> (parent PC) and spsr_<exception>
@ (parent CPSR)
@
stmia sp, {r0, lr} @ save r0, lr stmia sp, {r0, lr} @ save r0, lr
mrs lr, spsr
@ Save spsr_<exception> (parent CPSR)
2: mrs lr, spsr
str lr, [sp, #8] @ save spsr str lr, [sp, #8] @ save spsr
@ @
@@ -1055,6 +1054,44 @@ vector_\name:
movs pc, lr @ branch to handler in SVC mode movs pc, lr @ branch to handler in SVC mode
ENDPROC(vector_\name) ENDPROC(vector_\name)
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
.subsection 1
.align 5
vector_bhb_loop8_\name:
.if \correction
sub lr, lr, #\correction
.endif
@ Save r0, lr_<exception> (parent PC)
stmia sp, {r0, lr}
@ bhb workaround
mov r0, #8
1: b . + 4
subs r0, r0, #1
bne 1b
dsb
isb
b 2b
ENDPROC(vector_bhb_loop8_\name)
vector_bhb_bpiall_\name:
.if \correction
sub lr, lr, #\correction
.endif
@ Save r0, lr_<exception> (parent PC)
stmia sp, {r0, lr}
@ bhb workaround
mcr p15, 0, r0, c7, c5, 6 @ BPIALL
@ isb not needed due to "movs pc, lr" in the vector stub
@ which gives a "context synchronisation".
b 2b
ENDPROC(vector_bhb_bpiall_\name)
.previous
#endif
.align 2 .align 2
@ handler addresses follow this label @ handler addresses follow this label
1: 1:
@@ -1063,6 +1100,10 @@ ENDPROC(vector_\name)
.section .stubs, "ax", %progbits .section .stubs, "ax", %progbits
@ This must be the first word @ This must be the first word
.word vector_swi .word vector_swi
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
.word vector_bhb_loop8_swi
.word vector_bhb_bpiall_swi
#endif
vector_rst: vector_rst:
ARM( swi SYS_ERROR0 ) ARM( swi SYS_ERROR0 )
@@ -1177,8 +1218,10 @@ vector_addrexcptn:
* FIQ "NMI" handler * FIQ "NMI" handler
*----------------------------------------------------------------------------- *-----------------------------------------------------------------------------
* Handle a FIQ using the SVC stack allowing FIQ act like NMI on x86 * Handle a FIQ using the SVC stack allowing FIQ act like NMI on x86
* systems. * systems. This must be the last vector stub, so lets place it in its own
* subsection.
*/ */
.subsection 2
vector_stub fiq, FIQ_MODE, 4 vector_stub fiq, FIQ_MODE, 4
.long __fiq_usr @ 0 (USR_26 / USR_32) .long __fiq_usr @ 0 (USR_26 / USR_32)
@@ -1211,6 +1254,30 @@ vector_addrexcptn:
W(b) vector_irq W(b) vector_irq
W(b) vector_fiq W(b) vector_fiq
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
.section .vectors.bhb.loop8, "ax", %progbits
.L__vectors_bhb_loop8_start:
W(b) vector_rst
W(b) vector_bhb_loop8_und
W(ldr) pc, .L__vectors_bhb_loop8_start + 0x1004
W(b) vector_bhb_loop8_pabt
W(b) vector_bhb_loop8_dabt
W(b) vector_addrexcptn
W(b) vector_bhb_loop8_irq
W(b) vector_bhb_loop8_fiq
.section .vectors.bhb.bpiall, "ax", %progbits
.L__vectors_bhb_bpiall_start:
W(b) vector_rst
W(b) vector_bhb_bpiall_und
W(ldr) pc, .L__vectors_bhb_bpiall_start + 0x1008
W(b) vector_bhb_bpiall_pabt
W(b) vector_bhb_bpiall_dabt
W(b) vector_addrexcptn
W(b) vector_bhb_bpiall_irq
W(b) vector_bhb_bpiall_fiq
#endif
.data .data
.align 2 .align 2
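
The vector_bhb_loop8_* stubs above scrub the branch history by running a short loop of taken branches (eight iterations) followed by dsb/isb before falling into the common handler. For illustration only, the same sequence expressed as GCC inline assembly; this C wrapper is hypothetical and does not exist in the patch, which keeps the sequence in the assembly stubs.

	/* Hypothetical C wrapper around the loop8 history-clearing sequence. */
	static inline void bhb_clear_loop8(void)
	{
		unsigned int i = 8;

		asm volatile(
		"1:	b	2f		\n"	/* taken branch ...        */
		"2:	subs	%0, %0, #1	\n"	/* ... executed 8 times    */
		"	bne	1b		\n"
		"	dsb			\n"
		"	isb			\n"
		: "+r" (i) : : "cc", "memory");
	}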


@@ -165,6 +165,29 @@ ENDPROC(ret_from_fork)
*----------------------------------------------------------------------------- *-----------------------------------------------------------------------------
*/ */
.align 5
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
ENTRY(vector_bhb_loop8_swi)
sub sp, sp, #PT_REGS_SIZE
stmia sp, {r0 - r12}
mov r8, #8
1: b 2f
2: subs r8, r8, #1
bne 1b
dsb
isb
b 3f
ENDPROC(vector_bhb_loop8_swi)
.align 5
ENTRY(vector_bhb_bpiall_swi)
sub sp, sp, #PT_REGS_SIZE
stmia sp, {r0 - r12}
mcr p15, 0, r8, c7, c5, 6 @ BPIALL
isb
b 3f
ENDPROC(vector_bhb_bpiall_swi)
#endif
.align 5 .align 5
ENTRY(vector_swi) ENTRY(vector_swi)
#ifdef CONFIG_CPU_V7M #ifdef CONFIG_CPU_V7M
@@ -172,6 +195,7 @@ ENTRY(vector_swi)
#else #else
sub sp, sp, #PT_REGS_SIZE sub sp, sp, #PT_REGS_SIZE
stmia sp, {r0 - r12} @ Calling r0 - r12 stmia sp, {r0 - r12} @ Calling r0 - r12
3:
ARM( add r8, sp, #S_PC ) ARM( add r8, sp, #S_PC )
ARM( stmdb r8, {sp, lr}^ ) @ Calling sp, lr ARM( stmdb r8, {sp, lr}^ ) @ Calling sp, lr
THUMB( mov r8, sp ) THUMB( mov r8, sp )

arch/arm/kernel/spectre.c (new file, 71 lines)

@@ -0,0 +1,71 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/bpf.h>
#include <linux/cpu.h>
#include <linux/device.h>
#include <asm/spectre.h>
static bool _unprivileged_ebpf_enabled(void)
{
#ifdef CONFIG_BPF_SYSCALL
return !sysctl_unprivileged_bpf_disabled;
#else
return false;
#endif
}
ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "Mitigation: __user pointer sanitization\n");
}
static unsigned int spectre_v2_state;
static unsigned int spectre_v2_methods;
void spectre_v2_update_state(unsigned int state, unsigned int method)
{
if (state > spectre_v2_state)
spectre_v2_state = state;
spectre_v2_methods |= method;
}
ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
char *buf)
{
const char *method;
if (spectre_v2_state == SPECTRE_UNAFFECTED)
return sprintf(buf, "%s\n", "Not affected");
if (spectre_v2_state != SPECTRE_MITIGATED)
return sprintf(buf, "%s\n", "Vulnerable");
if (_unprivileged_ebpf_enabled())
return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n");
switch (spectre_v2_methods) {
case SPECTRE_V2_METHOD_BPIALL:
method = "Branch predictor hardening";
break;
case SPECTRE_V2_METHOD_ICIALLU:
method = "I-cache invalidation";
break;
case SPECTRE_V2_METHOD_SMC:
case SPECTRE_V2_METHOD_HVC:
method = "Firmware call";
break;
case SPECTRE_V2_METHOD_LOOP8:
method = "History overwrite";
break;
default:
method = "Multiple mitigations";
break;
}
return sprintf(buf, "Mitigation: %s\n", method);
}
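
A hedged illustration of how the state accumulation above behaves. The example calls are made up, but the switch logic referenced in the comments is the one in cpu_show_spectre_v2() just shown.

	static void example_accumulate(void)
	{
		/* Spectre v2 mitigated via the firmware call on this SoC ...   */
		spectre_v2_update_state(SPECTRE_MITIGATED, SPECTRE_V2_METHOD_SMC);
		/* ... and Spectre-BHB mitigated via the history-overwrite loop. */
		spectre_v2_update_state(SPECTRE_MITIGATED, SPECTRE_V2_METHOD_LOOP8);

		/*
		 * spectre_v2_methods now has two bits set, so no single-method
		 * case matches and the sysfs file reports the default string:
		 * "Mitigation: Multiple mitigations".
		 */
	}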


@@ -33,6 +33,7 @@
#include <linux/atomic.h> #include <linux/atomic.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <asm/exception.h> #include <asm/exception.h>
#include <asm/spectre.h>
#include <asm/unistd.h> #include <asm/unistd.h>
#include <asm/traps.h> #include <asm/traps.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
@@ -830,10 +831,59 @@ static inline void __init kuser_init(void *vectors)
} }
#endif #endif
#ifndef CONFIG_CPU_V7M
static void copy_from_lma(void *vma, void *lma_start, void *lma_end)
{
memcpy(vma, lma_start, lma_end - lma_start);
}
static void flush_vectors(void *vma, size_t offset, size_t size)
{
unsigned long start = (unsigned long)vma + offset;
unsigned long end = start + size;
flush_icache_range(start, end);
}
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
int spectre_bhb_update_vectors(unsigned int method)
{
extern char __vectors_bhb_bpiall_start[], __vectors_bhb_bpiall_end[];
extern char __vectors_bhb_loop8_start[], __vectors_bhb_loop8_end[];
void *vec_start, *vec_end;
if (system_state > SYSTEM_SCHEDULING) {
pr_err("CPU%u: Spectre BHB workaround too late - system vulnerable\n",
smp_processor_id());
return SPECTRE_VULNERABLE;
}
switch (method) {
case SPECTRE_V2_METHOD_LOOP8:
vec_start = __vectors_bhb_loop8_start;
vec_end = __vectors_bhb_loop8_end;
break;
case SPECTRE_V2_METHOD_BPIALL:
vec_start = __vectors_bhb_bpiall_start;
vec_end = __vectors_bhb_bpiall_end;
break;
default:
pr_err("CPU%u: unknown Spectre BHB state %d\n",
smp_processor_id(), method);
return SPECTRE_VULNERABLE;
}
copy_from_lma(vectors_page, vec_start, vec_end);
flush_vectors(vectors_page, 0, vec_end - vec_start);
return SPECTRE_MITIGATED;
}
#endif
void __init early_trap_init(void *vectors_base) void __init early_trap_init(void *vectors_base)
{ {
#ifndef CONFIG_CPU_V7M
unsigned long vectors = (unsigned long)vectors_base;
extern char __stubs_start[], __stubs_end[]; extern char __stubs_start[], __stubs_end[];
extern char __vectors_start[], __vectors_end[]; extern char __vectors_start[], __vectors_end[];
unsigned i; unsigned i;
@@ -854,17 +904,20 @@ void __init early_trap_init(void *vectors_base)
* into the vector page, mapped at 0xffff0000, and ensure these * into the vector page, mapped at 0xffff0000, and ensure these
* are visible to the instruction stream. * are visible to the instruction stream.
*/ */
memcpy((void *)vectors, __vectors_start, __vectors_end - __vectors_start); copy_from_lma(vectors_base, __vectors_start, __vectors_end);
memcpy((void *)vectors + 0x1000, __stubs_start, __stubs_end - __stubs_start); copy_from_lma(vectors_base + 0x1000, __stubs_start, __stubs_end);
kuser_init(vectors_base); kuser_init(vectors_base);
flush_icache_range(vectors, vectors + PAGE_SIZE * 2); flush_vectors(vectors_base, 0, PAGE_SIZE * 2);
}
#else /* ifndef CONFIG_CPU_V7M */ #else /* ifndef CONFIG_CPU_V7M */
void __init early_trap_init(void *vectors_base)
{
/* /*
* on V7-M there is no need to copy the vector table to a dedicated * on V7-M there is no need to copy the vector table to a dedicated
* memory area. The address is configurable and so a table in the kernel * memory area. The address is configurable and so a table in the kernel
* image can be used. * image can be used.
*/ */
#endif
} }
#endif


@@ -25,6 +25,19 @@
#define ARM_MMU_DISCARD(x) x #define ARM_MMU_DISCARD(x) x
#endif #endif
/*
* ld.lld does not support NOCROSSREFS:
* https://github.com/ClangBuiltLinux/linux/issues/1609
*/
#ifdef CONFIG_LD_IS_LLD
#define NOCROSSREFS
#endif
/* Set start/end symbol names to the LMA for the section */
#define ARM_LMA(sym, section) \
sym##_start = LOADADDR(section); \
sym##_end = LOADADDR(section) + SIZEOF(section)
#define PROC_INFO \ #define PROC_INFO \
. = ALIGN(4); \ . = ALIGN(4); \
__proc_info_begin = .; \ __proc_info_begin = .; \
@@ -100,19 +113,31 @@
* only thing that matters is their relative offsets * only thing that matters is their relative offsets
*/ */
#define ARM_VECTORS \ #define ARM_VECTORS \
__vectors_start = .; \ __vectors_lma = .; \
.vectors 0xffff0000 : AT(__vectors_start) { \ OVERLAY 0xffff0000 : NOCROSSREFS AT(__vectors_lma) { \
*(.vectors) \ .vectors { \
*(.vectors) \
} \
.vectors.bhb.loop8 { \
*(.vectors.bhb.loop8) \
} \
.vectors.bhb.bpiall { \
*(.vectors.bhb.bpiall) \
} \
} \ } \
. = __vectors_start + SIZEOF(.vectors); \ ARM_LMA(__vectors, .vectors); \
__vectors_end = .; \ ARM_LMA(__vectors_bhb_loop8, .vectors.bhb.loop8); \
ARM_LMA(__vectors_bhb_bpiall, .vectors.bhb.bpiall); \
. = __vectors_lma + SIZEOF(.vectors) + \
SIZEOF(.vectors.bhb.loop8) + \
SIZEOF(.vectors.bhb.bpiall); \
\ \
__stubs_start = .; \ __stubs_lma = .; \
.stubs ADDR(.vectors) + 0x1000 : AT(__stubs_start) { \ .stubs ADDR(.vectors) + 0x1000 : AT(__stubs_lma) { \
*(.stubs) \ *(.stubs) \
} \ } \
. = __stubs_start + SIZEOF(.stubs); \ ARM_LMA(__stubs, .stubs); \
__stubs_end = .; \ . = __stubs_lma + SIZEOF(.stubs); \
\ \
PROVIDE(vector_fiq_offset = vector_fiq - ADDR(.vectors)); PROVIDE(vector_fiq_offset = vector_fiq - ADDR(.vectors));


@@ -823,6 +823,7 @@ config CPU_BPREDICT_DISABLE
config CPU_SPECTRE
	bool
	select GENERIC_CPU_VULNERABILITIES

config HARDEN_BRANCH_PREDICTOR
	bool "Harden the branch predictor against aliasing attacks" if EXPERT
@@ -843,6 +844,16 @@ config HARDEN_BRANCH_PREDICTOR
	  If unsure, say Y.
config HARDEN_BRANCH_HISTORY
bool "Harden Spectre style attacks against branch history" if EXPERT
depends on CPU_SPECTRE
default y
help
Speculation attacks against some high-performance processors can
make use of branch history to influence future speculation. When
taking an exception, a sequence of branches overwrites the branch
history, or branch history is invalidated.
config TLS_REG_EMUL
	bool
	select NEED_KUSER_HELPERS


@@ -7,8 +7,36 @@
#include <asm/cp15.h> #include <asm/cp15.h>
#include <asm/cputype.h> #include <asm/cputype.h>
#include <asm/proc-fns.h> #include <asm/proc-fns.h>
#include <asm/spectre.h>
#include <asm/system_misc.h> #include <asm/system_misc.h>
#ifdef CONFIG_ARM_PSCI
#define SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED 1
static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void)
{
struct arm_smccc_res res;
arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
switch ((int)res.a0) {
case SMCCC_RET_SUCCESS:
return SPECTRE_MITIGATED;
case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED:
return SPECTRE_UNAFFECTED;
default:
return SPECTRE_VULNERABLE;
}
}
#else
static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void)
{
return SPECTRE_VULNERABLE;
}
#endif
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn); DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
@@ -37,13 +65,61 @@ static void __maybe_unused call_hvc_arch_workaround_1(void)
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL); arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
} }
static void cpu_v7_spectre_init(void) static unsigned int spectre_v2_install_workaround(unsigned int method)
{ {
const char *spectre_v2_method = NULL; const char *spectre_v2_method = NULL;
int cpu = smp_processor_id(); int cpu = smp_processor_id();
if (per_cpu(harden_branch_predictor_fn, cpu)) if (per_cpu(harden_branch_predictor_fn, cpu))
return; return SPECTRE_MITIGATED;
switch (method) {
case SPECTRE_V2_METHOD_BPIALL:
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_bpiall;
spectre_v2_method = "BPIALL";
break;
case SPECTRE_V2_METHOD_ICIALLU:
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_iciallu;
spectre_v2_method = "ICIALLU";
break;
case SPECTRE_V2_METHOD_HVC:
per_cpu(harden_branch_predictor_fn, cpu) =
call_hvc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
spectre_v2_method = "hypervisor";
break;
case SPECTRE_V2_METHOD_SMC:
per_cpu(harden_branch_predictor_fn, cpu) =
call_smc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_smc_switch_mm;
spectre_v2_method = "firmware";
break;
}
if (spectre_v2_method)
pr_info("CPU%u: Spectre v2: using %s workaround\n",
smp_processor_id(), spectre_v2_method);
return SPECTRE_MITIGATED;
}
#else
static unsigned int spectre_v2_install_workaround(unsigned int method)
{
pr_info("CPU%u: Spectre V2: workarounds disabled by configuration\n",
smp_processor_id());
return SPECTRE_VULNERABLE;
}
#endif
static void cpu_v7_spectre_v2_init(void)
{
unsigned int state, method = 0;
switch (read_cpuid_part()) { switch (read_cpuid_part()) {
case ARM_CPU_PART_CORTEX_A8: case ARM_CPU_PART_CORTEX_A8:
@@ -52,32 +128,37 @@ static void cpu_v7_spectre_init(void)
case ARM_CPU_PART_CORTEX_A17: case ARM_CPU_PART_CORTEX_A17:
case ARM_CPU_PART_CORTEX_A73: case ARM_CPU_PART_CORTEX_A73:
case ARM_CPU_PART_CORTEX_A75: case ARM_CPU_PART_CORTEX_A75:
per_cpu(harden_branch_predictor_fn, cpu) = state = SPECTRE_MITIGATED;
harden_branch_predictor_bpiall; method = SPECTRE_V2_METHOD_BPIALL;
spectre_v2_method = "BPIALL";
break; break;
case ARM_CPU_PART_CORTEX_A15: case ARM_CPU_PART_CORTEX_A15:
case ARM_CPU_PART_BRAHMA_B15: case ARM_CPU_PART_BRAHMA_B15:
per_cpu(harden_branch_predictor_fn, cpu) = state = SPECTRE_MITIGATED;
harden_branch_predictor_iciallu; method = SPECTRE_V2_METHOD_ICIALLU;
spectre_v2_method = "ICIALLU";
break; break;
#ifdef CONFIG_ARM_PSCI
case ARM_CPU_PART_BRAHMA_B53: case ARM_CPU_PART_BRAHMA_B53:
/* Requires no workaround */ /* Requires no workaround */
state = SPECTRE_UNAFFECTED;
break; break;
default: default:
/* Other ARM CPUs require no workaround */ /* Other ARM CPUs require no workaround */
if (read_cpuid_implementor() == ARM_CPU_IMP_ARM) if (read_cpuid_implementor() == ARM_CPU_IMP_ARM) {
state = SPECTRE_UNAFFECTED;
break; break;
}
/* fallthrough */ /* fallthrough */
/* Cortex A57/A72 require firmware workaround */ /* Cortex A57/A72 require firmware workaround */
case ARM_CPU_PART_CORTEX_A57: case ARM_CPU_PART_CORTEX_A57:
case ARM_CPU_PART_CORTEX_A72: { case ARM_CPU_PART_CORTEX_A72: {
struct arm_smccc_res res; struct arm_smccc_res res;
state = spectre_v2_get_cpu_fw_mitigation_state();
if (state != SPECTRE_MITIGATED)
break;
if (psci_ops.smccc_version == SMCCC_VERSION_1_0) if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
break; break;
@@ -87,10 +168,7 @@ static void cpu_v7_spectre_init(void)
ARM_SMCCC_ARCH_WORKAROUND_1, &res); ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0) if ((int)res.a0 != 0)
break; break;
per_cpu(harden_branch_predictor_fn, cpu) = method = SPECTRE_V2_METHOD_HVC;
call_hvc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
spectre_v2_method = "hypervisor";
break; break;
case PSCI_CONDUIT_SMC: case PSCI_CONDUIT_SMC:
@@ -98,29 +176,97 @@ static void cpu_v7_spectre_init(void)
ARM_SMCCC_ARCH_WORKAROUND_1, &res); ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0) if ((int)res.a0 != 0)
break; break;
per_cpu(harden_branch_predictor_fn, cpu) = method = SPECTRE_V2_METHOD_SMC;
call_smc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_smc_switch_mm;
spectre_v2_method = "firmware";
break; break;
default: default:
state = SPECTRE_VULNERABLE;
break; break;
} }
} }
#endif
} }
if (spectre_v2_method) if (state == SPECTRE_MITIGATED)
pr_info("CPU%u: Spectre v2: using %s workaround\n", state = spectre_v2_install_workaround(method);
smp_processor_id(), spectre_v2_method);
spectre_v2_update_state(state, method);
}
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
static int spectre_bhb_method;
static const char *spectre_bhb_method_name(int method)
{
switch (method) {
case SPECTRE_V2_METHOD_LOOP8:
return "loop";
case SPECTRE_V2_METHOD_BPIALL:
return "BPIALL";
default:
return "unknown";
}
}
static int spectre_bhb_install_workaround(int method)
{
if (spectre_bhb_method != method) {
if (spectre_bhb_method) {
pr_err("CPU%u: Spectre BHB: method disagreement, system vulnerable\n",
smp_processor_id());
return SPECTRE_VULNERABLE;
}
if (spectre_bhb_update_vectors(method) == SPECTRE_VULNERABLE)
return SPECTRE_VULNERABLE;
spectre_bhb_method = method;
}
pr_info("CPU%u: Spectre BHB: using %s workaround\n",
smp_processor_id(), spectre_bhb_method_name(method));
return SPECTRE_MITIGATED;
} }
#else #else
static void cpu_v7_spectre_init(void) static int spectre_bhb_install_workaround(int method)
{ {
return SPECTRE_VULNERABLE;
} }
#endif #endif
static void cpu_v7_spectre_bhb_init(void)
{
unsigned int state, method = 0;
switch (read_cpuid_part()) {
case ARM_CPU_PART_CORTEX_A15:
case ARM_CPU_PART_BRAHMA_B15:
case ARM_CPU_PART_CORTEX_A57:
case ARM_CPU_PART_CORTEX_A72:
state = SPECTRE_MITIGATED;
method = SPECTRE_V2_METHOD_LOOP8;
break;
case ARM_CPU_PART_CORTEX_A73:
case ARM_CPU_PART_CORTEX_A75:
state = SPECTRE_MITIGATED;
method = SPECTRE_V2_METHOD_BPIALL;
break;
default:
state = SPECTRE_UNAFFECTED;
break;
}
if (state == SPECTRE_MITIGATED)
state = spectre_bhb_install_workaround(method);
spectre_v2_update_state(state, method);
}
static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned, static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned,
u32 mask, const char *msg) u32 mask, const char *msg)
{ {
@@ -149,16 +295,17 @@ static bool check_spectre_auxcr(bool *warned, u32 bit)
void cpu_v7_ca8_ibe(void) void cpu_v7_ca8_ibe(void)
{ {
if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6))) if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6)))
cpu_v7_spectre_init(); cpu_v7_spectre_v2_init();
} }
void cpu_v7_ca15_ibe(void) void cpu_v7_ca15_ibe(void)
{ {
if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0))) if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
cpu_v7_spectre_init(); cpu_v7_spectre_v2_init();
} }
void cpu_v7_bugs_init(void) void cpu_v7_bugs_init(void)
{ {
cpu_v7_spectre_init(); cpu_v7_spectre_v2_init();
cpu_v7_spectre_bhb_init();
} }
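
For context, a sketch (illustrative, not part of this patch) of how the per-CPU harden_branch_predictor_fn installed above is typically consumed: the abort/fault path looks up the current CPU's hook and calls it to flush the branch predictor. The wrapper name below is an assumption for illustration.

	static inline void example_harden_branch_predictor(void)
	{
		harden_branch_predictor_fn_t fn;

		/* Called with preemption disabled on the abort path. */
		fn = per_cpu(harden_branch_predictor_fn, smp_processor_id());
		if (fn)
			fn();
	}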


@@ -203,7 +203,7 @@
#define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
#define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
#define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
-#define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
+#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */
#define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
#define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
#define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */


@@ -119,7 +119,7 @@
	ANNOTATE_NOSPEC_ALTERNATIVE
	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *\reg), \
		__stringify(RETPOLINE_JMP \reg), X86_FEATURE_RETPOLINE, \
-		__stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *\reg), X86_FEATURE_RETPOLINE_AMD
+		__stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *\reg), X86_FEATURE_RETPOLINE_LFENCE
#else
	jmp *\reg
#endif
@@ -130,7 +130,7 @@
	ANNOTATE_NOSPEC_ALTERNATIVE
	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *\reg), \
		__stringify(RETPOLINE_CALL \reg), X86_FEATURE_RETPOLINE,\
-		__stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *\reg), X86_FEATURE_RETPOLINE_AMD
+		__stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *\reg), X86_FEATURE_RETPOLINE_LFENCE
#else
	call *\reg
#endif
@@ -181,7 +181,7 @@
	"lfence;\n"					\
	ANNOTATE_RETPOLINE_SAFE				\
	"call *%[thunk_target]\n",			\
-	X86_FEATURE_RETPOLINE_AMD)
+	X86_FEATURE_RETPOLINE_LFENCE)
# define THUNK_TARGET(addr) [thunk_target] "r" (addr)

#else /* CONFIG_X86_32 */
@@ -211,7 +211,7 @@
	"lfence;\n"					\
	ANNOTATE_RETPOLINE_SAFE				\
	"call *%[thunk_target]\n",			\
-	X86_FEATURE_RETPOLINE_AMD)
+	X86_FEATURE_RETPOLINE_LFENCE)
# define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
#endif
@@ -223,9 +223,11 @@
/* The Spectre V2 mitigation variants */
enum spectre_v2_mitigation {
	SPECTRE_V2_NONE,
-	SPECTRE_V2_RETPOLINE_GENERIC,
-	SPECTRE_V2_RETPOLINE_AMD,
-	SPECTRE_V2_IBRS_ENHANCED,
+	SPECTRE_V2_RETPOLINE,
+	SPECTRE_V2_LFENCE,
+	SPECTRE_V2_EIBRS,
+	SPECTRE_V2_EIBRS_RETPOLINE,
+	SPECTRE_V2_EIBRS_LFENCE,
};

/* The indirect branch speculation control variants */


@@ -31,6 +31,7 @@
#include <asm/intel-family.h> #include <asm/intel-family.h>
#include <asm/e820/api.h> #include <asm/e820/api.h>
#include <asm/hypervisor.h> #include <asm/hypervisor.h>
#include <linux/bpf.h>
#include "cpu.h" #include "cpu.h"
@@ -607,6 +608,32 @@ static inline const char *spectre_v2_module_string(void)
static inline const char *spectre_v2_module_string(void) { return ""; } static inline const char *spectre_v2_module_string(void) { return ""; }
#endif #endif
#define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n"
#define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n"
#define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n"
#ifdef CONFIG_BPF_SYSCALL
void unpriv_ebpf_notify(int new_state)
{
if (new_state)
return;
/* Unprivileged eBPF is enabled */
switch (spectre_v2_enabled) {
case SPECTRE_V2_EIBRS:
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
break;
case SPECTRE_V2_EIBRS_LFENCE:
if (sched_smt_active())
pr_err(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
break;
default:
break;
}
}
#endif
static inline bool match_option(const char *arg, int arglen, const char *opt) static inline bool match_option(const char *arg, int arglen, const char *opt)
{ {
int len = strlen(opt); int len = strlen(opt);
@@ -621,7 +648,10 @@ enum spectre_v2_mitigation_cmd {
SPECTRE_V2_CMD_FORCE, SPECTRE_V2_CMD_FORCE,
SPECTRE_V2_CMD_RETPOLINE, SPECTRE_V2_CMD_RETPOLINE,
SPECTRE_V2_CMD_RETPOLINE_GENERIC, SPECTRE_V2_CMD_RETPOLINE_GENERIC,
SPECTRE_V2_CMD_RETPOLINE_AMD, SPECTRE_V2_CMD_RETPOLINE_LFENCE,
SPECTRE_V2_CMD_EIBRS,
SPECTRE_V2_CMD_EIBRS_RETPOLINE,
SPECTRE_V2_CMD_EIBRS_LFENCE,
}; };
enum spectre_v2_user_cmd { enum spectre_v2_user_cmd {
@@ -694,6 +724,13 @@ spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
return SPECTRE_V2_USER_CMD_AUTO; return SPECTRE_V2_USER_CMD_AUTO;
} }
static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
{
return (mode == SPECTRE_V2_EIBRS ||
mode == SPECTRE_V2_EIBRS_RETPOLINE ||
mode == SPECTRE_V2_EIBRS_LFENCE);
}
static void __init static void __init
spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
{ {
@@ -756,10 +793,12 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
} }
/* /*
* If enhanced IBRS is enabled or SMT impossible, STIBP is not * If no STIBP, enhanced IBRS is enabled or SMT impossible, STIBP is not
* required. * required.
*/ */
if (!smt_possible || spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) if (!boot_cpu_has(X86_FEATURE_STIBP) ||
!smt_possible ||
spectre_v2_in_eibrs_mode(spectre_v2_enabled))
return; return;
/* /*
@@ -771,12 +810,6 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON)) boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
mode = SPECTRE_V2_USER_STRICT_PREFERRED; mode = SPECTRE_V2_USER_STRICT_PREFERRED;
/*
* If STIBP is not available, clear the STIBP mode.
*/
if (!boot_cpu_has(X86_FEATURE_STIBP))
mode = SPECTRE_V2_USER_NONE;
spectre_v2_user_stibp = mode; spectre_v2_user_stibp = mode;
set_mode: set_mode:
@@ -785,9 +818,11 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
static const char * const spectre_v2_strings[] = { static const char * const spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable", [SPECTRE_V2_NONE] = "Vulnerable",
[SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline", [SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines",
[SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline", [SPECTRE_V2_LFENCE] = "Mitigation: LFENCE",
[SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS", [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS",
[SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE",
[SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines",
}; };
static const struct { static const struct {
@@ -798,8 +833,12 @@ static const struct {
{ "off", SPECTRE_V2_CMD_NONE, false }, { "off", SPECTRE_V2_CMD_NONE, false },
{ "on", SPECTRE_V2_CMD_FORCE, true }, { "on", SPECTRE_V2_CMD_FORCE, true },
{ "retpoline", SPECTRE_V2_CMD_RETPOLINE, false }, { "retpoline", SPECTRE_V2_CMD_RETPOLINE, false },
{ "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false }, { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false },
{ "retpoline,lfence", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false },
{ "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false }, { "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
{ "eibrs", SPECTRE_V2_CMD_EIBRS, false },
{ "eibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false },
{ "eibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false },
{ "auto", SPECTRE_V2_CMD_AUTO, false }, { "auto", SPECTRE_V2_CMD_AUTO, false },
}; };
@@ -836,16 +875,30 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
} }
if ((cmd == SPECTRE_V2_CMD_RETPOLINE || if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
cmd == SPECTRE_V2_CMD_RETPOLINE_AMD || cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) && cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC ||
cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!IS_ENABLED(CONFIG_RETPOLINE)) { !IS_ENABLED(CONFIG_RETPOLINE)) {
pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option); pr_err("%s selected but not compiled in. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO; return SPECTRE_V2_CMD_AUTO;
} }
if (cmd == SPECTRE_V2_CMD_RETPOLINE_AMD && if ((cmd == SPECTRE_V2_CMD_EIBRS ||
boot_cpu_data.x86_vendor != X86_VENDOR_AMD) { cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
pr_err("retpoline,amd selected but CPU is not AMD. Switching to AUTO select\n"); cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
}
if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_LFENCE) &&
!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO; return SPECTRE_V2_CMD_AUTO;
} }
@@ -854,6 +907,16 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
return cmd; return cmd;
} }
static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void)
{
if (!IS_ENABLED(CONFIG_RETPOLINE)) {
pr_err("Kernel not compiled with retpoline; no mitigation available!");
return SPECTRE_V2_NONE;
}
return SPECTRE_V2_RETPOLINE;
}
static void __init spectre_v2_select_mitigation(void) static void __init spectre_v2_select_mitigation(void)
{ {
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline(); enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -874,48 +937,64 @@ static void __init spectre_v2_select_mitigation(void)
case SPECTRE_V2_CMD_FORCE: case SPECTRE_V2_CMD_FORCE:
case SPECTRE_V2_CMD_AUTO: case SPECTRE_V2_CMD_AUTO:
if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) { if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
mode = SPECTRE_V2_IBRS_ENHANCED; mode = SPECTRE_V2_EIBRS;
/* Force it so VMEXIT will restore correctly */ break;
x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
goto specv2_set_mode;
} }
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_auto; mode = spectre_v2_select_retpoline();
break; break;
case SPECTRE_V2_CMD_RETPOLINE_AMD:
if (IS_ENABLED(CONFIG_RETPOLINE)) case SPECTRE_V2_CMD_RETPOLINE_LFENCE:
goto retpoline_amd; pr_err(SPECTRE_V2_LFENCE_MSG);
mode = SPECTRE_V2_LFENCE;
break; break;
case SPECTRE_V2_CMD_RETPOLINE_GENERIC: case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
if (IS_ENABLED(CONFIG_RETPOLINE)) mode = SPECTRE_V2_RETPOLINE;
goto retpoline_generic;
break; break;
case SPECTRE_V2_CMD_RETPOLINE: case SPECTRE_V2_CMD_RETPOLINE:
if (IS_ENABLED(CONFIG_RETPOLINE)) mode = spectre_v2_select_retpoline();
goto retpoline_auto; break;
case SPECTRE_V2_CMD_EIBRS:
mode = SPECTRE_V2_EIBRS;
break;
case SPECTRE_V2_CMD_EIBRS_LFENCE:
mode = SPECTRE_V2_EIBRS_LFENCE;
break;
case SPECTRE_V2_CMD_EIBRS_RETPOLINE:
mode = SPECTRE_V2_EIBRS_RETPOLINE;
break; break;
} }
pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!");
return;
retpoline_auto: if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) { pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
retpoline_amd:
if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) { if (spectre_v2_in_eibrs_mode(mode)) {
pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n"); /* Force it so VMEXIT will restore correctly */
goto retpoline_generic; x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
} wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
mode = SPECTRE_V2_RETPOLINE_AMD; }
setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);
setup_force_cpu_cap(X86_FEATURE_RETPOLINE); switch (mode) {
} else { case SPECTRE_V2_NONE:
retpoline_generic: case SPECTRE_V2_EIBRS:
mode = SPECTRE_V2_RETPOLINE_GENERIC; break;
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
case SPECTRE_V2_LFENCE:
case SPECTRE_V2_EIBRS_LFENCE:
setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE);
/* fallthrough */
case SPECTRE_V2_RETPOLINE:
case SPECTRE_V2_EIBRS_RETPOLINE:
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
break;
} }
specv2_set_mode:
spectre_v2_enabled = mode; spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]); pr_info("%s\n", spectre_v2_strings[mode]);
@@ -941,7 +1020,7 @@ static void __init spectre_v2_select_mitigation(void)
* the CPU supports Enhanced IBRS, kernel might un-intentionally not * the CPU supports Enhanced IBRS, kernel might un-intentionally not
* enable IBRS around firmware calls. * enable IBRS around firmware calls.
*/ */
if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) { if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW); setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n"); pr_info("Enabling Restricted Speculation for firmware calls\n");
} }
@@ -1011,6 +1090,10 @@ void arch_smt_update(void)
{ {
mutex_lock(&spec_ctrl_mutex); mutex_lock(&spec_ctrl_mutex);
if (sched_smt_active() && unprivileged_ebpf_enabled() &&
spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
switch (spectre_v2_user_stibp) { switch (spectre_v2_user_stibp) {
case SPECTRE_V2_USER_NONE: case SPECTRE_V2_USER_NONE:
break; break;
@@ -1255,7 +1338,6 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE && if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
spectre_v2_user_stibp == SPECTRE_V2_USER_NONE) spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
return 0; return 0;
/* /*
* With strict mode for both IBPB and STIBP, the instruction * With strict mode for both IBPB and STIBP, the instruction
* code paths avoid checking this task flag and instead, * code paths avoid checking this task flag and instead,
@@ -1600,7 +1682,7 @@ static ssize_t tsx_async_abort_show_state(char *buf)
static char *stibp_state(void) static char *stibp_state(void)
{ {
if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
return ""; return "";
switch (spectre_v2_user_stibp) { switch (spectre_v2_user_stibp) {
@@ -1630,6 +1712,27 @@ static char *ibpb_state(void)
return ""; return "";
} }
static ssize_t spectre_v2_show_state(char *buf)
{
if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
return sprintf(buf, "Vulnerable: LFENCE\n");
if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
return sprintf(buf, "Vulnerable: eIBRS with unprivileged eBPF\n");
if (sched_smt_active() && unprivileged_ebpf_enabled() &&
spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
return sprintf(buf, "%s%s%s%s%s%s\n",
spectre_v2_strings[spectre_v2_enabled],
ibpb_state(),
boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
stibp_state(),
boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
spectre_v2_module_string());
}
static ssize_t srbds_show_state(char *buf) static ssize_t srbds_show_state(char *buf)
{ {
return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]); return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
@@ -1655,12 +1758,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]); return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]);
case X86_BUG_SPECTRE_V2: case X86_BUG_SPECTRE_V2:
return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled], return spectre_v2_show_state(buf);
ibpb_state(),
boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
stibp_state(),
boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
spectre_v2_module_string());
case X86_BUG_SPEC_STORE_BYPASS: case X86_BUG_SPEC_STORE_BYPASS:
return sprintf(buf, "%s\n", ssb_strings[ssb_mode]); return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
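
As a summary of the selection logic reworked above, the spectre_v2= strings touched by this series and the mitigation mode each one requests (when its prerequisites are met) can be written as a table. The array below is illustrative only and is not part of the patch; the authoritative mapping is mitigation_options[] plus the switch in spectre_v2_select_mitigation().

	static const struct {
		const char *option;
		enum spectre_v2_mitigation mode;
	} example_spectre_v2_map[] = {
		{ "retpoline,generic", SPECTRE_V2_RETPOLINE       },
		{ "retpoline,lfence",  SPECTRE_V2_LFENCE          },
		{ "retpoline,amd",     SPECTRE_V2_LFENCE          },	/* alias */
		{ "eibrs",             SPECTRE_V2_EIBRS           },
		{ "eibrs,retpoline",   SPECTRE_V2_EIBRS_RETPOLINE },
		{ "eibrs,lfence",      SPECTRE_V2_EIBRS_LFENCE    },
	};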


@@ -1344,7 +1344,8 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
rinfo->ring_ref[i] = GRANT_INVALID_REF; rinfo->ring_ref[i] = GRANT_INVALID_REF;
} }
} }
free_pages((unsigned long)rinfo->ring.sring, get_order(info->nr_ring_pages * XEN_PAGE_SIZE)); free_pages_exact(rinfo->ring.sring,
info->nr_ring_pages * XEN_PAGE_SIZE);
rinfo->ring.sring = NULL; rinfo->ring.sring = NULL;
if (rinfo->irq) if (rinfo->irq)
@@ -1428,9 +1429,15 @@ static int blkif_get_final_status(enum blk_req_status s1,
return BLKIF_RSP_OKAY; return BLKIF_RSP_OKAY;
} }
static bool blkif_completion(unsigned long *id, /*
struct blkfront_ring_info *rinfo, * Return values:
struct blkif_response *bret) * 1 response processed.
* 0 missing further responses.
* -1 error while processing.
*/
static int blkif_completion(unsigned long *id,
struct blkfront_ring_info *rinfo,
struct blkif_response *bret)
{ {
int i = 0; int i = 0;
struct scatterlist *sg; struct scatterlist *sg;
@@ -1453,7 +1460,7 @@ static bool blkif_completion(unsigned long *id,
/* Wait the second response if not yet here. */ /* Wait the second response if not yet here. */
if (s2->status < REQ_DONE) if (s2->status < REQ_DONE)
return false; return 0;
bret->status = blkif_get_final_status(s->status, bret->status = blkif_get_final_status(s->status,
s2->status); s2->status);
@@ -1504,42 +1511,43 @@ static bool blkif_completion(unsigned long *id,
} }
/* Add the persistent grant into the list of free grants */ /* Add the persistent grant into the list of free grants */
for (i = 0; i < num_grant; i++) { for (i = 0; i < num_grant; i++) {
if (gnttab_query_foreign_access(s->grants_used[i]->gref)) { if (!gnttab_try_end_foreign_access(s->grants_used[i]->gref)) {
/* /*
* If the grant is still mapped by the backend (the * If the grant is still mapped by the backend (the
* backend has chosen to make this grant persistent) * backend has chosen to make this grant persistent)
* we add it at the head of the list, so it will be * we add it at the head of the list, so it will be
* reused first. * reused first.
*/ */
if (!info->feature_persistent) if (!info->feature_persistent) {
pr_alert_ratelimited("backed has not unmapped grant: %u\n", pr_alert("backed has not unmapped grant: %u\n",
s->grants_used[i]->gref); s->grants_used[i]->gref);
return -1;
}
list_add(&s->grants_used[i]->node, &rinfo->grants); list_add(&s->grants_used[i]->node, &rinfo->grants);
rinfo->persistent_gnts_c++; rinfo->persistent_gnts_c++;
} else { } else {
/* /*
* If the grant is not mapped by the backend we end the * If the grant is not mapped by the backend we add it
* foreign access and add it to the tail of the list, * to the tail of the list, so it will not be picked
* so it will not be picked again unless we run out of * again unless we run out of persistent grants.
* persistent grants.
*/ */
gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL);
s->grants_used[i]->gref = GRANT_INVALID_REF; s->grants_used[i]->gref = GRANT_INVALID_REF;
list_add_tail(&s->grants_used[i]->node, &rinfo->grants); list_add_tail(&s->grants_used[i]->node, &rinfo->grants);
} }
} }
if (s->req.operation == BLKIF_OP_INDIRECT) { if (s->req.operation == BLKIF_OP_INDIRECT) {
for (i = 0; i < INDIRECT_GREFS(num_grant); i++) { for (i = 0; i < INDIRECT_GREFS(num_grant); i++) {
if (gnttab_query_foreign_access(s->indirect_grants[i]->gref)) { if (!gnttab_try_end_foreign_access(s->indirect_grants[i]->gref)) {
if (!info->feature_persistent) if (!info->feature_persistent) {
pr_alert_ratelimited("backed has not unmapped grant: %u\n", pr_alert("backed has not unmapped grant: %u\n",
s->indirect_grants[i]->gref); s->indirect_grants[i]->gref);
return -1;
}
list_add(&s->indirect_grants[i]->node, &rinfo->grants); list_add(&s->indirect_grants[i]->node, &rinfo->grants);
rinfo->persistent_gnts_c++; rinfo->persistent_gnts_c++;
} else { } else {
struct page *indirect_page; struct page *indirect_page;
gnttab_end_foreign_access(s->indirect_grants[i]->gref, 0, 0UL);
/* /*
* Add the used indirect page back to the list of * Add the used indirect page back to the list of
* available pages for indirect grefs. * available pages for indirect grefs.
@@ -1554,7 +1562,7 @@ static bool blkif_completion(unsigned long *id,
} }
} }
return true; return 1;
} }
static irqreturn_t blkif_interrupt(int irq, void *dev_id) static irqreturn_t blkif_interrupt(int irq, void *dev_id)
@@ -1620,12 +1628,17 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
} }
if (bret.operation != BLKIF_OP_DISCARD) { if (bret.operation != BLKIF_OP_DISCARD) {
int ret;
/* /*
* We may need to wait for an extra response if the * We may need to wait for an extra response if the
* I/O request is split in 2 * I/O request is split in 2
*/ */
if (!blkif_completion(&id, rinfo, &bret)) ret = blkif_completion(&id, rinfo, &bret);
if (!ret)
continue; continue;
if (unlikely(ret < 0))
goto err;
} }
if (add_id_to_freelist(rinfo, id)) { if (add_id_to_freelist(rinfo, id)) {
@@ -1731,8 +1744,7 @@ static int setup_blkring(struct xenbus_device *dev,
for (i = 0; i < info->nr_ring_pages; i++) for (i = 0; i < info->nr_ring_pages; i++)
rinfo->ring_ref[i] = GRANT_INVALID_REF; rinfo->ring_ref[i] = GRANT_INVALID_REF;
sring = (struct blkif_sring *)__get_free_pages(GFP_NOIO | __GFP_HIGH, sring = alloc_pages_exact(ring_size, GFP_NOIO);
get_order(ring_size));
if (!sring) { if (!sring) {
xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring"); xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
return -ENOMEM; return -ENOMEM;
@@ -1742,7 +1754,7 @@ static int setup_blkring(struct xenbus_device *dev,
err = xenbus_grant_ring(dev, rinfo->ring.sring, info->nr_ring_pages, gref); err = xenbus_grant_ring(dev, rinfo->ring.sring, info->nr_ring_pages, gref);
if (err < 0) { if (err < 0) {
free_pages((unsigned long)sring, get_order(ring_size)); free_pages_exact(sring, ring_size);
rinfo->ring.sring = NULL; rinfo->ring.sring = NULL;
goto fail; goto fail;
} }
@@ -2720,11 +2732,10 @@ static void purge_persistent_grants(struct blkfront_info *info)
list_for_each_entry_safe(gnt_list_entry, tmp, &rinfo->grants, list_for_each_entry_safe(gnt_list_entry, tmp, &rinfo->grants,
node) { node) {
if (gnt_list_entry->gref == GRANT_INVALID_REF || if (gnt_list_entry->gref == GRANT_INVALID_REF ||
gnttab_query_foreign_access(gnt_list_entry->gref)) !gnttab_try_end_foreign_access(gnt_list_entry->gref))
continue; continue;
list_del(&gnt_list_entry->node); list_del(&gnt_list_entry->node);
gnttab_end_foreign_access(gnt_list_entry->gref, 0, 0UL);
rinfo->persistent_gnts_c--; rinfo->persistent_gnts_c--;
gnt_list_entry->gref = GRANT_INVALID_REF; gnt_list_entry->gref = GRANT_INVALID_REF;
list_add_tail(&gnt_list_entry->node, &rinfo->grants); list_add_tail(&gnt_list_entry->node, &rinfo->grants);
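
The change above follows one common pattern: the old code first queried whether the backend still had a grant mapped and only then ended foreign access, which left a race window; the new gnttab_try_end_foreign_access() helper performs the check and the revocation in one step. A hedged side-by-side sketch, with illustrative function names:

	#include <xen/grant_table.h>

	static void put_grant_old_way(grant_ref_t gref)	/* racy: two steps */
	{
		if (!gnttab_query_foreign_access(gref))
			gnttab_end_foreign_access(gref, 0, 0UL);
	}

	static void put_grant_new_way(grant_ref_t gref)
	{
		if (gnttab_try_end_foreign_access(gref)) {
			/* Access ended; the grant reference is free again. */
		} else {
			/* Backend still maps it, e.g. as a persistent grant. */
		}
	}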


@@ -64,6 +64,21 @@ struct psci_operations psci_ops = {
	.smccc_version = SMCCC_VERSION_1_0,
};
enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void)
{
if (psci_ops.smccc_version < SMCCC_VERSION_1_1)
return SMCCC_CONDUIT_NONE;
switch (psci_ops.conduit) {
case PSCI_CONDUIT_SMC:
return SMCCC_CONDUIT_SMC;
case PSCI_CONDUIT_HVC:
return SMCCC_CONDUIT_HVC;
default:
return SMCCC_CONDUIT_NONE;
}
}
typedef unsigned long (psci_fn)(unsigned long, unsigned long,
				unsigned long, unsigned long);
static psci_fn *invoke_psci_fn;
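
A hedged sketch of a caller using the new helper: probe the conduit first, then ask firmware whether SMCCC_ARCH_WORKAROUND_1 is implemented. This mirrors the firmware-state check performed by the ARM Spectre code in this series; the wrapper function itself is illustrative.

	#include <linux/arm-smccc.h>

	static bool example_have_arch_workaround_1(void)
	{
		struct arm_smccc_res res;

		if (arm_smccc_1_1_get_conduit() == SMCCC_CONDUIT_NONE)
			return false;

		arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
				     ARM_SMCCC_ARCH_WORKAROUND_1, &res);

		/* Negative a0 means the workaround call is not implemented. */
		return (int)res.a0 >= 0;
	}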


@@ -414,14 +414,12 @@ static bool xennet_tx_buf_gc(struct netfront_queue *queue)
queue->tx_link[id] = TX_LINK_NONE; queue->tx_link[id] = TX_LINK_NONE;
skb = queue->tx_skbs[id]; skb = queue->tx_skbs[id];
queue->tx_skbs[id] = NULL; queue->tx_skbs[id] = NULL;
if (unlikely(gnttab_query_foreign_access( if (unlikely(!gnttab_end_foreign_access_ref(
-                             queue->grant_tx_ref[id]) != 0)) {
+                             queue->grant_tx_ref[id], GNTMAP_readonly))) {
                        dev_alert(dev,
                                  "Grant still in use by backend domain\n");
                        goto err;
                }
-               gnttab_end_foreign_access_ref(
-                       queue->grant_tx_ref[id], GNTMAP_readonly);
                gnttab_release_grant_reference(
                        &queue->gref_tx_head, queue->grant_tx_ref[id]);
                queue->grant_tx_ref[id] = GRANT_INVALID_REF;
@@ -864,7 +862,6 @@ static int xennet_get_responses(struct netfront_queue *queue,
        int max = XEN_NETIF_NR_SLOTS_MIN + (rx->status <= RX_COPY_THRESHOLD);
        int slots = 1;
        int err = 0;
-       unsigned long ret;
 
        if (rx->flags & XEN_NETRXF_extra_info) {
                err = xennet_get_extras(queue, extras, rp);
@@ -895,8 +892,13 @@ static int xennet_get_responses(struct netfront_queue *queue,
                        goto next;
                }
 
-               ret = gnttab_end_foreign_access_ref(ref, 0);
-               BUG_ON(!ret);
+               if (!gnttab_end_foreign_access_ref(ref, 0)) {
+                       dev_alert(dev,
+                                 "Grant still in use by backend domain\n");
+                       queue->info->broken = true;
+                       dev_alert(dev, "Disabled for further use\n");
+                       return -EINVAL;
+               }
 
                gnttab_release_grant_reference(&queue->gref_rx_head, ref);
@@ -1100,6 +1102,10 @@ static int xennet_poll(struct napi_struct *napi, int budget)
                err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
                if (unlikely(err)) {
+                       if (queue->info->broken) {
+                               spin_unlock(&queue->rx_lock);
+                               return 0;
+                       }
 err:
                        while ((skb = __skb_dequeue(&tmpq)))
                                __skb_queue_tail(&errq, skb);
@@ -1678,7 +1684,7 @@ static int setup_netfront(struct xenbus_device *dev,
                          struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
        struct xen_netif_tx_sring *txs;
-       struct xen_netif_rx_sring *rxs;
+       struct xen_netif_rx_sring *rxs = NULL;
        grant_ref_t gref;
        int err;
@@ -1698,21 +1704,21 @@ static int setup_netfront(struct xenbus_device *dev,
        err = xenbus_grant_ring(dev, txs, 1, &gref);
        if (err < 0)
-               goto grant_tx_ring_fail;
+               goto fail;
        queue->tx_ring_ref = gref;
 
        rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
        if (!rxs) {
                err = -ENOMEM;
                xenbus_dev_fatal(dev, err, "allocating rx ring page");
-               goto alloc_rx_ring_fail;
+               goto fail;
        }
        SHARED_RING_INIT(rxs);
        FRONT_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE);
 
        err = xenbus_grant_ring(dev, rxs, 1, &gref);
        if (err < 0)
-               goto grant_rx_ring_fail;
+               goto fail;
        queue->rx_ring_ref = gref;
 
        if (feature_split_evtchn)
@@ -1725,22 +1731,28 @@ static int setup_netfront(struct xenbus_device *dev,
                err = setup_netfront_single(queue);
 
        if (err)
-               goto alloc_evtchn_fail;
+               goto fail;
 
        return 0;
 
        /* If we fail to setup netfront, it is safe to just revoke access to
         * granted pages because backend is not accessing it at this point.
         */
-alloc_evtchn_fail:
-       gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
-grant_rx_ring_fail:
-       free_page((unsigned long)rxs);
-alloc_rx_ring_fail:
-       gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
-grant_tx_ring_fail:
-       free_page((unsigned long)txs);
 fail:
+       if (queue->rx_ring_ref != GRANT_INVALID_REF) {
+               gnttab_end_foreign_access(queue->rx_ring_ref, 0,
+                                         (unsigned long)rxs);
+               queue->rx_ring_ref = GRANT_INVALID_REF;
+       } else {
+               free_page((unsigned long)rxs);
+       }
+       if (queue->tx_ring_ref != GRANT_INVALID_REF) {
+               gnttab_end_foreign_access(queue->tx_ring_ref, 0,
+                                         (unsigned long)txs);
+               queue->tx_ring_ref = GRANT_INVALID_REF;
+       } else {
+               free_page((unsigned long)txs);
+       }
        return err;
 }


@@ -233,12 +233,11 @@ static void scsifront_gnttab_done(struct vscsifrnt_info *info,
                return;
 
        for (i = 0; i < shadow->nr_grants; i++) {
-               if (unlikely(gnttab_query_foreign_access(shadow->gref[i]))) {
+               if (unlikely(!gnttab_try_end_foreign_access(shadow->gref[i]))) {
                        shost_printk(KERN_ALERT, info->host, KBUILD_MODNAME
                                     "grant still in use by backend\n");
                        BUG();
                }
-               gnttab_end_foreign_access(shadow->gref[i], 0, 0UL);
        }
 
        kfree(shadow->sg);


@@ -169,20 +169,14 @@ static int add_grefs(struct ioctl_gntalloc_alloc_gref *op,
                        __del_gref(gref);
        }
 
-       /* It's possible for the target domain to map the just-allocated grant
-        * references by blindly guessing their IDs; if this is done, then
-        * __del_gref will leave them in the queue_gref list. They need to be
-        * added to the global list so that we can free them when they are no
-        * longer referenced.
-        */
-       if (unlikely(!list_empty(&queue_gref)))
-               list_splice_tail(&queue_gref, &gref_list);
        mutex_unlock(&gref_mutex);
        return rc;
 }
 
 static void __del_gref(struct gntalloc_gref *gref)
 {
+       unsigned long addr;
+
        if (gref->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
                uint8_t *tmp = kmap(gref->page);
                tmp[gref->notify.pgoff] = 0;
@@ -196,21 +190,16 @@ static void __del_gref(struct gntalloc_gref *gref)
        gref->notify.flags = 0;
 
        if (gref->gref_id) {
-               if (gnttab_query_foreign_access(gref->gref_id))
-                       return;
-
-               if (!gnttab_end_foreign_access_ref(gref->gref_id, 0))
-                       return;
-
-               gnttab_free_grant_reference(gref->gref_id);
+               if (gref->page) {
+                       addr = (unsigned long)page_to_virt(gref->page);
+                       gnttab_end_foreign_access(gref->gref_id, 0, addr);
+               } else
+                       gnttab_free_grant_reference(gref->gref_id);
        }
 
        gref_size--;
        list_del(&gref->next_gref);
 
-       if (gref->page)
-               __free_page(gref->page);
-
        kfree(gref);
 }


@@ -135,12 +135,9 @@ struct gnttab_ops {
         */
        unsigned long (*end_foreign_transfer_ref)(grant_ref_t ref);
        /*
-        * Query the status of a grant entry. Ref parameter is reference of
-        * queried grant entry, return value is the status of queried entry.
-        * Detailed status(writing/reading) can be gotten from the return value
-        * by bit operations.
+        * Read the frame number related to a given grant reference.
         */
-       int (*query_foreign_access)(grant_ref_t ref);
+       unsigned long (*read_frame)(grant_ref_t ref);
 };
 
 struct unmap_refs_callback_data {
@@ -285,22 +282,6 @@ int gnttab_grant_foreign_access(domid_t domid, unsigned long frame,
 }
 EXPORT_SYMBOL_GPL(gnttab_grant_foreign_access);
 
-static int gnttab_query_foreign_access_v1(grant_ref_t ref)
-{
-       return gnttab_shared.v1[ref].flags & (GTF_reading|GTF_writing);
-}
-
-static int gnttab_query_foreign_access_v2(grant_ref_t ref)
-{
-       return grstatus[ref] & (GTF_reading|GTF_writing);
-}
-
-int gnttab_query_foreign_access(grant_ref_t ref)
-{
-       return gnttab_interface->query_foreign_access(ref);
-}
-EXPORT_SYMBOL_GPL(gnttab_query_foreign_access);
-
 static int gnttab_end_foreign_access_ref_v1(grant_ref_t ref, int readonly)
 {
        u16 flags, nflags;
@@ -354,6 +335,16 @@ int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly)
 }
 EXPORT_SYMBOL_GPL(gnttab_end_foreign_access_ref);
 
+static unsigned long gnttab_read_frame_v1(grant_ref_t ref)
+{
+       return gnttab_shared.v1[ref].frame;
+}
+
+static unsigned long gnttab_read_frame_v2(grant_ref_t ref)
+{
+       return gnttab_shared.v2[ref].full_page.frame;
+}
+
 struct deferred_entry {
        struct list_head list;
        grant_ref_t ref;
@@ -383,12 +374,9 @@ static void gnttab_handle_deferred(struct timer_list *unused)
                spin_unlock_irqrestore(&gnttab_list_lock, flags);
                if (_gnttab_end_foreign_access_ref(entry->ref, entry->ro)) {
                        put_free_entry(entry->ref);
-                       if (entry->page) {
-                               pr_debug("freeing g.e. %#x (pfn %#lx)\n",
-                                        entry->ref, page_to_pfn(entry->page));
-                               put_page(entry->page);
-                       } else
-                               pr_info("freeing g.e. %#x\n", entry->ref);
+                       pr_debug("freeing g.e. %#x (pfn %#lx)\n",
+                                entry->ref, page_to_pfn(entry->page));
+                       put_page(entry->page);
                        kfree(entry);
                        entry = NULL;
                } else {
@@ -413,9 +401,18 @@ static void gnttab_handle_deferred(struct timer_list *unused)
 static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
                                struct page *page)
 {
-       struct deferred_entry *entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
+       struct deferred_entry *entry;
+       gfp_t gfp = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;
        const char *what = KERN_WARNING "leaking";
 
+       entry = kmalloc(sizeof(*entry), gfp);
+       if (!page) {
+               unsigned long gfn = gnttab_interface->read_frame(ref);
+
+               page = pfn_to_page(gfn_to_pfn(gfn));
+               get_page(page);
+       }
+
        if (entry) {
                unsigned long flags;
@@ -436,11 +433,21 @@ static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
                what, ref, page ? page_to_pfn(page) : -1);
 }
 
+int gnttab_try_end_foreign_access(grant_ref_t ref)
+{
+       int ret = _gnttab_end_foreign_access_ref(ref, 0);
+
+       if (ret)
+               put_free_entry(ref);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(gnttab_try_end_foreign_access);
+
 void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
                               unsigned long page)
 {
-       if (gnttab_end_foreign_access_ref(ref, readonly)) {
-               put_free_entry(ref);
+       if (gnttab_try_end_foreign_access(ref)) {
                if (page != 0)
                        put_page(virt_to_page(page));
        } else
@@ -1297,7 +1304,7 @@ static const struct gnttab_ops gnttab_v1_ops = {
        .update_entry                   = gnttab_update_entry_v1,
        .end_foreign_access_ref         = gnttab_end_foreign_access_ref_v1,
        .end_foreign_transfer_ref       = gnttab_end_foreign_transfer_ref_v1,
-       .query_foreign_access           = gnttab_query_foreign_access_v1,
+       .read_frame                     = gnttab_read_frame_v1,
 };
 
 static const struct gnttab_ops gnttab_v2_ops = {
@@ -1309,7 +1316,7 @@ static const struct gnttab_ops gnttab_v2_ops = {
        .update_entry                   = gnttab_update_entry_v2,
        .end_foreign_access_ref         = gnttab_end_foreign_access_ref_v2,
        .end_foreign_transfer_ref       = gnttab_end_foreign_transfer_ref_v2,
-       .query_foreign_access           = gnttab_query_foreign_access_v2,
+       .read_frame                     = gnttab_read_frame_v2,
 };
 
 static bool gnttab_need_v2(void)
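For context, a minimal sketch (not part of this series) of how a frontend can use the new gnttab_try_end_foreign_access() helper when reclaiming a single granted page; the function and parameter names below are illustrative.

/* Illustration only: releasing one granted page with the new helper. */
static void demo_release_granted_page(grant_ref_t gref, struct page *page)
{
        if (gnttab_try_end_foreign_access(gref)) {
                /* Grant ended and reference freed: the page is ours again. */
                __free_page(page);
        } else {
                /*
                 * Backend still maps the page.  Hand it to
                 * gnttab_end_foreign_access(), which defers the free until
                 * the grant is no longer in use.
                 */
                gnttab_end_foreign_access(gref, 0,
                                          (unsigned long)page_address(page));
        }
}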


@@ -346,8 +346,8 @@ static void free_active_ring(struct sock_mapping *map)
        if (!map->active.ring)
                return;
 
-       free_pages((unsigned long)map->active.data.in,
-                  map->active.ring->ring_order);
+       free_pages_exact(map->active.data.in,
+                        PAGE_SIZE << map->active.ring->ring_order);
        free_page((unsigned long)map->active.ring);
 }
 
@@ -361,8 +361,8 @@ static int alloc_active_ring(struct sock_mapping *map)
                goto out;
 
        map->active.ring->ring_order = PVCALLS_RING_ORDER;
-       bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-                                        PVCALLS_RING_ORDER);
+       bytes = alloc_pages_exact(PAGE_SIZE << PVCALLS_RING_ORDER,
+                                 GFP_KERNEL | __GFP_ZERO);
        if (!bytes)
                goto out;


@@ -368,7 +368,14 @@ int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr,
                      unsigned int nr_pages, grant_ref_t *grefs)
 {
        int err;
-       int i, j;
+       unsigned int i;
+       grant_ref_t gref_head;
+
+       err = gnttab_alloc_grant_references(nr_pages, &gref_head);
+       if (err) {
+               xenbus_dev_fatal(dev, err, "granting access to ring page");
+               return err;
+       }
 
        for (i = 0; i < nr_pages; i++) {
                unsigned long gfn;
@@ -378,23 +385,14 @@ int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr,
                else
                        gfn = virt_to_gfn(vaddr);
 
-               err = gnttab_grant_foreign_access(dev->otherend_id, gfn, 0);
-               if (err < 0) {
-                       xenbus_dev_fatal(dev, err,
-                                        "granting access to ring page");
-                       goto fail;
-               }
-               grefs[i] = err;
+               grefs[i] = gnttab_claim_grant_reference(&gref_head);
+               gnttab_grant_foreign_access_ref(grefs[i], dev->otherend_id,
+                                               gfn, 0);
 
                vaddr = vaddr + XEN_PAGE_SIZE;
        }
 
        return 0;
-
- fail:
-       for (j = 0; j < i; j++)
-               gnttab_end_foreign_access_ref(grefs[j], 0);
-       return err;
 }
 EXPORT_SYMBOL_GPL(xenbus_grant_ring);
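A hedged sketch of a typical caller of the reworked xenbus_grant_ring(): a frontend granting a two-page shared ring. The wrapper name and page count are illustrative; per the grant-table comments later in this series, multi-page areas handed to the grant code should come from alloc_pages_exact().

/* Illustration only: granting a two-page ring allocated with alloc_pages_exact(). */
static int demo_grant_shared_ring(struct xenbus_device *dev, void *ring)
{
        grant_ref_t grefs[2];
        int err;

        err = xenbus_grant_ring(dev, ring, 2, grefs);
        if (err < 0)
                return err;     /* xenbus_grant_ring() already reported the error */

        /* grefs[0] and grefs[1] can now be written to xenstore for the backend. */
        return 0;
}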


@@ -89,6 +89,22 @@
 #include <linux/linkage.h>
 #include <linux/types.h>
 
+enum arm_smccc_conduit {
+       SMCCC_CONDUIT_NONE,
+       SMCCC_CONDUIT_SMC,
+       SMCCC_CONDUIT_HVC,
+};
+
+/**
+ * arm_smccc_1_1_get_conduit()
+ *
+ * Returns the conduit to be used for SMCCCv1.1 or later.
+ *
+ * When SMCCCv1.1 is not present, returns SMCCC_CONDUIT_NONE.
+ */
+enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void);
+
 /**
  * struct arm_smccc_res - Result from SMC/HVC call
  * @a0-a3 result values from registers 0 to 3
@@ -311,5 +327,63 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
 #define SMCCC_RET_NOT_SUPPORTED -1
 #define SMCCC_RET_NOT_REQUIRED -2
 
+/*
+ * Like arm_smccc_1_1* but always returns SMCCC_RET_NOT_SUPPORTED.
+ * Used when the SMCCC conduit is not defined. The empty asm statement
+ * avoids compiler warnings about unused variables.
+ */
+#define __fail_smccc_1_1(...)                                          \
+       do {                                                            \
+               __declare_args(__count_args(__VA_ARGS__), __VA_ARGS__); \
+               asm ("" __constraints(__count_args(__VA_ARGS__)));      \
+               if (___res)                                             \
+                       ___res->a0 = SMCCC_RET_NOT_SUPPORTED;           \
+       } while (0)
+
+/*
+ * arm_smccc_1_1_invoke() - make an SMCCC v1.1 compliant call
+ *
+ * This is a variadic macro taking one to eight source arguments, and
+ * an optional return structure.
+ *
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
+ *
+ * This macro will make either an HVC call or an SMC call depending on the
+ * current SMCCC conduit. If no valid conduit is available then -1
+ * (SMCCC_RET_NOT_SUPPORTED) is returned in @res.a0 (if supplied).
+ *
+ * The return value also provides the conduit that was used.
+ */
+#define arm_smccc_1_1_invoke(...) ({                                   \
+               int method = arm_smccc_1_1_get_conduit();               \
+               switch (method) {                                       \
+               case SMCCC_CONDUIT_HVC:                                 \
+                       arm_smccc_1_1_hvc(__VA_ARGS__);                 \
+                       break;                                          \
+               case SMCCC_CONDUIT_SMC:                                 \
+                       arm_smccc_1_1_smc(__VA_ARGS__);                 \
+                       break;                                          \
+               default:                                                \
+                       __fail_smccc_1_1(__VA_ARGS__);                  \
+                       method = SMCCC_CONDUIT_NONE;                    \
+                       break;                                          \
+               }                                                       \
+               method;                                                 \
+       })
+
+/* Paravirtualised time calls (defined by ARM DEN0057A) */
+#define ARM_SMCCC_HV_PV_TIME_FEATURES                          \
+       ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,                 \
+                          ARM_SMCCC_SMC_64,                    \
+                          ARM_SMCCC_OWNER_STANDARD_HYP,        \
+                          0x20)
+
+#define ARM_SMCCC_HV_PV_TIME_ST                                        \
+       ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,                 \
+                          ARM_SMCCC_SMC_64,                    \
+                          ARM_SMCCC_OWNER_STANDARD_HYP,        \
+                          0x21)
+
 #endif /*__ASSEMBLY__*/
 
 #endif /*__LINUX_ARM_SMCCC_H*/
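For illustration, a sketch of how arm_smccc_1_1_get_conduit() and arm_smccc_1_1_invoke() are typically combined to probe a firmware workaround. ARM_SMCCC_ARCH_FEATURES_FUNC_ID and ARM_SMCCC_ARCH_WORKAROUND_1 are existing constants from this header; the wrapper function itself is made up.

/* Illustration only: probing whether firmware implements SMCCC_ARCH_WORKAROUND_1. */
static bool demo_have_arch_workaround_1(void)
{
        struct arm_smccc_res res;

        if (arm_smccc_1_1_get_conduit() == SMCCC_CONDUIT_NONE)
                return false;

        arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                             ARM_SMCCC_ARCH_WORKAROUND_1, &res);

        /* A negative value (e.g. SMCCC_RET_NOT_SUPPORTED) means "not implemented". */
        return (int)res.a0 >= 0;
}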


@@ -533,6 +533,11 @@ static inline int bpf_map_attr_numa_node(const union bpf_attr *attr)
 struct bpf_prog *bpf_prog_get_type_path(const char *name, enum bpf_prog_type type);
 int array_map_alloc_check(union bpf_attr *attr);
 
+static inline bool unprivileged_ebpf_enabled(void)
+{
+       return !sysctl_unprivileged_bpf_disabled;
+}
+
 #else /* !CONFIG_BPF_SYSCALL */
 static inline struct bpf_prog *bpf_prog_get(u32 ufd)
 {
@@ -644,6 +649,12 @@ static inline struct bpf_prog *bpf_prog_get_type_path(const char *name,
 {
        return ERR_PTR(-EOPNOTSUPP);
 }
+
+static inline bool unprivileged_ebpf_enabled(void)
+{
+       return false;
+}
+
 #endif /* CONFIG_BPF_SYSCALL */
 
 static inline struct bpf_prog *bpf_prog_get_type(u32 ufd,
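A hedged sketch of the intended consumer of unprivileged_ebpf_enabled(): vulnerability reporting code that downgrades its status string when unprivileged eBPF is enabled. The function name and messages below are illustrative, not the exact strings used by the arch code.

/* Illustration only: how a vulnerabilities sysfs handler might use the helper. */
static ssize_t demo_spectre_v2_show(char *buf)
{
        if (unprivileged_ebpf_enabled())
                return sprintf(buf, "Vulnerable: unprivileged eBPF enabled\n");

        return sprintf(buf, "Mitigation: enabled\n");
}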


@@ -97,17 +97,32 @@ int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly);
  * access has been ended, free the given page too. Access will be ended
  * immediately iff the grant entry is not in use, otherwise it will happen
  * some time later. page may be 0, in which case no freeing will occur.
+ * Note that the granted page might still be accessed (read or write) by the
+ * other side after gnttab_end_foreign_access() returns, so even if page was
+ * specified as 0 it is not allowed to just reuse the page for other
+ * purposes immediately. gnttab_end_foreign_access() will take an additional
+ * reference to the granted page in this case, which is dropped only after
+ * the grant is no longer in use.
+ * This requires that multi page allocations for areas subject to
+ * gnttab_end_foreign_access() are done via alloc_pages_exact() (and freeing
+ * via free_pages_exact()) in order to avoid high order pages.
  */
 void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
                               unsigned long page);
 
+/*
+ * End access through the given grant reference, iff the grant entry is
+ * no longer in use.  In case of success ending foreign access, the
+ * grant reference is deallocated.
+ * Return 1 if the grant entry was freed, 0 if it is still in use.
+ */
+int gnttab_try_end_foreign_access(grant_ref_t ref);
+
 int gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn);
 
 unsigned long gnttab_end_foreign_transfer_ref(grant_ref_t ref);
 unsigned long gnttab_end_foreign_transfer(grant_ref_t ref);
 
-int gnttab_query_foreign_access(grant_ref_t ref);
-
 /*
  * operations on reserved batches of grant references
  */
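To make the allocation requirement above concrete, a minimal sketch (names made up) of the alloc_pages_exact()/free_pages_exact() pairing expected for multi-page areas that may outlive gnttab_end_foreign_access(); the real conversions are in the pvcalls and 9p hunks of this series.

/* Illustration only: multi-page area that may outlive gnttab_end_foreign_access(). */
#define DEMO_RING_BYTES        (4 * XEN_PAGE_SIZE)

static void *demo_alloc_ring(void)
{
        /* alloc_pages_exact() avoids handing out a high-order compound page. */
        return alloc_pages_exact(DEMO_RING_BYTES, GFP_KERNEL | __GFP_ZERO);
}

static void demo_free_ring(void *ring)
{
        /* Must pair with alloc_pages_exact(); takes a size in bytes, not an order. */
        free_pages_exact(ring, DEMO_RING_BYTES);
}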


@@ -252,6 +252,11 @@ static int sysrq_sysctl_handler(struct ctl_table *table, int write,
 #endif
 
 #ifdef CONFIG_BPF_SYSCALL
+
+void __weak unpriv_ebpf_notify(int new_state)
+{
+}
+
 static int bpf_unpriv_handler(struct ctl_table *table, int write,
                              void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -269,6 +274,9 @@ static int bpf_unpriv_handler(struct ctl_table *table, int write,
                        return -EPERM;
                *(int *)table->data = unpriv_enable;
        }
+
+       unpriv_ebpf_notify(unpriv_enable);
+
        return ret;
 }
 #endif
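A hedged sketch of what an architecture can do with the new weak hook: override it and re-emit its Spectre v2 status when the sysctl changes. The x86 patch in this series does something along these lines; the body below is illustrative.

/* Illustration only: arch-side override of the weak notifier.
 * @new_state is the new value of kernel.unprivileged_bpf_disabled,
 * so 0 means unprivileged eBPF just became available again.
 */
void unpriv_ebpf_notify(int new_state)
{
        if (!new_state)
                pr_warn("Unprivileged eBPF is enabled, data leaks possible via Spectre v2!\n");
}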


@@ -301,9 +301,9 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
                                ref = priv->rings[i].intf->ref[j];
                                gnttab_end_foreign_access(ref, 0, 0);
                        }
-                       free_pages((unsigned long)priv->rings[i].data.in,
-                                  XEN_9PFS_RING_ORDER -
-                                  (PAGE_SHIFT - XEN_PAGE_SHIFT));
+                       free_pages_exact(priv->rings[i].data.in,
+                                        1UL << (XEN_9PFS_RING_ORDER +
+                                                XEN_PAGE_SHIFT));
                }
                gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
                free_page((unsigned long)priv->rings[i].intf);
@@ -341,8 +341,8 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
        if (ret < 0)
                goto out;
        ring->ref = ret;
-       bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-                       XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
+       bytes = alloc_pages_exact(1UL << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT),
+                                 GFP_KERNEL | __GFP_ZERO);
        if (!bytes) {
                ret = -ENOMEM;
                goto out;
@@ -373,9 +373,7 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
        if (bytes) {
                for (i--; i >= 0; i--)
                        gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
-               free_pages((unsigned long)bytes,
-                          XEN_9PFS_RING_ORDER -
-                          (PAGE_SHIFT - XEN_PAGE_SHIFT));
+               free_pages_exact(bytes, 1UL << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT));
        }
        gnttab_end_foreign_access(ring->ref, 0, 0);
        free_page((unsigned long)ring->intf);


@@ -203,7 +203,7 @@
 #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */
 #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */
 #define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
-#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
+#define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCEs for Spectre variant 2 */
 #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
 #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */
 #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */