Merge "Merge LTS tag v4.19.2 into msm-kona"

qctecmdr Service
2018-12-07 09:40:54 -08:00
committed by Gerrit - the friendly Code Review server
388 changed files with 3277 additions and 4064 deletions

View File

@@ -191,21 +191,11 @@ Currently, the following pairs of encryption modes are supported:
- AES-256-XTS for contents and AES-256-CTS-CBC for filenames
- AES-128-CBC for contents and AES-128-CTS-CBC for filenames
- Speck128/256-XTS for contents and Speck128/256-CTS-CBC for filenames
It is strongly recommended to use AES-256-XTS for contents encryption.
AES-128-CBC was added only for low-powered embedded devices with
crypto accelerators such as CAAM or CESA that do not support XTS.
Similarly, Speck128/256 support was only added for older or low-end
CPUs which cannot do AES fast enough -- especially ARM CPUs which have
NEON instructions but not the Cryptography Extensions -- and for which
it would not otherwise be feasible to use encryption at all. It is
not recommended to use Speck on CPUs that have AES instructions.
Speck support is only available if it has been enabled in the crypto
API via CONFIG_CRYPTO_SPECK. Also, on ARM platforms, to get
acceptable performance CONFIG_CRYPTO_SPECK_NEON must be enabled.
New encryption modes can be added relatively easily, without changes
to individual filesystems. However, authenticated encryption (AE)
modes are not currently supported because of the difficulty of dealing
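
To make the recommendation above concrete, here is a minimal user-space sketch that marks a directory for encryption with the AES-256-XTS (contents) and AES-128... rather, AES-256-CTS-CBC (filenames) pair via the v4.19-era fscrypt ioctl. It is an illustration, not part of this patch: the directory path and key descriptor bytes are placeholder assumptions, and a real caller first provisions the master key in the kernel keyring.

/*
 * Sketch: set the recommended fscrypt policy on a directory.
 * Path and key descriptor are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void)
{
	struct fscrypt_policy policy;
	int fd = open("/mnt/encrypted-dir", O_RDONLY);	/* hypothetical path */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(&policy, 0, sizeof(policy));
	policy.version = 0;
	policy.contents_encryption_mode = FS_ENCRYPTION_MODE_AES_256_XTS;
	policy.filenames_encryption_mode = FS_ENCRYPTION_MODE_AES_256_CTS;
	policy.flags = FS_POLICY_FLAGS_PAD_32;
	/* Placeholder descriptor; normally derived from the master key. */
	memcpy(policy.master_key_descriptor, "\x00\x11\x22\x33\x44\x55\x66\x77",
	       FS_KEY_DESCRIPTOR_SIZE);
	if (ioctl(fd, FS_IOC_SET_ENCRYPTION_POLICY, &policy) != 0)
		perror("FS_IOC_SET_ENCRYPTION_POLICY");
	return 0;
}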

View File

@@ -16,10 +16,10 @@ CEC_RECEIVE, CEC_TRANSMIT - Receive or transmit a CEC message
Synopsis
========
.. c:function:: int ioctl( int fd, CEC_RECEIVE, struct cec_msg *argp )
.. c:function:: int ioctl( int fd, CEC_RECEIVE, struct cec_msg \*argp )
:name: CEC_RECEIVE
.. c:function:: int ioctl( int fd, CEC_TRANSMIT, struct cec_msg *argp )
.. c:function:: int ioctl( int fd, CEC_TRANSMIT, struct cec_msg \*argp )
:name: CEC_TRANSMIT
Arguments
@@ -272,6 +272,19 @@ View On' messages from initiator 0xf ('Unregistered') to destination 0 ('TV').
- The transmit failed after one or more retries. This status bit is
mutually exclusive with :ref:`CEC_TX_STATUS_OK <CEC-TX-STATUS-OK>`.
Other bits can still be set to explain which failures were seen.
* .. _`CEC-TX-STATUS-ABORTED`:
- ``CEC_TX_STATUS_ABORTED``
- 0x40
- The transmit was aborted due to an HDMI disconnect, or the adapter
was unconfigured, or a transmit was interrupted, or the driver
returned an error when attempting to start a transmit.
* .. _`CEC-TX-STATUS-TIMEOUT`:
- ``CEC_TX_STATUS_TIMEOUT``
- 0x80
- The transmit timed out. This should not normally happen and this
indicates a driver problem.
.. tabularcolumns:: |p{5.6cm}|p{0.9cm}|p{11.0cm}|
@@ -300,6 +313,14 @@ View On' messages from initiator 0xf ('Unregistered') to destination 0 ('TV').
- The message was received successfully but the reply was
``CEC_MSG_FEATURE_ABORT``. This status is only set if this message
was the reply to an earlier transmitted message.
* .. _`CEC-RX-STATUS-ABORTED`:
- ``CEC_RX_STATUS_ABORTED``
- 0x08
- The wait for a reply to an earlier transmitted message was aborted
because the HDMI cable was disconnected, the adapter was unconfigured
or the :ref:`CEC_TRANSMIT <CEC_RECEIVE>` that waited for a
reply was interrupted.
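
From user space, the new status bits in these two hunks surface in the tx_status and rx_status fields of struct cec_msg after CEC_TRANSMIT. The sketch below is illustrative only: it assumes the CEC_TX_STATUS_ABORTED and CEC_TX_STATUS_TIMEOUT defines added by this series are present in <linux/cec.h>, and /dev/cec0 is a placeholder node. It sends the same 'Image View On' message the documentation uses as its example.

/* Sketch: transmit a CEC message and decode the new abort/timeout bits. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/cec.h>
#include <linux/cec-funcs.h>

int main(void)
{
	struct cec_msg msg;
	int fd = open("/dev/cec0", O_RDWR);	/* hypothetical device node */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Initiator 0xf (Unregistered) to destination 0 (TV), as in the docs. */
	cec_msg_init(&msg, CEC_LOG_ADDR_UNREGISTERED, CEC_LOG_ADDR_TV);
	cec_msg_image_view_on(&msg);
	if (ioctl(fd, CEC_TRANSMIT, &msg) != 0) {
		perror("CEC_TRANSMIT");
		return 1;
	}
	if (msg.tx_status & CEC_TX_STATUS_ABORTED)
		fprintf(stderr, "transmit aborted (disconnect or unconfigure)\n");
	if (msg.tx_status & CEC_TX_STATUS_TIMEOUT)
		fprintf(stderr, "transmit timed out (likely a driver problem)\n");
	return 0;
}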

View File

@@ -226,16 +226,6 @@ xvYCC
:author: International Electrotechnical Commission (http://www.iec.ch)
.. _adobergb:
AdobeRGB
========
:title: Adobe© RGB (1998) Color Image Encoding Version 2005-05
:author: Adobe Systems Incorporated (http://www.adobe.com)
.. _oprgb:
opRGB

View File

@@ -51,8 +51,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
- See :ref:`col-rec709`.
* - ``V4L2_COLORSPACE_SRGB``
- See :ref:`col-srgb`.
* - ``V4L2_COLORSPACE_ADOBERGB``
- See :ref:`col-adobergb`.
* - ``V4L2_COLORSPACE_OPRGB``
- See :ref:`col-oprgb`.
* - ``V4L2_COLORSPACE_BT2020``
- See :ref:`col-bt2020`.
* - ``V4L2_COLORSPACE_DCI_P3``
@@ -90,8 +90,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
- Use the Rec. 709 transfer function.
* - ``V4L2_XFER_FUNC_SRGB``
- Use the sRGB transfer function.
* - ``V4L2_XFER_FUNC_ADOBERGB``
- Use the AdobeRGB transfer function.
* - ``V4L2_XFER_FUNC_OPRGB``
- Use the opRGB transfer function.
* - ``V4L2_XFER_FUNC_SMPTE240M``
- Use the SMPTE 240M transfer function.
* - ``V4L2_XFER_FUNC_NONE``

View File

@@ -290,15 +290,14 @@ Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range
170M/BT.601. The Y'CbCr quantization is limited range.
.. _col-adobergb:
.. _col-oprgb:
Colorspace Adobe RGB (V4L2_COLORSPACE_ADOBERGB)
Colorspace opRGB (V4L2_COLORSPACE_OPRGB)
===============================================
The :ref:`adobergb` standard defines the colorspace used by computer
graphics that use the AdobeRGB colorspace. This is also known as the
:ref:`oprgb` standard. The default transfer function is
``V4L2_XFER_FUNC_ADOBERGB``. The default Y'CbCr encoding is
The :ref:`oprgb` standard defines the colorspace used by computer
graphics that use the opRGB colorspace. The default transfer function is
``V4L2_XFER_FUNC_OPRGB``. The default Y'CbCr encoding is
``V4L2_YCBCR_ENC_601``. The default Y'CbCr quantization is limited
range.
@@ -312,7 +311,7 @@ The chromaticities of the primary colors and the white reference are:
.. tabularcolumns:: |p{4.4cm}|p{4.4cm}|p{8.7cm}|
.. flat-table:: Adobe RGB Chromaticities
.. flat-table:: opRGB Chromaticities
:header-rows: 1
:stub-columns: 0
:widths: 1 1 2

View File

@@ -56,7 +56,8 @@ replace symbol V4L2_MEMORY_USERPTR :c:type:`v4l2_memory`
# Documented enum v4l2_colorspace
replace symbol V4L2_COLORSPACE_470_SYSTEM_BG :c:type:`v4l2_colorspace`
replace symbol V4L2_COLORSPACE_470_SYSTEM_M :c:type:`v4l2_colorspace`
replace symbol V4L2_COLORSPACE_ADOBERGB :c:type:`v4l2_colorspace`
replace symbol V4L2_COLORSPACE_OPRGB :c:type:`v4l2_colorspace`
replace define V4L2_COLORSPACE_ADOBERGB :c:type:`v4l2_colorspace`
replace symbol V4L2_COLORSPACE_BT2020 :c:type:`v4l2_colorspace`
replace symbol V4L2_COLORSPACE_DCI_P3 :c:type:`v4l2_colorspace`
replace symbol V4L2_COLORSPACE_DEFAULT :c:type:`v4l2_colorspace`
@@ -69,7 +70,8 @@ replace symbol V4L2_COLORSPACE_SRGB :c:type:`v4l2_colorspace`
# Documented enum v4l2_xfer_func
replace symbol V4L2_XFER_FUNC_709 :c:type:`v4l2_xfer_func`
replace symbol V4L2_XFER_FUNC_ADOBERGB :c:type:`v4l2_xfer_func`
replace symbol V4L2_XFER_FUNC_OPRGB :c:type:`v4l2_xfer_func`
replace define V4L2_XFER_FUNC_ADOBERGB :c:type:`v4l2_xfer_func`
replace symbol V4L2_XFER_FUNC_DCI_P3 :c:type:`v4l2_xfer_func`
replace symbol V4L2_XFER_FUNC_DEFAULT :c:type:`v4l2_xfer_func`
replace symbol V4L2_XFER_FUNC_NONE :c:type:`v4l2_xfer_func`
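
The 'replace define' entries above reflect that the rename preserves source compatibility: the UAPI header keeps the old AdobeRGB spellings as aliases, presumably as #define V4L2_COLORSPACE_ADOBERGB V4L2_COLORSPACE_OPRGB and #define V4L2_XFER_FUNC_ADOBERGB V4L2_XFER_FUNC_OPRGB. A small sketch of the application-visible effect, under that assumption:

/* Sketch: old and new colorspace spellings compare equal. */
#include <assert.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_pix_format fmt = {
		.colorspace = V4L2_COLORSPACE_OPRGB,	/* new name */
		.xfer_func  = V4L2_XFER_FUNC_OPRGB,
	};

	/* Legacy sources that still use the AdobeRGB names keep compiling. */
	assert(fmt.colorspace == V4L2_COLORSPACE_ADOBERGB);
	assert(fmt.xfer_func == V4L2_XFER_FUNC_ADOBERGB);
	return 0;
}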

View File

@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 19
SUBLEVEL = 1
SUBLEVEL = 2
EXTRAVERSION =
NAME = "People's Front"

View File

@@ -354,7 +354,7 @@
ti,hwmods = "pcie1";
phys = <&pcie1_phy>;
phy-names = "pcie-phy0";
ti,syscon-unaligned-access = <&scm_conf1 0x14 2>;
ti,syscon-unaligned-access = <&scm_conf1 0x14 1>;
status = "disabled";
};
};

View File

@@ -151,6 +151,8 @@
reg = <0x66>;
interrupt-parent = <&gpx0>;
interrupts = <4 IRQ_TYPE_NONE>, <3 IRQ_TYPE_NONE>;
pinctrl-names = "default";
pinctrl-0 = <&max8997_irq>;
max8997,pmic-buck1-dvs-voltage = <1350000>;
max8997,pmic-buck2-dvs-voltage = <1100000>;
@@ -288,6 +290,13 @@
};
};
&pinctrl_1 {
max8997_irq: max8997-irq {
samsung,pins = "gpx0-3", "gpx0-4";
samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
};
};
&sdhci_0 {
bus-width = <4>;
pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_bus4 &sd0_cd>;

View File

@@ -54,62 +54,109 @@
device_type = "cpu";
compatible = "arm,cortex-a15";
reg = <0>;
clock-frequency = <1700000000>;
clocks = <&clock CLK_ARM_CLK>;
clock-names = "cpu";
clock-latency = <140000>;
operating-points = <
1700000 1300000
1600000 1250000
1500000 1225000
1400000 1200000
1300000 1150000
1200000 1125000
1100000 1100000
1000000 1075000
900000 1050000
800000 1025000
700000 1012500
600000 1000000
500000 975000
400000 950000
300000 937500
200000 925000
>;
operating-points-v2 = <&cpu0_opp_table>;
#cooling-cells = <2>; /* min followed by max */
};
cpu@1 {
device_type = "cpu";
compatible = "arm,cortex-a15";
reg = <1>;
clock-frequency = <1700000000>;
clocks = <&clock CLK_ARM_CLK>;
clock-names = "cpu";
clock-latency = <140000>;
operating-points = <
1700000 1300000
1600000 1250000
1500000 1225000
1400000 1200000
1300000 1150000
1200000 1125000
1100000 1100000
1000000 1075000
900000 1050000
800000 1025000
700000 1012500
600000 1000000
500000 975000
400000 950000
300000 937500
200000 925000
>;
operating-points-v2 = <&cpu0_opp_table>;
#cooling-cells = <2>; /* min followed by max */
};
};
cpu0_opp_table: opp_table0 {
compatible = "operating-points-v2";
opp-shared;
opp-200000000 {
opp-hz = /bits/ 64 <200000000>;
opp-microvolt = <925000>;
clock-latency-ns = <140000>;
};
opp-300000000 {
opp-hz = /bits/ 64 <300000000>;
opp-microvolt = <937500>;
clock-latency-ns = <140000>;
};
opp-400000000 {
opp-hz = /bits/ 64 <400000000>;
opp-microvolt = <950000>;
clock-latency-ns = <140000>;
};
opp-500000000 {
opp-hz = /bits/ 64 <500000000>;
opp-microvolt = <975000>;
clock-latency-ns = <140000>;
};
opp-600000000 {
opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <1000000>;
clock-latency-ns = <140000>;
};
opp-700000000 {
opp-hz = /bits/ 64 <700000000>;
opp-microvolt = <1012500>;
clock-latency-ns = <140000>;
};
opp-800000000 {
opp-hz = /bits/ 64 <800000000>;
opp-microvolt = <1025000>;
clock-latency-ns = <140000>;
};
opp-900000000 {
opp-hz = /bits/ 64 <900000000>;
opp-microvolt = <1050000>;
clock-latency-ns = <140000>;
};
opp-1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <1075000>;
clock-latency-ns = <140000>;
opp-suspend;
};
opp-1100000000 {
opp-hz = /bits/ 64 <1100000000>;
opp-microvolt = <1100000>;
clock-latency-ns = <140000>;
};
opp-1200000000 {
opp-hz = /bits/ 64 <1200000000>;
opp-microvolt = <1125000>;
clock-latency-ns = <140000>;
};
opp-1300000000 {
opp-hz = /bits/ 64 <1300000000>;
opp-microvolt = <1150000>;
clock-latency-ns = <140000>;
};
opp-1400000000 {
opp-hz = /bits/ 64 <1400000000>;
opp-microvolt = <1200000>;
clock-latency-ns = <140000>;
};
opp-1500000000 {
opp-hz = /bits/ 64 <1500000000>;
opp-microvolt = <1225000>;
clock-latency-ns = <140000>;
};
opp-1600000000 {
opp-hz = /bits/ 64 <1600000000>;
opp-microvolt = <1250000>;
clock-latency-ns = <140000>;
};
opp-1700000000 {
opp-hz = /bits/ 64 <1700000000>;
opp-microvolt = <1300000>;
clock-latency-ns = <140000>;
};
};
soc: soc {
sysram@2020000 {
compatible = "mmio-sram";

View File

@@ -613,7 +613,7 @@
status = "disabled";
};
sdr: sdr@ffc25000 {
sdr: sdr@ffcfb100 {
compatible = "altr,sdr-ctl", "syscon";
reg = <0xffcfb100 0x80>;
};

View File

@@ -121,10 +121,4 @@ config CRYPTO_CHACHA20_NEON
select CRYPTO_BLKCIPHER
select CRYPTO_CHACHA20
config CRYPTO_SPECK_NEON
tristate "NEON accelerated Speck cipher algorithms"
depends on KERNEL_MODE_NEON
select CRYPTO_BLKCIPHER
select CRYPTO_SPECK
endif

View File

@@ -10,7 +10,6 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
@@ -54,7 +53,6 @@ ghash-arm-ce-y := ghash-ce-core.o ghash-ce-glue.o
crct10dif-arm-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o
crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
speck-neon-y := speck-neon-core.o speck-neon-glue.o
ifdef REGENERATE_ARM_CRYPTO
quiet_cmd_perl = PERL $@

View File

@@ -1,434 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
*
* Copyright (c) 2018 Google, Inc
*
* Author: Eric Biggers <ebiggers@google.com>
*/
#include <linux/linkage.h>
.text
.fpu neon
// arguments
ROUND_KEYS .req r0 // const {u64,u32} *round_keys
NROUNDS .req r1 // int nrounds
DST .req r2 // void *dst
SRC .req r3 // const void *src
NBYTES .req r4 // unsigned int nbytes
TWEAK .req r5 // void *tweak
// registers which hold the data being encrypted/decrypted
X0 .req q0
X0_L .req d0
X0_H .req d1
Y0 .req q1
Y0_H .req d3
X1 .req q2
X1_L .req d4
X1_H .req d5
Y1 .req q3
Y1_H .req d7
X2 .req q4
X2_L .req d8
X2_H .req d9
Y2 .req q5
Y2_H .req d11
X3 .req q6
X3_L .req d12
X3_H .req d13
Y3 .req q7
Y3_H .req d15
// the round key, duplicated in all lanes
ROUND_KEY .req q8
ROUND_KEY_L .req d16
ROUND_KEY_H .req d17
// index vector for vtbl-based 8-bit rotates
ROTATE_TABLE .req d18
// multiplication table for updating XTS tweaks
GF128MUL_TABLE .req d19
GF64MUL_TABLE .req d19
// current XTS tweak value(s)
TWEAKV .req q10
TWEAKV_L .req d20
TWEAKV_H .req d21
TMP0 .req q12
TMP0_L .req d24
TMP0_H .req d25
TMP1 .req q13
TMP2 .req q14
TMP3 .req q15
.align 4
.Lror64_8_table:
.byte 1, 2, 3, 4, 5, 6, 7, 0
.Lror32_8_table:
.byte 1, 2, 3, 0, 5, 6, 7, 4
.Lrol64_8_table:
.byte 7, 0, 1, 2, 3, 4, 5, 6
.Lrol32_8_table:
.byte 3, 0, 1, 2, 7, 4, 5, 6
.Lgf128mul_table:
.byte 0, 0x87
.fill 14
.Lgf64mul_table:
.byte 0, 0x1b, (0x1b << 1), (0x1b << 1) ^ 0x1b
.fill 12
/*
* _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
*
* Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
* Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
* of ROUND_KEY. 'n' is the lane size: 64 for Speck128, or 32 for Speck64.
*
* The 8-bit rotates are implemented using vtbl instead of vshr + vsli because
* the vtbl approach is faster on some processors and the same speed on others.
*/
.macro _speck_round_128bytes n
// x = ror(x, 8)
vtbl.8 X0_L, {X0_L}, ROTATE_TABLE
vtbl.8 X0_H, {X0_H}, ROTATE_TABLE
vtbl.8 X1_L, {X1_L}, ROTATE_TABLE
vtbl.8 X1_H, {X1_H}, ROTATE_TABLE
vtbl.8 X2_L, {X2_L}, ROTATE_TABLE
vtbl.8 X2_H, {X2_H}, ROTATE_TABLE
vtbl.8 X3_L, {X3_L}, ROTATE_TABLE
vtbl.8 X3_H, {X3_H}, ROTATE_TABLE
// x += y
vadd.u\n X0, Y0
vadd.u\n X1, Y1
vadd.u\n X2, Y2
vadd.u\n X3, Y3
// x ^= k
veor X0, ROUND_KEY
veor X1, ROUND_KEY
veor X2, ROUND_KEY
veor X3, ROUND_KEY
// y = rol(y, 3)
vshl.u\n TMP0, Y0, #3
vshl.u\n TMP1, Y1, #3
vshl.u\n TMP2, Y2, #3
vshl.u\n TMP3, Y3, #3
vsri.u\n TMP0, Y0, #(\n - 3)
vsri.u\n TMP1, Y1, #(\n - 3)
vsri.u\n TMP2, Y2, #(\n - 3)
vsri.u\n TMP3, Y3, #(\n - 3)
// y ^= x
veor Y0, TMP0, X0
veor Y1, TMP1, X1
veor Y2, TMP2, X2
veor Y3, TMP3, X3
.endm
/*
* _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
*
* This is the inverse of _speck_round_128bytes().
*/
.macro _speck_unround_128bytes n
// y ^= x
veor TMP0, Y0, X0
veor TMP1, Y1, X1
veor TMP2, Y2, X2
veor TMP3, Y3, X3
// y = ror(y, 3)
vshr.u\n Y0, TMP0, #3
vshr.u\n Y1, TMP1, #3
vshr.u\n Y2, TMP2, #3
vshr.u\n Y3, TMP3, #3
vsli.u\n Y0, TMP0, #(\n - 3)
vsli.u\n Y1, TMP1, #(\n - 3)
vsli.u\n Y2, TMP2, #(\n - 3)
vsli.u\n Y3, TMP3, #(\n - 3)
// x ^= k
veor X0, ROUND_KEY
veor X1, ROUND_KEY
veor X2, ROUND_KEY
veor X3, ROUND_KEY
// x -= y
vsub.u\n X0, Y0
vsub.u\n X1, Y1
vsub.u\n X2, Y2
vsub.u\n X3, Y3
// x = rol(x, 8);
vtbl.8 X0_L, {X0_L}, ROTATE_TABLE
vtbl.8 X0_H, {X0_H}, ROTATE_TABLE
vtbl.8 X1_L, {X1_L}, ROTATE_TABLE
vtbl.8 X1_H, {X1_H}, ROTATE_TABLE
vtbl.8 X2_L, {X2_L}, ROTATE_TABLE
vtbl.8 X2_H, {X2_H}, ROTATE_TABLE
vtbl.8 X3_L, {X3_L}, ROTATE_TABLE
vtbl.8 X3_H, {X3_H}, ROTATE_TABLE
.endm
.macro _xts128_precrypt_one dst_reg, tweak_buf, tmp
// Load the next source block
vld1.8 {\dst_reg}, [SRC]!
// Save the current tweak in the tweak buffer
vst1.8 {TWEAKV}, [\tweak_buf:128]!
// XOR the next source block with the current tweak
veor \dst_reg, TWEAKV
/*
* Calculate the next tweak by multiplying the current one by x,
* modulo p(x) = x^128 + x^7 + x^2 + x + 1.
*/
vshr.u64 \tmp, TWEAKV, #63
vshl.u64 TWEAKV, #1
veor TWEAKV_H, \tmp\()_L
vtbl.8 \tmp\()_H, {GF128MUL_TABLE}, \tmp\()_H
veor TWEAKV_L, \tmp\()_H
.endm
.macro _xts64_precrypt_two dst_reg, tweak_buf, tmp
// Load the next two source blocks
vld1.8 {\dst_reg}, [SRC]!
// Save the current two tweaks in the tweak buffer
vst1.8 {TWEAKV}, [\tweak_buf:128]!
// XOR the next two source blocks with the current two tweaks
veor \dst_reg, TWEAKV
/*
* Calculate the next two tweaks by multiplying the current ones by x^2,
* modulo p(x) = x^64 + x^4 + x^3 + x + 1.
*/
vshr.u64 \tmp, TWEAKV, #62
vshl.u64 TWEAKV, #2
vtbl.8 \tmp\()_L, {GF64MUL_TABLE}, \tmp\()_L
vtbl.8 \tmp\()_H, {GF64MUL_TABLE}, \tmp\()_H
veor TWEAKV, \tmp
.endm
/*
* _speck_xts_crypt() - Speck-XTS encryption/decryption
*
* Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
* using Speck-XTS, specifically the variant with a block size of '2n' and round
* count given by NROUNDS. The expanded round keys are given in ROUND_KEYS, and
* the current XTS tweak value is given in TWEAK. It's assumed that NBYTES is a
* nonzero multiple of 128.
*/
.macro _speck_xts_crypt n, decrypting
push {r4-r7}
mov r7, sp
/*
* The first four parameters were passed in registers r0-r3. Load the
* additional parameters, which were passed on the stack.
*/
ldr NBYTES, [sp, #16]
ldr TWEAK, [sp, #20]
/*
* If decrypting, modify the ROUND_KEYS parameter to point to the last
* round key rather than the first, since for decryption the round keys
* are used in reverse order.
*/
.if \decrypting
.if \n == 64
add ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #3
sub ROUND_KEYS, #8
.else
add ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #2
sub ROUND_KEYS, #4
.endif
.endif
// Load the index vector for vtbl-based 8-bit rotates
.if \decrypting
ldr r12, =.Lrol\n\()_8_table
.else
ldr r12, =.Lror\n\()_8_table
.endif
vld1.8 {ROTATE_TABLE}, [r12:64]
// One-time XTS preparation
/*
* Allocate stack space to store 128 bytes worth of tweaks. For
* performance, this space is aligned to a 16-byte boundary so that we
* can use the load/store instructions that declare 16-byte alignment.
* For Thumb2 compatibility, don't do the 'bic' directly on 'sp'.
*/
sub r12, sp, #128
bic r12, #0xf
mov sp, r12
.if \n == 64
// Load first tweak
vld1.8 {TWEAKV}, [TWEAK]
// Load GF(2^128) multiplication table
ldr r12, =.Lgf128mul_table
vld1.8 {GF128MUL_TABLE}, [r12:64]
.else
// Load first tweak
vld1.8 {TWEAKV_L}, [TWEAK]
// Load GF(2^64) multiplication table
ldr r12, =.Lgf64mul_table
vld1.8 {GF64MUL_TABLE}, [r12:64]
// Calculate second tweak, packing it together with the first
vshr.u64 TMP0_L, TWEAKV_L, #63
vtbl.8 TMP0_L, {GF64MUL_TABLE}, TMP0_L
vshl.u64 TWEAKV_H, TWEAKV_L, #1
veor TWEAKV_H, TMP0_L
.endif
.Lnext_128bytes_\@:
/*
* Load the source blocks into {X,Y}[0-3], XOR them with their XTS tweak
* values, and save the tweaks on the stack for later. Then
* de-interleave the 'x' and 'y' elements of each block, i.e. make it so
* that the X[0-3] registers contain only the second halves of blocks,
* and the Y[0-3] registers contain only the first halves of blocks.
* (Speck uses the order (y, x) rather than the more intuitive (x, y).)
*/
mov r12, sp
.if \n == 64
_xts128_precrypt_one X0, r12, TMP0
_xts128_precrypt_one Y0, r12, TMP0
_xts128_precrypt_one X1, r12, TMP0
_xts128_precrypt_one Y1, r12, TMP0
_xts128_precrypt_one X2, r12, TMP0
_xts128_precrypt_one Y2, r12, TMP0
_xts128_precrypt_one X3, r12, TMP0
_xts128_precrypt_one Y3, r12, TMP0
vswp X0_L, Y0_H
vswp X1_L, Y1_H
vswp X2_L, Y2_H
vswp X3_L, Y3_H
.else
_xts64_precrypt_two X0, r12, TMP0
_xts64_precrypt_two Y0, r12, TMP0
_xts64_precrypt_two X1, r12, TMP0
_xts64_precrypt_two Y1, r12, TMP0
_xts64_precrypt_two X2, r12, TMP0
_xts64_precrypt_two Y2, r12, TMP0
_xts64_precrypt_two X3, r12, TMP0
_xts64_precrypt_two Y3, r12, TMP0
vuzp.32 Y0, X0
vuzp.32 Y1, X1
vuzp.32 Y2, X2
vuzp.32 Y3, X3
.endif
// Do the cipher rounds
mov r12, ROUND_KEYS
mov r6, NROUNDS
.Lnext_round_\@:
.if \decrypting
.if \n == 64
vld1.64 ROUND_KEY_L, [r12]
sub r12, #8
vmov ROUND_KEY_H, ROUND_KEY_L
.else
vld1.32 {ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]
sub r12, #4
.endif
_speck_unround_128bytes \n
.else
.if \n == 64
vld1.64 ROUND_KEY_L, [r12]!
vmov ROUND_KEY_H, ROUND_KEY_L
.else
vld1.32 {ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]!
.endif
_speck_round_128bytes \n
.endif
subs r6, r6, #1
bne .Lnext_round_\@
// Re-interleave the 'x' and 'y' elements of each block
.if \n == 64
vswp X0_L, Y0_H
vswp X1_L, Y1_H
vswp X2_L, Y2_H
vswp X3_L, Y3_H
.else
vzip.32 Y0, X0
vzip.32 Y1, X1
vzip.32 Y2, X2
vzip.32 Y3, X3
.endif
// XOR the encrypted/decrypted blocks with the tweaks we saved earlier
mov r12, sp
vld1.8 {TMP0, TMP1}, [r12:128]!
vld1.8 {TMP2, TMP3}, [r12:128]!
veor X0, TMP0
veor Y0, TMP1
veor X1, TMP2
veor Y1, TMP3
vld1.8 {TMP0, TMP1}, [r12:128]!
vld1.8 {TMP2, TMP3}, [r12:128]!
veor X2, TMP0
veor Y2, TMP1
veor X3, TMP2
veor Y3, TMP3
// Store the ciphertext in the destination buffer
vst1.8 {X0, Y0}, [DST]!
vst1.8 {X1, Y1}, [DST]!
vst1.8 {X2, Y2}, [DST]!
vst1.8 {X3, Y3}, [DST]!
// Continue if there are more 128-byte chunks remaining, else return
subs NBYTES, #128
bne .Lnext_128bytes_\@
// Store the next tweak
.if \n == 64
vst1.8 {TWEAKV}, [TWEAK]
.else
vst1.8 {TWEAKV_L}, [TWEAK]
.endif
mov sp, r7
pop {r4-r7}
bx lr
.endm
ENTRY(speck128_xts_encrypt_neon)
_speck_xts_crypt n=64, decrypting=0
ENDPROC(speck128_xts_encrypt_neon)
ENTRY(speck128_xts_decrypt_neon)
_speck_xts_crypt n=64, decrypting=1
ENDPROC(speck128_xts_decrypt_neon)
ENTRY(speck64_xts_encrypt_neon)
_speck_xts_crypt n=32, decrypting=0
ENDPROC(speck64_xts_encrypt_neon)
ENTRY(speck64_xts_decrypt_neon)
_speck_xts_crypt n=32, decrypting=1
ENDPROC(speck64_xts_decrypt_neon)
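
For reference, the per-block operation that _speck_round_128bytes vectorizes above is short enough to state in plain C. This is a sketch of one standard Speck128 encryption round (rotate amounts 8 and 3), matching the assembly's inline comments; it handles a single block rather than eight at a time.

/* Sketch: scalar Speck128 round, x = ror(x,8); x += y; x ^= k; y = rol(y,3); y ^= x. */
#include <stdint.h>

static inline uint64_t ror64(uint64_t v, unsigned int n)
{
	return (v >> n) | (v << (64 - n));
}

static inline uint64_t rol64(uint64_t v, unsigned int n)
{
	return (v << n) | (v >> (64 - n));
}

static void speck128_round(uint64_t *x, uint64_t *y, uint64_t k)
{
	*x = ror64(*x, 8);
	*x += *y;
	*x ^= k;
	*y = rol64(*y, 3);
	*y ^= *x;
}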

View File

@@ -1,288 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
*
* Copyright (c) 2018 Google, Inc
*
* Note: the NIST recommendation for XTS only specifies a 128-bit block size,
* but a 64-bit version (needed for Speck64) is fairly straightforward; the math
* is just done in GF(2^64) instead of GF(2^128), with the reducing polynomial
* x^64 + x^4 + x^3 + x + 1 from the original XEX paper (Rogaway, 2004:
* "Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes
* OCB and PMAC"), represented as 0x1B.
*/
#include <asm/hwcap.h>
#include <asm/neon.h>
#include <asm/simd.h>
#include <crypto/algapi.h>
#include <crypto/gf128mul.h>
#include <crypto/internal/skcipher.h>
#include <crypto/speck.h>
#include <crypto/xts.h>
#include <linux/kernel.h>
#include <linux/module.h>
/* The assembly functions only handle multiples of 128 bytes */
#define SPECK_NEON_CHUNK_SIZE 128
/* Speck128 */
struct speck128_xts_tfm_ctx {
struct speck128_tfm_ctx main_key;
struct speck128_tfm_ctx tweak_key;
};
asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
void *dst, const void *src,
unsigned int nbytes, void *tweak);
asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
void *dst, const void *src,
unsigned int nbytes, void *tweak);
typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
u8 *, const u8 *);
typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
const void *, unsigned int, void *);
static __always_inline int
__speck128_xts_crypt(struct skcipher_request *req,
speck128_crypt_one_t crypt_one,
speck128_xts_crypt_many_t crypt_many)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_walk walk;
le128 tweak;
int err;
err = skcipher_walk_virt(&walk, req, true);
crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
u8 *dst = walk.dst.virt.addr;
const u8 *src = walk.src.virt.addr;
if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
unsigned int count;
count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
kernel_neon_begin();
(*crypt_many)(ctx->main_key.round_keys,
ctx->main_key.nrounds,
dst, src, count, &tweak);
kernel_neon_end();
dst += count;
src += count;
nbytes -= count;
}
/* Handle any remainder with generic code */
while (nbytes >= sizeof(tweak)) {
le128_xor((le128 *)dst, (const le128 *)src, &tweak);
(*crypt_one)(&ctx->main_key, dst, dst);
le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
gf128mul_x_ble(&tweak, &tweak);
dst += sizeof(tweak);
src += sizeof(tweak);
nbytes -= sizeof(tweak);
}
err = skcipher_walk_done(&walk, nbytes);
}
return err;
}
static int speck128_xts_encrypt(struct skcipher_request *req)
{
return __speck128_xts_crypt(req, crypto_speck128_encrypt,
speck128_xts_encrypt_neon);
}
static int speck128_xts_decrypt(struct skcipher_request *req)
{
return __speck128_xts_crypt(req, crypto_speck128_decrypt,
speck128_xts_decrypt_neon);
}
static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int keylen)
{
struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
int err;
err = xts_verify_key(tfm, key, keylen);
if (err)
return err;
keylen /= 2;
err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
if (err)
return err;
return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
}
/* Speck64 */
struct speck64_xts_tfm_ctx {
struct speck64_tfm_ctx main_key;
struct speck64_tfm_ctx tweak_key;
};
asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
void *dst, const void *src,
unsigned int nbytes, void *tweak);
asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
void *dst, const void *src,
unsigned int nbytes, void *tweak);
typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
u8 *, const u8 *);
typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
const void *, unsigned int, void *);
static __always_inline int
__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
speck64_xts_crypt_many_t crypt_many)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_walk walk;
__le64 tweak;
int err;
err = skcipher_walk_virt(&walk, req, true);
crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
u8 *dst = walk.dst.virt.addr;
const u8 *src = walk.src.virt.addr;
if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
unsigned int count;
count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
kernel_neon_begin();
(*crypt_many)(ctx->main_key.round_keys,
ctx->main_key.nrounds,
dst, src, count, &tweak);
kernel_neon_end();
dst += count;
src += count;
nbytes -= count;
}
/* Handle any remainder with generic code */
while (nbytes >= sizeof(tweak)) {
*(__le64 *)dst = *(__le64 *)src ^ tweak;
(*crypt_one)(&ctx->main_key, dst, dst);
*(__le64 *)dst ^= tweak;
tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^
((tweak & cpu_to_le64(1ULL << 63)) ?
0x1B : 0));
dst += sizeof(tweak);
src += sizeof(tweak);
nbytes -= sizeof(tweak);
}
err = skcipher_walk_done(&walk, nbytes);
}
return err;
}
static int speck64_xts_encrypt(struct skcipher_request *req)
{
return __speck64_xts_crypt(req, crypto_speck64_encrypt,
speck64_xts_encrypt_neon);
}
static int speck64_xts_decrypt(struct skcipher_request *req)
{
return __speck64_xts_crypt(req, crypto_speck64_decrypt,
speck64_xts_decrypt_neon);
}
static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int keylen)
{
struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
int err;
err = xts_verify_key(tfm, key, keylen);
if (err)
return err;
keylen /= 2;
err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
if (err)
return err;
return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
}
static struct skcipher_alg speck_algs[] = {
{
.base.cra_name = "xts(speck128)",
.base.cra_driver_name = "xts-speck128-neon",
.base.cra_priority = 300,
.base.cra_blocksize = SPECK128_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct speck128_xts_tfm_ctx),
.base.cra_alignmask = 7,
.base.cra_module = THIS_MODULE,
.min_keysize = 2 * SPECK128_128_KEY_SIZE,
.max_keysize = 2 * SPECK128_256_KEY_SIZE,
.ivsize = SPECK128_BLOCK_SIZE,
.walksize = SPECK_NEON_CHUNK_SIZE,
.setkey = speck128_xts_setkey,
.encrypt = speck128_xts_encrypt,
.decrypt = speck128_xts_decrypt,
}, {
.base.cra_name = "xts(speck64)",
.base.cra_driver_name = "xts-speck64-neon",
.base.cra_priority = 300,
.base.cra_blocksize = SPECK64_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct speck64_xts_tfm_ctx),
.base.cra_alignmask = 7,
.base.cra_module = THIS_MODULE,
.min_keysize = 2 * SPECK64_96_KEY_SIZE,
.max_keysize = 2 * SPECK64_128_KEY_SIZE,
.ivsize = SPECK64_BLOCK_SIZE,
.walksize = SPECK_NEON_CHUNK_SIZE,
.setkey = speck64_xts_setkey,
.encrypt = speck64_xts_encrypt,
.decrypt = speck64_xts_decrypt,
}
};
static int __init speck_neon_module_init(void)
{
if (!(elf_hwcap & HWCAP_NEON))
return -ENODEV;
return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
}
static void __exit speck_neon_module_exit(void)
{
crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
}
module_init(speck_neon_module_init);
module_exit(speck_neon_module_exit);
MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
MODULE_ALIAS_CRYPTO("xts(speck128)");
MODULE_ALIAS_CRYPTO("xts-speck128-neon");
MODULE_ALIAS_CRYPTO("xts(speck64)");
MODULE_ALIAS_CRYPTO("xts-speck64-neon");
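
In the remainder loops above, the XTS tweak advances by multiplication by x in the relevant field: gf128mul_x_ble() from <crypto/gf128mul.h> performs the GF(2^128) step, and the Speck64 path inlines the GF(2^64) step with the 0x1B reducing polynomial described in the header comment. A scalar sketch of that 64-bit update, setting aside the little-endian handling of the real code:

/* Sketch: multiply a GF(2^64) tweak by x, reducing by x^64 + x^4 + x^3 + x + 1. */
#include <stdint.h>

static uint64_t gf64_mul_x(uint64_t tweak)
{
	uint64_t carry = tweak >> 63;	/* coefficient of x^63 */

	return (tweak << 1) ^ (carry ? 0x1B : 0);
}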

View File

@@ -335,7 +335,7 @@
sysmgr: sysmgr@ffd12000 {
compatible = "altr,sys-mgr", "syscon";
reg = <0xffd12000 0x1000>;
reg = <0xffd12000 0x228>;
};
/* Local timer */

View File

@@ -119,10 +119,4 @@ config CRYPTO_AES_ARM64_BS
select CRYPTO_AES_ARM64
select CRYPTO_SIMD
config CRYPTO_SPECK_NEON
tristate "NEON accelerated Speck cipher algorithms"
depends on KERNEL_MODE_NEON
select CRYPTO_BLKCIPHER
select CRYPTO_SPECK
endif

View File

@@ -56,9 +56,6 @@ sha512-arm64-y := sha512-glue.o sha512-core.o
obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
speck-neon-y := speck-neon-core.o speck-neon-glue.o
obj-$(CONFIG_CRYPTO_AES_ARM64) += aes-arm64.o
aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o

View File

@@ -1,352 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* ARM64 NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
*
* Copyright (c) 2018 Google, Inc
*
* Author: Eric Biggers <ebiggers@google.com>
*/
#include <linux/linkage.h>
.text
// arguments
ROUND_KEYS .req x0 // const {u64,u32} *round_keys
NROUNDS .req w1 // int nrounds
NROUNDS_X .req x1
DST .req x2 // void *dst
SRC .req x3 // const void *src
NBYTES .req w4 // unsigned int nbytes
TWEAK .req x5 // void *tweak
// registers which hold the data being encrypted/decrypted
// (underscores avoid a naming collision with ARM64 registers x0-x3)
X_0 .req v0
Y_0 .req v1
X_1 .req v2
Y_1 .req v3
X_2 .req v4
Y_2 .req v5
X_3 .req v6
Y_3 .req v7
// the round key, duplicated in all lanes
ROUND_KEY .req v8
// index vector for tbl-based 8-bit rotates
ROTATE_TABLE .req v9
ROTATE_TABLE_Q .req q9
// temporary registers
TMP0 .req v10
TMP1 .req v11
TMP2 .req v12
TMP3 .req v13
// multiplication table for updating XTS tweaks
GFMUL_TABLE .req v14
GFMUL_TABLE_Q .req q14
// next XTS tweak value(s)
TWEAKV_NEXT .req v15
// XTS tweaks for the blocks currently being encrypted/decrypted
TWEAKV0 .req v16
TWEAKV1 .req v17
TWEAKV2 .req v18
TWEAKV3 .req v19
TWEAKV4 .req v20
TWEAKV5 .req v21
TWEAKV6 .req v22
TWEAKV7 .req v23
.align 4
.Lror64_8_table:
.octa 0x080f0e0d0c0b0a090007060504030201
.Lror32_8_table:
.octa 0x0c0f0e0d080b0a090407060500030201
.Lrol64_8_table:
.octa 0x0e0d0c0b0a09080f0605040302010007
.Lrol32_8_table:
.octa 0x0e0d0c0f0a09080b0605040702010003
.Lgf128mul_table:
.octa 0x00000000000000870000000000000001
.Lgf64mul_table:
.octa 0x0000000000000000000000002d361b00
/*
* _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
*
* Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
* Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
* of ROUND_KEY. 'n' is the lane size: 64 for Speck128, or 32 for Speck64.
* 'lanes' is the lane specifier: "2d" for Speck128 or "4s" for Speck64.
*/
.macro _speck_round_128bytes n, lanes
// x = ror(x, 8)
tbl X_0.16b, {X_0.16b}, ROTATE_TABLE.16b
tbl X_1.16b, {X_1.16b}, ROTATE_TABLE.16b
tbl X_2.16b, {X_2.16b}, ROTATE_TABLE.16b
tbl X_3.16b, {X_3.16b}, ROTATE_TABLE.16b
// x += y
add X_0.\lanes, X_0.\lanes, Y_0.\lanes
add X_1.\lanes, X_1.\lanes, Y_1.\lanes
add X_2.\lanes, X_2.\lanes, Y_2.\lanes
add X_3.\lanes, X_3.\lanes, Y_3.\lanes
// x ^= k
eor X_0.16b, X_0.16b, ROUND_KEY.16b
eor X_1.16b, X_1.16b, ROUND_KEY.16b
eor X_2.16b, X_2.16b, ROUND_KEY.16b
eor X_3.16b, X_3.16b, ROUND_KEY.16b
// y = rol(y, 3)
shl TMP0.\lanes, Y_0.\lanes, #3
shl TMP1.\lanes, Y_1.\lanes, #3
shl TMP2.\lanes, Y_2.\lanes, #3
shl TMP3.\lanes, Y_3.\lanes, #3
sri TMP0.\lanes, Y_0.\lanes, #(\n - 3)
sri TMP1.\lanes, Y_1.\lanes, #(\n - 3)
sri TMP2.\lanes, Y_2.\lanes, #(\n - 3)
sri TMP3.\lanes, Y_3.\lanes, #(\n - 3)
// y ^= x
eor Y_0.16b, TMP0.16b, X_0.16b
eor Y_1.16b, TMP1.16b, X_1.16b
eor Y_2.16b, TMP2.16b, X_2.16b
eor Y_3.16b, TMP3.16b, X_3.16b
.endm
/*
* _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
*
* This is the inverse of _speck_round_128bytes().
*/
.macro _speck_unround_128bytes n, lanes
// y ^= x
eor TMP0.16b, Y_0.16b, X_0.16b
eor TMP1.16b, Y_1.16b, X_1.16b
eor TMP2.16b, Y_2.16b, X_2.16b
eor TMP3.16b, Y_3.16b, X_3.16b
// y = ror(y, 3)
ushr Y_0.\lanes, TMP0.\lanes, #3
ushr Y_1.\lanes, TMP1.\lanes, #3
ushr Y_2.\lanes, TMP2.\lanes, #3
ushr Y_3.\lanes, TMP3.\lanes, #3
sli Y_0.\lanes, TMP0.\lanes, #(\n - 3)
sli Y_1.\lanes, TMP1.\lanes, #(\n - 3)
sli Y_2.\lanes, TMP2.\lanes, #(\n - 3)
sli Y_3.\lanes, TMP3.\lanes, #(\n - 3)
// x ^= k
eor X_0.16b, X_0.16b, ROUND_KEY.16b
eor X_1.16b, X_1.16b, ROUND_KEY.16b
eor X_2.16b, X_2.16b, ROUND_KEY.16b
eor X_3.16b, X_3.16b, ROUND_KEY.16b
// x -= y
sub X_0.\lanes, X_0.\lanes, Y_0.\lanes
sub X_1.\lanes, X_1.\lanes, Y_1.\lanes
sub X_2.\lanes, X_2.\lanes, Y_2.\lanes
sub X_3.\lanes, X_3.\lanes, Y_3.\lanes
// x = rol(x, 8)
tbl X_0.16b, {X_0.16b}, ROTATE_TABLE.16b
tbl X_1.16b, {X_1.16b}, ROTATE_TABLE.16b
tbl X_2.16b, {X_2.16b}, ROTATE_TABLE.16b
tbl X_3.16b, {X_3.16b}, ROTATE_TABLE.16b
.endm
.macro _next_xts_tweak next, cur, tmp, n
.if \n == 64
/*
* Calculate the next tweak by multiplying the current one by x,
* modulo p(x) = x^128 + x^7 + x^2 + x + 1.
*/
sshr \tmp\().2d, \cur\().2d, #63
and \tmp\().16b, \tmp\().16b, GFMUL_TABLE.16b
shl \next\().2d, \cur\().2d, #1
ext \tmp\().16b, \tmp\().16b, \tmp\().16b, #8
eor \next\().16b, \next\().16b, \tmp\().16b
.else
/*
* Calculate the next two tweaks by multiplying the current ones by x^2,
* modulo p(x) = x^64 + x^4 + x^3 + x + 1.
*/
ushr \tmp\().2d, \cur\().2d, #62
shl \next\().2d, \cur\().2d, #2
tbl \tmp\().16b, {GFMUL_TABLE.16b}, \tmp\().16b
eor \next\().16b, \next\().16b, \tmp\().16b
.endif
.endm
/*
* _speck_xts_crypt() - Speck-XTS encryption/decryption
*
* Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
* using Speck-XTS, specifically the variant with a block size of '2n' and round
* count given by NROUNDS. The expanded round keys are given in ROUND_KEYS, and
* the current XTS tweak value is given in TWEAK. It's assumed that NBYTES is a
* nonzero multiple of 128.
*/
.macro _speck_xts_crypt n, lanes, decrypting
/*
* If decrypting, modify the ROUND_KEYS parameter to point to the last
* round key rather than the first, since for decryption the round keys
* are used in reverse order.
*/
.if \decrypting
mov NROUNDS, NROUNDS /* zero the high 32 bits */
.if \n == 64
add ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #3
sub ROUND_KEYS, ROUND_KEYS, #8
.else
add ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #2
sub ROUND_KEYS, ROUND_KEYS, #4
.endif
.endif
// Load the index vector for tbl-based 8-bit rotates
.if \decrypting
ldr ROTATE_TABLE_Q, .Lrol\n\()_8_table
.else
ldr ROTATE_TABLE_Q, .Lror\n\()_8_table
.endif
// One-time XTS preparation
.if \n == 64
// Load first tweak
ld1 {TWEAKV0.16b}, [TWEAK]
// Load GF(2^128) multiplication table
ldr GFMUL_TABLE_Q, .Lgf128mul_table
.else
// Load first tweak
ld1 {TWEAKV0.8b}, [TWEAK]
// Load GF(2^64) multiplication table
ldr GFMUL_TABLE_Q, .Lgf64mul_table
// Calculate second tweak, packing it together with the first
ushr TMP0.2d, TWEAKV0.2d, #63
shl TMP1.2d, TWEAKV0.2d, #1
tbl TMP0.8b, {GFMUL_TABLE.16b}, TMP0.8b
eor TMP0.8b, TMP0.8b, TMP1.8b
mov TWEAKV0.d[1], TMP0.d[0]
.endif
.Lnext_128bytes_\@:
// Calculate XTS tweaks for next 128 bytes
_next_xts_tweak TWEAKV1, TWEAKV0, TMP0, \n
_next_xts_tweak TWEAKV2, TWEAKV1, TMP0, \n
_next_xts_tweak TWEAKV3, TWEAKV2, TMP0, \n
_next_xts_tweak TWEAKV4, TWEAKV3, TMP0, \n
_next_xts_tweak TWEAKV5, TWEAKV4, TMP0, \n
_next_xts_tweak TWEAKV6, TWEAKV5, TMP0, \n
_next_xts_tweak TWEAKV7, TWEAKV6, TMP0, \n
_next_xts_tweak TWEAKV_NEXT, TWEAKV7, TMP0, \n
// Load the next source blocks into {X,Y}[0-3]
ld1 {X_0.16b-Y_1.16b}, [SRC], #64
ld1 {X_2.16b-Y_3.16b}, [SRC], #64
// XOR the source blocks with their XTS tweaks
eor TMP0.16b, X_0.16b, TWEAKV0.16b
eor Y_0.16b, Y_0.16b, TWEAKV1.16b
eor TMP1.16b, X_1.16b, TWEAKV2.16b
eor Y_1.16b, Y_1.16b, TWEAKV3.16b
eor TMP2.16b, X_2.16b, TWEAKV4.16b
eor Y_2.16b, Y_2.16b, TWEAKV5.16b
eor TMP3.16b, X_3.16b, TWEAKV6.16b
eor Y_3.16b, Y_3.16b, TWEAKV7.16b
/*
* De-interleave the 'x' and 'y' elements of each block, i.e. make it so
* that the X[0-3] registers contain only the second halves of blocks,
* and the Y[0-3] registers contain only the first halves of blocks.
* (Speck uses the order (y, x) rather than the more intuitive (x, y).)
*/
uzp2 X_0.\lanes, TMP0.\lanes, Y_0.\lanes
uzp1 Y_0.\lanes, TMP0.\lanes, Y_0.\lanes
uzp2 X_1.\lanes, TMP1.\lanes, Y_1.\lanes
uzp1 Y_1.\lanes, TMP1.\lanes, Y_1.\lanes
uzp2 X_2.\lanes, TMP2.\lanes, Y_2.\lanes
uzp1 Y_2.\lanes, TMP2.\lanes, Y_2.\lanes
uzp2 X_3.\lanes, TMP3.\lanes, Y_3.\lanes
uzp1 Y_3.\lanes, TMP3.\lanes, Y_3.\lanes
// Do the cipher rounds
mov x6, ROUND_KEYS
mov w7, NROUNDS
.Lnext_round_\@:
.if \decrypting
ld1r {ROUND_KEY.\lanes}, [x6]
sub x6, x6, #( \n / 8 )
_speck_unround_128bytes \n, \lanes
.else
ld1r {ROUND_KEY.\lanes}, [x6], #( \n / 8 )
_speck_round_128bytes \n, \lanes
.endif
subs w7, w7, #1
bne .Lnext_round_\@
// Re-interleave the 'x' and 'y' elements of each block
zip1 TMP0.\lanes, Y_0.\lanes, X_0.\lanes
zip2 Y_0.\lanes, Y_0.\lanes, X_0.\lanes
zip1 TMP1.\lanes, Y_1.\lanes, X_1.\lanes
zip2 Y_1.\lanes, Y_1.\lanes, X_1.\lanes
zip1 TMP2.\lanes, Y_2.\lanes, X_2.\lanes
zip2 Y_2.\lanes, Y_2.\lanes, X_2.\lanes
zip1 TMP3.\lanes, Y_3.\lanes, X_3.\lanes
zip2 Y_3.\lanes, Y_3.\lanes, X_3.\lanes
// XOR the encrypted/decrypted blocks with the tweaks calculated earlier
eor X_0.16b, TMP0.16b, TWEAKV0.16b
eor Y_0.16b, Y_0.16b, TWEAKV1.16b
eor X_1.16b, TMP1.16b, TWEAKV2.16b
eor Y_1.16b, Y_1.16b, TWEAKV3.16b
eor X_2.16b, TMP2.16b, TWEAKV4.16b
eor Y_2.16b, Y_2.16b, TWEAKV5.16b
eor X_3.16b, TMP3.16b, TWEAKV6.16b
eor Y_3.16b, Y_3.16b, TWEAKV7.16b
mov TWEAKV0.16b, TWEAKV_NEXT.16b
// Store the ciphertext in the destination buffer
st1 {X_0.16b-Y_1.16b}, [DST], #64
st1 {X_2.16b-Y_3.16b}, [DST], #64
// Continue if there are more 128-byte chunks remaining
subs NBYTES, NBYTES, #128
bne .Lnext_128bytes_\@
// Store the next tweak and return
.if \n == 64
st1 {TWEAKV_NEXT.16b}, [TWEAK]
.else
st1 {TWEAKV_NEXT.8b}, [TWEAK]
.endif
ret
.endm
ENTRY(speck128_xts_encrypt_neon)
_speck_xts_crypt n=64, lanes=2d, decrypting=0
ENDPROC(speck128_xts_encrypt_neon)
ENTRY(speck128_xts_decrypt_neon)
_speck_xts_crypt n=64, lanes=2d, decrypting=1
ENDPROC(speck128_xts_decrypt_neon)
ENTRY(speck64_xts_encrypt_neon)
_speck_xts_crypt n=32, lanes=4s, decrypting=0
ENDPROC(speck64_xts_encrypt_neon)
ENTRY(speck64_xts_decrypt_neon)
_speck_xts_crypt n=32, lanes=4s, decrypting=1
ENDPROC(speck64_xts_decrypt_neon)

View File

@@ -1,282 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
* (64-bit version; based on the 32-bit version)
*
* Copyright (c) 2018 Google, Inc
*/
#include <asm/hwcap.h>
#include <asm/neon.h>
#include <asm/simd.h>
#include <crypto/algapi.h>
#include <crypto/gf128mul.h>
#include <crypto/internal/skcipher.h>
#include <crypto/speck.h>
#include <crypto/xts.h>
#include <linux/kernel.h>
#include <linux/module.h>
/* The assembly functions only handle multiples of 128 bytes */
#define SPECK_NEON_CHUNK_SIZE 128
/* Speck128 */
struct speck128_xts_tfm_ctx {
struct speck128_tfm_ctx main_key;
struct speck128_tfm_ctx tweak_key;
};
asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
void *dst, const void *src,
unsigned int nbytes, void *tweak);
asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
void *dst, const void *src,
unsigned int nbytes, void *tweak);
typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
u8 *, const u8 *);
typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
const void *, unsigned int, void *);
static __always_inline int
__speck128_xts_crypt(struct skcipher_request *req,
speck128_crypt_one_t crypt_one,
speck128_xts_crypt_many_t crypt_many)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_walk walk;
le128 tweak;
int err;
err = skcipher_walk_virt(&walk, req, true);
crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
u8 *dst = walk.dst.virt.addr;
const u8 *src = walk.src.virt.addr;
if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
unsigned int count;
count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
kernel_neon_begin();
(*crypt_many)(ctx->main_key.round_keys,
ctx->main_key.nrounds,
dst, src, count, &tweak);
kernel_neon_end();
dst += count;
src += count;
nbytes -= count;
}
/* Handle any remainder with generic code */
while (nbytes >= sizeof(tweak)) {
le128_xor((le128 *)dst, (const le128 *)src, &tweak);
(*crypt_one)(&ctx->main_key, dst, dst);
le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
gf128mul_x_ble(&tweak, &tweak);
dst += sizeof(tweak);
src += sizeof(tweak);
nbytes -= sizeof(tweak);
}
err = skcipher_walk_done(&walk, nbytes);
}
return err;
}
static int speck128_xts_encrypt(struct skcipher_request *req)
{
return __speck128_xts_crypt(req, crypto_speck128_encrypt,
speck128_xts_encrypt_neon);
}
static int speck128_xts_decrypt(struct skcipher_request *req)
{
return __speck128_xts_crypt(req, crypto_speck128_decrypt,
speck128_xts_decrypt_neon);
}
static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int keylen)
{
struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
int err;
err = xts_verify_key(tfm, key, keylen);
if (err)
return err;
keylen /= 2;
err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
if (err)
return err;
return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
}
/* Speck64 */
struct speck64_xts_tfm_ctx {
struct speck64_tfm_ctx main_key;
struct speck64_tfm_ctx tweak_key;
};
asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
void *dst, const void *src,
unsigned int nbytes, void *tweak);
asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
void *dst, const void *src,
unsigned int nbytes, void *tweak);
typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
u8 *, const u8 *);
typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
const void *, unsigned int, void *);
static __always_inline int
__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
speck64_xts_crypt_many_t crypt_many)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_walk walk;
__le64 tweak;
int err;
err = skcipher_walk_virt(&walk, req, true);
crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
u8 *dst = walk.dst.virt.addr;
const u8 *src = walk.src.virt.addr;
if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
unsigned int count;
count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
kernel_neon_begin();
(*crypt_many)(ctx->main_key.round_keys,
ctx->main_key.nrounds,
dst, src, count, &tweak);
kernel_neon_end();
dst += count;
src += count;
nbytes -= count;
}
/* Handle any remainder with generic code */
while (nbytes >= sizeof(tweak)) {
*(__le64 *)dst = *(__le64 *)src ^ tweak;
(*crypt_one)(&ctx->main_key, dst, dst);
*(__le64 *)dst ^= tweak;
tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^
((tweak & cpu_to_le64(1ULL << 63)) ?
0x1B : 0));
dst += sizeof(tweak);
src += sizeof(tweak);
nbytes -= sizeof(tweak);
}
err = skcipher_walk_done(&walk, nbytes);
}
return err;
}
static int speck64_xts_encrypt(struct skcipher_request *req)
{
return __speck64_xts_crypt(req, crypto_speck64_encrypt,
speck64_xts_encrypt_neon);
}
static int speck64_xts_decrypt(struct skcipher_request *req)
{
return __speck64_xts_crypt(req, crypto_speck64_decrypt,
speck64_xts_decrypt_neon);
}
static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int keylen)
{
struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
int err;
err = xts_verify_key(tfm, key, keylen);
if (err)
return err;
keylen /= 2;
err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
if (err)
return err;
return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
}
static struct skcipher_alg speck_algs[] = {
{
.base.cra_name = "xts(speck128)",
.base.cra_driver_name = "xts-speck128-neon",
.base.cra_priority = 300,
.base.cra_blocksize = SPECK128_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct speck128_xts_tfm_ctx),
.base.cra_alignmask = 7,
.base.cra_module = THIS_MODULE,
.min_keysize = 2 * SPECK128_128_KEY_SIZE,
.max_keysize = 2 * SPECK128_256_KEY_SIZE,
.ivsize = SPECK128_BLOCK_SIZE,
.walksize = SPECK_NEON_CHUNK_SIZE,
.setkey = speck128_xts_setkey,
.encrypt = speck128_xts_encrypt,
.decrypt = speck128_xts_decrypt,
}, {
.base.cra_name = "xts(speck64)",
.base.cra_driver_name = "xts-speck64-neon",
.base.cra_priority = 300,
.base.cra_blocksize = SPECK64_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct speck64_xts_tfm_ctx),
.base.cra_alignmask = 7,
.base.cra_module = THIS_MODULE,
.min_keysize = 2 * SPECK64_96_KEY_SIZE,
.max_keysize = 2 * SPECK64_128_KEY_SIZE,
.ivsize = SPECK64_BLOCK_SIZE,
.walksize = SPECK_NEON_CHUNK_SIZE,
.setkey = speck64_xts_setkey,
.encrypt = speck64_xts_encrypt,
.decrypt = speck64_xts_decrypt,
}
};
static int __init speck_neon_module_init(void)
{
if (!(elf_hwcap & HWCAP_ASIMD))
return -ENODEV;
return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
}
static void __exit speck_neon_module_exit(void)
{
crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
}
module_init(speck_neon_module_init);
module_exit(speck_neon_module_exit);
MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
MODULE_ALIAS_CRYPTO("xts(speck128)");
MODULE_ALIAS_CRYPTO("xts-speck128-neon");
MODULE_ALIAS_CRYPTO("xts(speck64)");
MODULE_ALIAS_CRYPTO("xts-speck64-neon");

View File

@@ -848,15 +848,29 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
}
static bool has_cache_idc(const struct arm64_cpu_capabilities *entry,
int __unused)
int scope)
{
return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_IDC_SHIFT);
u64 ctr;
if (scope == SCOPE_SYSTEM)
ctr = arm64_ftr_reg_ctrel0.sys_val;
else
ctr = read_cpuid_cachetype();
return ctr & BIT(CTR_IDC_SHIFT);
}
static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
int __unused)
int scope)
{
return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_DIC_SHIFT);
u64 ctr;
if (scope == SCOPE_SYSTEM)
ctr = arm64_ftr_reg_ctrel0.sys_val;
else
ctr = read_cpuid_cachetype();
return ctr & BIT(CTR_DIC_SHIFT);
}
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0

View File

@@ -589,7 +589,7 @@ el1_undef:
inherit_daif pstate=x23, tmp=x2
mov x0, sp
bl do_undefinstr
ASM_BUG()
kernel_exit 1
el1_dbg:
/*
* Debug exception handling

View File

@@ -311,10 +311,12 @@ static int call_undef_hook(struct pt_regs *regs)
int (*fn)(struct pt_regs *regs, u32 instr) = NULL;
void __user *pc = (void __user *)instruction_pointer(regs);
if (!user_mode(regs))
return 1;
if (compat_thumb_mode(regs)) {
if (!user_mode(regs)) {
__le32 instr_le;
if (probe_kernel_address((__force __le32 *)pc, instr_le))
goto exit;
instr = le32_to_cpu(instr_le);
} else if (compat_thumb_mode(regs)) {
/* 16-bit Thumb instruction */
__le16 instr_le;
if (get_user(instr_le, (__le16 __user *)pc))
@@ -408,6 +410,7 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
return;
force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
BUG_ON(!user_mode(regs));
}
void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)

View File

@@ -12,7 +12,7 @@ lib-y := clear_user.o delay.o copy_from_user.o \
# when supported by the CPU. Result and argument registers are handled
# correctly, based on the function prototype.
lib-$(CONFIG_ARM64_LSE_ATOMICS) += atomic_ll_sc.o
CFLAGS_atomic_ll_sc.o := -fcall-used-x0 -ffixed-x1 -ffixed-x2 \
CFLAGS_atomic_ll_sc.o := -ffixed-x1 -ffixed-x2 \
-ffixed-x3 -ffixed-x4 -ffixed-x5 -ffixed-x6 \
-ffixed-x7 -fcall-saved-x8 -fcall-saved-x9 \
-fcall-saved-x10 -fcall-saved-x11 -fcall-saved-x12 \

View File

@@ -657,7 +657,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -614,7 +614,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -635,7 +635,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -606,7 +606,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -616,7 +616,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -638,7 +638,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -720,7 +720,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -606,7 +606,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -606,7 +606,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -629,7 +629,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -607,7 +607,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -608,7 +608,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_LZO=m

View File

@@ -67,7 +67,7 @@ void (*cvmx_override_pko_queue_priority) (int pko_port,
void (*cvmx_override_ipd_port_setup) (int ipd_port);
/* Port count per interface */
static int interface_port_count[5];
static int interface_port_count[9];
/**
* Return the number of interfaces the chip has. Each interface

View File

@@ -81,7 +81,7 @@ extern unsigned int vced_count, vcei_count;
#endif
#define VDSO_RANDOMIZE_SIZE (TASK_IS_32BIT_ADDR ? SZ_1M : SZ_256M)
#define VDSO_RANDOMIZE_SIZE (TASK_IS_32BIT_ADDR ? SZ_1M : SZ_64M)
extern unsigned long mips_stack_top(void);
#define STACK_TOP mips_stack_top()

View File

@@ -186,7 +186,7 @@
bv,n 0(%r3)
nop
.word 0 /* checksum (will be patched) */
.word PA(os_hpmc) /* address of handler */
.word 0 /* address of handler */
.word 0 /* length of handler */
.endm

View File

@@ -85,7 +85,7 @@ END(hpmc_pim_data)
.import intr_save, code
.align 16
ENTRY_CFI(os_hpmc)
ENTRY(os_hpmc)
.os_hpmc:
/*
@@ -302,7 +302,6 @@ os_hpmc_6:
b .
nop
.align 16 /* make function length multiple of 16 bytes */
ENDPROC_CFI(os_hpmc)
.os_hpmc_end:

View File

@@ -802,7 +802,8 @@ void __init initialize_ivt(const void *iva)
* the Length/4 words starting at Address is zero.
*/
/* Compute Checksum for HPMC handler */
/* Setup IVA and compute checksum for HPMC handler */
ivap[6] = (u32)__pa(os_hpmc);
length = os_hpmc_size;
ivap[7] = length;

View File

@@ -494,12 +494,8 @@ static void __init map_pages(unsigned long start_vaddr,
pte = pte_mkhuge(pte);
}
if (address >= end_paddr) {
if (force)
break;
else
pte_val(pte) = 0;
}
if (address >= end_paddr)
break;
set_pte(pg_table, pte);

View File

@@ -393,7 +393,14 @@ extern struct bus_type mpic_subsys;
#define MPIC_REGSET_TSI108 MPIC_REGSET(1) /* Tsi108/109 PIC */
/* Get the version of primary MPIC */
#ifdef CONFIG_MPIC
extern u32 fsl_mpic_primary_get_version(void);
#else
static inline u32 fsl_mpic_primary_get_version(void)
{
return 0;
}
#endif
/* Allocate the controller structure and setup the linux irq descs
* for the range if interrupts passed in. No HW initialization is

View File

@@ -89,6 +89,13 @@ static void flush_and_reload_slb(void)
static void flush_erat(void)
{
#ifdef CONFIG_PPC_BOOK3S_64
if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
flush_and_reload_slb();
return;
}
#endif
/* PPC_INVALIDATE_ERAT can only be used on ISA v3 and newer */
asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
}

View File

@@ -74,6 +74,14 @@ int module_finalize(const Elf_Ehdr *hdr,
(void *)sect->sh_addr + sect->sh_size);
#endif /* CONFIG_PPC64 */
#ifdef PPC64_ELF_ABI_v1
sect = find_section(hdr, sechdrs, ".opd");
if (sect != NULL) {
me->arch.start_opd = sect->sh_addr;
me->arch.end_opd = sect->sh_addr + sect->sh_size;
}
#endif /* PPC64_ELF_ABI_v1 */
#ifdef CONFIG_PPC_BARRIER_NOSPEC
sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
if (sect != NULL)

View File

@@ -360,11 +360,6 @@ int module_frob_arch_sections(Elf64_Ehdr *hdr,
else if (strcmp(secstrings+sechdrs[i].sh_name,"__versions")==0)
dedotify_versions((void *)hdr + sechdrs[i].sh_offset,
sechdrs[i].sh_size);
else if (!strcmp(secstrings + sechdrs[i].sh_name, ".opd")) {
me->arch.start_opd = sechdrs[i].sh_addr;
me->arch.end_opd = sechdrs[i].sh_addr +
sechdrs[i].sh_size;
}
/* We don't handle .init for the moment: rename to _init */
while ((p = strstr(secstrings + sechdrs[i].sh_name, ".init")))

View File

@@ -243,13 +243,19 @@ static void cpu_ready_for_interrupts(void)
}
/*
* Fixup HFSCR:TM based on CPU features. The bit is set by our
* early asm init because at that point we haven't updated our
* CPU features from firmware and device-tree. Here we have,
* so let's do it.
* Set HFSCR:TM based on CPU features:
* In the special case of TM no suspend (P9N DD2.1), Linux is
* told TM is off via the dt-ftrs but told to (partially) use
* it via OPAL_REINIT_CPUS_TM_SUSPEND_DISABLED. So HFSCR[TM]
* will be off from dt-ftrs but we need to turn it on for the
* no suspend case.
*/
if (cpu_has_feature(CPU_FTR_HVMODE) && !cpu_has_feature(CPU_FTR_TM_COMP))
mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
if (cpu_has_feature(CPU_FTR_HVMODE)) {
if (cpu_has_feature(CPU_FTR_TM_COMP))
mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) | HFSCR_TM);
else
mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
}
/* Set IR and DR in PACA MSR */
get_paca()->kernel_msr = MSR_KERNEL;

View File

@@ -115,6 +115,8 @@ static void tlbiel_all_isa300(unsigned int num_sets, unsigned int is)
tlbiel_hash_set_isa300(0, is, 0, 2, 1);
asm volatile("ptesync": : :"memory");
asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
}
void hash__tlbiel_all(unsigned int action)
@@ -140,8 +142,6 @@ void hash__tlbiel_all(unsigned int action)
tlbiel_all_isa206(POWER7_TLB_SETS, is);
else
WARN(1, "%s called on pre-POWER7 CPU\n", __func__);
asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
}
static inline unsigned long ___tlbie(unsigned long vpn, int psize,

View File

@@ -221,7 +221,6 @@ CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SM4=m
CONFIG_CRYPTO_SPECK=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_DEFLATE=m

View File

@@ -183,17 +183,19 @@ static void fill_hdr(struct sthyi_sctns *sctns)
static void fill_stsi_mac(struct sthyi_sctns *sctns,
struct sysinfo_1_1_1 *sysinfo)
{
sclp_ocf_cpc_name_copy(sctns->mac.infmname);
if (*(u64 *)sctns->mac.infmname != 0)
sctns->mac.infmval1 |= MAC_NAME_VLD;
if (stsi(sysinfo, 1, 1, 1))
return;
sclp_ocf_cpc_name_copy(sctns->mac.infmname);
memcpy(sctns->mac.infmtype, sysinfo->type, sizeof(sctns->mac.infmtype));
memcpy(sctns->mac.infmmanu, sysinfo->manufacturer, sizeof(sctns->mac.infmmanu));
memcpy(sctns->mac.infmpman, sysinfo->plant, sizeof(sctns->mac.infmpman));
memcpy(sctns->mac.infmseq, sysinfo->sequence, sizeof(sctns->mac.infmseq));
sctns->mac.infmval1 |= MAC_ID_VLD | MAC_NAME_VLD;
sctns->mac.infmval1 |= MAC_ID_VLD;
}
static void fill_stsi_par(struct sthyi_sctns *sctns,

View File

@@ -738,6 +738,7 @@ efi_main(struct efi_config *c, struct boot_params *boot_params)
struct desc_struct *desc;
void *handle;
efi_system_table_t *_table;
unsigned long cmdline_paddr;
efi_early = c;
@@ -755,6 +756,15 @@ efi_main(struct efi_config *c, struct boot_params *boot_params)
else
setup_boot_services32(efi_early);
/*
* make_boot_params() may have been called before efi_main(), in which
* case this is the second time we parse the cmdline. This is fine;
* parsing the cmdline multiple times has no side effects.
*/
cmdline_paddr = ((u64)hdr->cmd_line_ptr |
((u64)boot_params->ext_cmd_line_ptr << 32));
efi_parse_options((char *)cmdline_paddr);
/*
* If the boot loader gave us a value for secure_boot then we use that,
* otherwise we ask the BIOS.

View File

@@ -391,6 +391,13 @@ int main(int argc, char ** argv)
die("Unable to mmap '%s': %m", argv[2]);
/* Number of 16-byte paragraphs, including space for a 4-byte CRC */
sys_size = (sz + 15 + 4) / 16;
#ifdef CONFIG_EFI_STUB
/*
* COFF requires minimum 32-byte alignment of sections, and
* adding a signature is problematic without that alignment.
*/
sys_size = (sys_size + 1) & ~1;
#endif
/* Patch the setup code with the appropriate size parameters */
buf[0x1f1] = setup_sectors-1;
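The rounding above is easy to sanity-check in isolation: sys_size counts 16-byte paragraphs, so rounding it up to an even count makes the image size a multiple of 32 bytes, which is what the COFF section alignment needs. A minimal userspace sketch (the sz value is an arbitrary assumption):

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long sz = 12345;                    /* assumed raw image size */
	unsigned long sys_size = (sz + 15 + 4) / 16; /* paragraphs incl. 4-byte CRC */

	sys_size = (sys_size + 1) & ~1UL;            /* round up to an even count */
	assert((sys_size * 16) % 32 == 0);           /* 32-byte COFF alignment holds */
	printf("%lu paragraphs = %lu bytes\n", sys_size, sys_size * 16);
	return 0;
}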

View File

@@ -817,7 +817,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
/* Linearize assoc, if not already linear */
if (req->src->length >= assoclen && req->src->length &&
(!PageHighMem(sg_page(req->src)) ||
req->src->offset + req->src->length < PAGE_SIZE)) {
req->src->offset + req->src->length <= PAGE_SIZE)) {
scatterwalk_start(&assoc_sg_walk, req->src);
assoc = scatterwalk_map(&assoc_sg_walk);
} else {
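The one-character change above is a boundary fix: data that ends exactly at the page boundary still lies within a single page, so offset + length == PAGE_SIZE must count as linear. A standalone illustration of the predicate (userspace sketch, PAGE_SIZE assumed to be 4096):

#include <assert.h>

#define PAGE_SIZE 4096u

/* data fits in one page iff it does not cross the page boundary */
static int fits_in_one_page(unsigned int offset, unsigned int length)
{
	return offset + length <= PAGE_SIZE;
}

int main(void)
{
	assert(fits_in_one_page(0, PAGE_SIZE));   /* exact fit: still one page */
	assert(!fits_in_one_page(1, PAGE_SIZE));  /* crosses into the next page */
	return 0;
}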

View File

@@ -177,6 +177,7 @@ enum {
#define DR6_BD (1 << 13)
#define DR6_BS (1 << 14)
#define DR6_BT (1 << 15)
#define DR6_RTM (1 << 16)
#define DR6_FIXED_1 0xfffe0ff0
#define DR6_INIT 0xffff0ff0

View File

@@ -469,6 +469,12 @@ static inline void __native_flush_tlb_one_user(unsigned long addr)
*/
static inline void __flush_tlb_all(void)
{
/*
* This is to catch users with preemption enabled and the PGE feature,
* who would not trigger the warning in __native_flush_tlb().
*/
VM_WARN_ON_ONCE(preemptible());
if (boot_cpu_has(X86_FEATURE_PGE)) {
__flush_tlb_global();
} else {

View File

@@ -31,6 +31,11 @@ static __init int set_corruption_check(char *arg)
ssize_t ret;
unsigned long val;
if (!arg) {
pr_err("memory_corruption_check config string not provided\n");
return -EINVAL;
}
ret = kstrtoul(arg, 10, &val);
if (ret)
return ret;
@@ -45,6 +50,11 @@ static __init int set_corruption_check_period(char *arg)
ssize_t ret;
unsigned long val;
if (!arg) {
pr_err("memory_corruption_check_period config string not provided\n");
return -EINVAL;
}
ret = kstrtoul(arg, 10, &val);
if (ret)
return ret;
@@ -59,6 +69,11 @@ static __init int set_corruption_check_size(char *arg)
char *end;
unsigned size;
if (!arg) {
pr_err("memory_corruption_check_size config string not provided\n");
return -EINVAL;
}
size = memparse(arg, &end);
if (*end == '\0')

View File

@@ -35,12 +35,10 @@ static void __init spectre_v2_select_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
/*
* Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
* writes to SPEC_CTRL contain whatever reserved bits have been set.
*/
u64 __ro_after_init x86_spec_ctrl_base;
/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
u64 x86_spec_ctrl_base;
EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
static DEFINE_MUTEX(spec_ctrl_mutex);
/*
* The vendor and possibly platform specific bits which can be modified in
@@ -325,6 +323,46 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
return cmd;
}
static bool stibp_needed(void)
{
if (spectre_v2_enabled == SPECTRE_V2_NONE)
return false;
if (!boot_cpu_has(X86_FEATURE_STIBP))
return false;
return true;
}
static void update_stibp_msr(void *info)
{
wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
}
void arch_smt_update(void)
{
u64 mask;
if (!stibp_needed())
return;
mutex_lock(&spec_ctrl_mutex);
mask = x86_spec_ctrl_base;
if (cpu_smt_control == CPU_SMT_ENABLED)
mask |= SPEC_CTRL_STIBP;
else
mask &= ~SPEC_CTRL_STIBP;
if (mask != x86_spec_ctrl_base) {
pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
cpu_smt_control == CPU_SMT_ENABLED ?
"Enabling" : "Disabling");
x86_spec_ctrl_base = mask;
on_each_cpu(update_stibp_msr, NULL, 1);
}
mutex_unlock(&spec_ctrl_mutex);
}
static void __init spectre_v2_select_mitigation(void)
{
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -424,6 +462,9 @@ static void __init spectre_v2_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
/* Enable STIBP if appropriate */
arch_smt_update();
}
#undef pr_fmt
@@ -814,6 +855,8 @@ static ssize_t l1tf_show_state(char *buf)
static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
char *buf, unsigned int bug)
{
int ret;
if (!boot_cpu_has_bug(bug))
return sprintf(buf, "Not affected\n");
@@ -831,10 +874,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
return sprintf(buf, "Mitigation: __user pointer sanitization\n");
case X86_BUG_SPECTRE_V2:
return sprintf(buf, "%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
ret = sprintf(buf, "%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
(x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
spectre_v2_module_string());
return ret;
case X86_BUG_SPEC_STORE_BYPASS:
return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);

View File

@@ -2805,6 +2805,13 @@ static int rdtgroup_show_options(struct seq_file *seq, struct kernfs_root *kf)
{
if (rdt_resources_all[RDT_RESOURCE_L3DATA].alloc_enabled)
seq_puts(seq, ",cdp");
if (rdt_resources_all[RDT_RESOURCE_L2DATA].alloc_enabled)
seq_puts(seq, ",cdpl2");
if (is_mba_sc(&rdt_resources_all[RDT_RESOURCE_MBA]))
seq_puts(seq, ",mba_MBps");
return 0;
}

View File

@@ -179,7 +179,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
opt_pre_handler(&op->kp, regs);
__this_cpu_write(current_kprobe, NULL);
}
preempt_enable_no_resched();
preempt_enable();
}
NOKPROBE_SYMBOL(optimized_callback);

View File

@@ -3294,10 +3294,13 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
}
} else {
if (vmcs12->exception_bitmap & (1u << nr)) {
if (nr == DB_VECTOR)
if (nr == DB_VECTOR) {
*exit_qual = vcpu->arch.dr6;
else
*exit_qual &= ~(DR6_FIXED_1 | DR6_BT);
*exit_qual ^= DR6_RTM;
} else {
*exit_qual = 0;
}
return 1;
}
}
@@ -14010,13 +14013,6 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
if (!page_address_valid(vcpu, kvm_state->vmx.vmxon_pa))
return -EINVAL;
if (kvm_state->size < sizeof(kvm_state) + sizeof(*vmcs12))
return -EINVAL;
if (kvm_state->vmx.vmcs_pa == kvm_state->vmx.vmxon_pa ||
!page_address_valid(vcpu, kvm_state->vmx.vmcs_pa))
return -EINVAL;
if ((kvm_state->vmx.smm.flags & KVM_STATE_NESTED_SMM_GUEST_MODE) &&
(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE))
return -EINVAL;
@@ -14046,6 +14042,14 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
if (ret)
return ret;
/* Empty 'VMXON' state is permitted */
if (kvm_state->size < sizeof(kvm_state) + sizeof(*vmcs12))
return 0;
if (kvm_state->vmx.vmcs_pa == kvm_state->vmx.vmxon_pa ||
!page_address_valid(vcpu, kvm_state->vmx.vmcs_pa))
return -EINVAL;
set_current_vmptr(vmx, kvm_state->vmx.vmcs_pa);
if (kvm_state->vmx.smm.flags & KVM_STATE_NESTED_SMM_VMXON) {

View File

@@ -400,9 +400,17 @@ void __init numa_emulation(struct numa_meminfo *numa_meminfo, int numa_dist_cnt)
n = simple_strtoul(emu_cmdline, &emu_cmdline, 0);
ret = -1;
for_each_node_mask(i, physnode_mask) {
/*
* The reason we pass in blk[0] is that
* numa_remove_memblk_from(), called by
* emu_setup_memblk(), deletes entry 0 and
* then moves everything else up in the
* pi.blk array. Therefore we should always
* be looking at blk[0].
*/
ret = split_nodes_size_interleave_uniform(&ei, &pi,
pi.blk[i].start, pi.blk[i].end, 0,
n, &pi.blk[i], nid);
pi.blk[0].start, pi.blk[0].end, 0,
n, &pi.blk[0], nid);
if (ret < 0)
break;
if (ret < n) {
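The comment above is the whole story: numa_remove_memblk_from() compacts the array, so the element to process is always at index 0. A toy model of that remove-and-shift behaviour (userspace sketch, plain ints standing in for the memblk entries):

#include <stdio.h>
#include <string.h>

int main(void)
{
	int blk[4] = { 10, 20, 30, 40 };
	int n = 4;

	while (n > 0) {
		printf("processing %d\n", blk[0]);   /* always look at entry 0 */
		n--;
		memmove(&blk[0], &blk[1], n * sizeof(blk[0])); /* remove + shift up */
	}
	return 0;
}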

View File

@@ -2086,9 +2086,13 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
/*
* We should perform an IPI and flush all tlbs,
* but that can deadlock->flush only current cpu:
* but that can deadlock->flush only current cpu.
* Preemption needs to be disabled around __flush_tlb_all() due to
* CR3 reload in __native_flush_tlb().
*/
preempt_disable();
__flush_tlb_all();
preempt_enable();
arch_flush_lazy_mmu_mode();
}
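Both hunks enforce the same invariant: per the comment above, the CR3 reload in __native_flush_tlb() must not be preempted mid-sequence, so __flush_tlb_all() has to run with preemption disabled. A userspace mock of the warn-and-wrap pattern (names mirror the kernel's, but the bodies are stand-ins):

#include <stdio.h>

static int preempt_count;

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }
static int  preemptible(void)     { return preempt_count == 0; }

static void __flush_tlb_all(void)
{
	if (preemptible())
		fprintf(stderr, "WARN: __flush_tlb_all() while preemptible\n");
	/* CR3 reload / global TLB flush would happen here */
}

int main(void)
{
	preempt_disable();      /* the wrapper added in __kernel_map_pages() */
	__flush_tlb_all();      /* no warning: preemption is off */
	preempt_enable();
	return 0;
}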

View File

@@ -16,6 +16,7 @@
#include <asm/msr.h>
#include <asm/olpc.h>
#include <asm/x86_init.h>
static void rtc_wake_on(struct device *dev)
{
@@ -75,6 +76,8 @@ static int __init xo1_rtc_init(void)
if (r)
return r;
x86_platform.legacy.rtc = 0;
device_init_wakeup(&xo1_rtc_device.dev, 1);
return 0;
}

View File

@@ -75,7 +75,7 @@ static void __init init_pvh_bootparams(void)
* Version 2.12 supports Xen entry point but we will use default x86/PC
* environment (i.e. hardware_subarch 0).
*/
pvh_bootparams.hdr.version = 0x212;
pvh_bootparams.hdr.version = (2 << 8) | 12;
pvh_bootparams.hdr.type_of_loader = (9 << 4) | 0; /* Xen loader */
x86_init.acpi.get_root_pointer = pvh_get_root_pointer;
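The constant swap is worth spelling out: the boot protocol version is encoded as (major << 8) | minor, and the minor part is decimal, so 2.12 is 0x20c. The old literal 0x212 actually decoded as version 2.18. A two-line check:

#include <assert.h>
#include <stdio.h>

int main(void)
{
	assert(((2 << 8) | 12) == 0x20c);   /* 2.12, what the code intends */
	assert(0x212 == ((2 << 8) | 0x12)); /* the old literal was really 2.18 */
	printf("2.12 encodes as 0x%x\n", (2 << 8) | 12);
	return 0;
}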

View File

@@ -146,6 +146,10 @@ void xen_unplug_emulated_devices(void)
{
int r;
/* PVH guests don't have emulated devices. */
if (xen_pvh_domain())
return;
/* user explicitly requested no unplug */
if (xen_emul_unplug & XEN_UNPLUG_NEVER)
return;

View File

@@ -9,6 +9,7 @@
#include <linux/log2.h>
#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/atomic.h>
#include <asm/paravirt.h>
#include <asm/qspinlock.h>
@@ -21,6 +22,7 @@
static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
static DEFINE_PER_CPU(char *, irq_name);
static DEFINE_PER_CPU(atomic_t, xen_qlock_wait_nest);
static bool xen_pvspin = true;
static void xen_qlock_kick(int cpu)
@@ -40,33 +42,24 @@ static void xen_qlock_kick(int cpu)
static void xen_qlock_wait(u8 *byte, u8 val)
{
int irq = __this_cpu_read(lock_kicker_irq);
atomic_t *nest_cnt = this_cpu_ptr(&xen_qlock_wait_nest);
/* If kicker interrupts not initialized yet, just spin */
if (irq == -1)
if (irq == -1 || in_nmi())
return;
/* clear pending */
xen_clear_irq_pending(irq);
barrier();
/* Detect reentry. */
atomic_inc(nest_cnt);
/*
* We check the byte value after clearing pending IRQ to make sure
* that we won't miss a wakeup event because of the clearing.
*
* The sync_clear_bit() call in xen_clear_irq_pending() is atomic.
* So it is effectively a memory barrier for x86.
*/
if (READ_ONCE(*byte) != val)
return;
/* If irq pending already and no nested call clear it. */
if (atomic_read(nest_cnt) == 1 && xen_test_irq_pending(irq)) {
xen_clear_irq_pending(irq);
} else if (READ_ONCE(*byte) == val) {
/* Block until irq becomes pending (or a spurious wakeup) */
xen_poll_irq(irq);
}
/*
* If an interrupt happens here, it will leave the wakeup irq
* pending, which will cause xen_poll_irq() to return
* immediately.
*/
/* Block until irq becomes pending (or perhaps a spurious wakeup) */
xen_poll_irq(irq);
atomic_dec(nest_cnt);
}
static irqreturn_t dummy_handler(int irq, void *dev_id)
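The per-cpu nest counter is the heart of the fix: only the outermost waiter may consume a pending kick, otherwise a nested call (e.g. from NMI context) would eat the wakeup meant for the interrupted outer waiter. A simplified single-threaded model of that rule (userspace sketch, not the real event-channel machinery):

#include <assert.h>
#include <stdatomic.h>

static _Atomic int nest_cnt;
static _Atomic int irq_pending;

static void qlock_wait(int nested)
{
	atomic_fetch_add(&nest_cnt, 1);
	/* only the outermost call may clear a pending kick */
	if (atomic_load(&nest_cnt) == 1 && atomic_load(&irq_pending))
		atomic_store(&irq_pending, 0);
	else if (nested)
		assert(atomic_load(&irq_pending)); /* kick preserved for outer waiter */
	atomic_fetch_sub(&nest_cnt, 1);
}

int main(void)
{
	atomic_store(&irq_pending, 1);
	atomic_fetch_add(&nest_cnt, 1);     /* pretend an outer waiter is active */
	qlock_wait(1);                      /* nested call must not clear it */
	atomic_fetch_sub(&nest_cnt, 1);
	qlock_wait(0);                      /* outermost call consumes the kick */
	assert(!atomic_load(&irq_pending));
	return 0;
}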

View File

@@ -181,7 +181,7 @@ canary:
.fill 48, 1, 0
early_stack:
.fill 256, 1, 0
.fill BOOT_STACK_SIZE, 1, 0
early_stack_end:
ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY,

View File

@@ -1181,10 +1181,17 @@ bool __bfq_deactivate_entity(struct bfq_entity *entity, bool ins_into_idle_tree)
st = bfq_entity_service_tree(entity);
is_in_service = entity == sd->in_service_entity;
if (is_in_service) {
bfq_calc_finish(entity, entity->service);
bfq_calc_finish(entity, entity->service);
if (is_in_service)
sd->in_service_entity = NULL;
}
else
/*
* Non in-service entity: nobody will take care of
* resetting its service counter on expiration. Do it
* now.
*/
entity->service = 0;
if (entity->tree == &st->active)
bfq_active_extract(st, entity);

View File

@@ -58,8 +58,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
if (!req_sects)
goto fail;
if (req_sects > UINT_MAX >> 9)
req_sects = UINT_MAX >> 9;
req_sects = min(req_sects, bio_allowed_max_sectors(q));
end_sect = sector + req_sects;
@@ -162,7 +161,7 @@ static int __blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
return -EOPNOTSUPP;
/* Ensure that max_write_same_sectors doesn't overflow bi_size */
max_write_same_sectors = UINT_MAX >> 9;
max_write_same_sectors = bio_allowed_max_sectors(q);
while (nr_sects) {
bio = next_bio(bio, 1, gfp_mask);

View File

@@ -27,7 +27,8 @@ static struct bio *blk_bio_discard_split(struct request_queue *q,
/* Zero-sector (unknown) and one-sector granularities are the same. */
granularity = max(q->limits.discard_granularity >> 9, 1U);
max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);
max_discard_sectors = min(q->limits.max_discard_sectors,
bio_allowed_max_sectors(q));
max_discard_sectors -= max_discard_sectors % granularity;
if (unlikely(!max_discard_sectors)) {

View File

@@ -328,6 +328,16 @@ static inline unsigned long blk_rq_deadline(struct request *rq)
return rq->__deadline & ~0x1UL;
}
/*
* The max size one bio can handle is UINT_MAX because bvec_iter.bi_size
* is defined as 'unsigned int'; meanwhile it has to be aligned to the
* logical block size, which is the minimum unit accepted by the hardware.
*/
static inline unsigned int bio_allowed_max_sectors(struct request_queue *q)
{
return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9;
}
/*
* Internal io_context interface
*/
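The arithmetic in the helper above is simple enough to check numerically: rounding UINT_MAX down to the logical block size keeps the bio size representable in bvec_iter.bi_size while staying block-aligned. A worked check for 4096-byte blocks (userspace sketch; round_down open-coded):

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned int lbs = 4096;                       /* assumed logical block size */
	unsigned int max = (0xffffffffu / lbs) * lbs;  /* round_down(UINT_MAX, lbs) */

	assert(max % lbs == 0);          /* bio size stays a multiple of the block */
	printf("max bio: %u bytes = %u sectors\n", max, max >> 9);
	return 0;
}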

View File

@@ -31,6 +31,24 @@
static struct bio_set bounce_bio_set, bounce_bio_split;
static mempool_t page_pool, isa_page_pool;
static void init_bounce_bioset(void)
{
static bool bounce_bs_setup;
int ret;
if (bounce_bs_setup)
return;
ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
BUG_ON(ret);
if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
BUG_ON(1);
ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
BUG_ON(ret);
bounce_bs_setup = true;
}
#if defined(CONFIG_HIGHMEM)
static __init int init_emergency_pool(void)
{
@@ -44,14 +62,7 @@ static __init int init_emergency_pool(void)
BUG_ON(ret);
pr_info("pool size: %d pages\n", POOL_SIZE);
ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
BUG_ON(ret);
if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
BUG_ON(1);
ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
BUG_ON(ret);
init_bounce_bioset();
return 0;
}
@@ -86,6 +97,8 @@ static void *mempool_alloc_pages_isa(gfp_t gfp_mask, void *data)
return mempool_alloc_pages(gfp_mask | GFP_DMA, data);
}
static DEFINE_MUTEX(isa_mutex);
/*
* gets called "every" time someone inits a queue with BLK_BOUNCE_ISA
* as the max address, so check if the pool has already been created.
@@ -94,14 +107,20 @@ int init_emergency_isa_pool(void)
{
int ret;
if (mempool_initialized(&isa_page_pool))
mutex_lock(&isa_mutex);
if (mempool_initialized(&isa_page_pool)) {
mutex_unlock(&isa_mutex);
return 0;
}
ret = mempool_init(&isa_page_pool, ISA_POOL_SIZE, mempool_alloc_pages_isa,
mempool_free_pages, (void *) 0);
BUG_ON(ret);
pr_info("isa pool size: %d pages\n", ISA_POOL_SIZE);
init_bounce_bioset();
mutex_unlock(&isa_mutex);
return 0;
}
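The mutex turns the old racy fast-path check into the classic check-under-lock pattern: take the lock first, re-test the flag, and only then initialize, so two racing callers cannot both build the pool. A pthread sketch of the same shape (names are illustrative stand-ins):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t isa_mutex = PTHREAD_MUTEX_INITIALIZER;
static int pool_initialized;

static int init_emergency_isa_pool(void)
{
	pthread_mutex_lock(&isa_mutex);
	if (pool_initialized) {          /* re-test under the lock */
		pthread_mutex_unlock(&isa_mutex);
		return 0;
	}
	/* ... allocate the page pool and bounce biosets here ... */
	pool_initialized = 1;
	printf("pool initialized once\n");
	pthread_mutex_unlock(&isa_mutex);
	return 0;
}

int main(void)
{
	init_emergency_isa_pool();
	return init_emergency_isa_pool(); /* second call is a no-op */
}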

View File

@@ -1590,20 +1590,6 @@ config CRYPTO_SM4
If unsure, say N.
config CRYPTO_SPECK
tristate "Speck cipher algorithm"
select CRYPTO_ALGAPI
help
Speck is a lightweight block cipher that is tuned for optimal
performance in software (rather than hardware).
Speck may not be as secure as AES, and should only be used on systems
where AES is not fast enough.
See also: <https://eprint.iacr.org/2013/404.pdf>
If unsure, say N.
config CRYPTO_TEA
tristate "TEA, XTEA and XETA cipher algorithms"
select CRYPTO_ALGAPI

View File

@@ -115,7 +115,6 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
obj-$(CONFIG_CRYPTO_SEED) += seed.o
obj-$(CONFIG_CRYPTO_SPECK) += speck.o
obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
obj-$(CONFIG_CRYPTO_CHACHA20) += chacha20_generic.o
obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o

View File

@@ -21,7 +21,7 @@
union aegis_block {
__le64 words64[AEGIS_BLOCK_SIZE / sizeof(__le64)];
u32 words32[AEGIS_BLOCK_SIZE / sizeof(u32)];
__le32 words32[AEGIS_BLOCK_SIZE / sizeof(__le32)];
u8 bytes[AEGIS_BLOCK_SIZE];
};
@@ -57,24 +57,22 @@ static void crypto_aegis_aesenc(union aegis_block *dst,
const union aegis_block *src,
const union aegis_block *key)
{
u32 *d = dst->words32;
const u8 *s = src->bytes;
const u32 *k = key->words32;
const u32 *t0 = crypto_ft_tab[0];
const u32 *t1 = crypto_ft_tab[1];
const u32 *t2 = crypto_ft_tab[2];
const u32 *t3 = crypto_ft_tab[3];
u32 d0, d1, d2, d3;
d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]] ^ k[0];
d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]] ^ k[1];
d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]] ^ k[2];
d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]] ^ k[3];
d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]];
d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]];
d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]];
d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]];
d[0] = d0;
d[1] = d1;
d[2] = d2;
d[3] = d3;
dst->words32[0] = cpu_to_le32(d0) ^ key->words32[0];
dst->words32[1] = cpu_to_le32(d1) ^ key->words32[1];
dst->words32[2] = cpu_to_le32(d2) ^ key->words32[2];
dst->words32[3] = cpu_to_le32(d3) ^ key->words32[3];
}
#endif /* _CRYPTO_AEGIS_H */
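The switch from u32 to __le32 only matters on big-endian hosts: the table lookups produce native-endian words, and storing them without conversion would make the AEGIS state differ between architectures. A userspace sketch of the fixed-endianness store that cpu_to_le32 provides:

#include <stdint.h>
#include <stdio.h>

/* store a 32-bit value little-endian regardless of host byte order */
static void store_le32(uint8_t out[4], uint32_t v)
{
	out[0] = v & 0xff;
	out[1] = (v >> 8) & 0xff;
	out[2] = (v >> 16) & 0xff;
	out[3] = (v >> 24) & 0xff;
}

int main(void)
{
	uint8_t b[4];

	store_le32(b, 0x12345678);
	printf("%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]); /* 78 56 34 12 */
	return 0;
}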

View File

@@ -143,7 +143,12 @@ static inline int get_index128(be128 *block)
return x + ffz(val);
}
return x;
/*
* If we get here, then x == 128 and we are incrementing the counter
* from all ones to all zeros. This means we must return index 127, i.e.
* the one corresponding to key2*{ 1,...,1 }.
*/
return 127;
}
static int post_crypt(struct skcipher_request *req)
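get_index128() reports the position of the lowest clear bit of the tweak counter, i.e. how many trailing one-bits the increment will carry through; the wrap from all ones to all zeros flips every bit, so index 127 is the right answer. The same idea at 64-bit width (userspace sketch):

#include <assert.h>
#include <stdint.h>

static int get_index64(uint64_t ctr)
{
	int i;

	for (i = 0; i < 64; i++)
		if (!(ctr & (1ULL << i)))    /* lowest clear bit */
			return i;
	return 63;    /* all ones: wrap to zero, the top bit changes too */
}

int main(void)
{
	assert(get_index64(0) == 0);
	assert(get_index64(3) == 2);         /* ...011 -> ...100 */
	assert(get_index64(~0ULL) == 63);    /* the wrap case fixed above */
	return 0;
}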

View File

@@ -385,14 +385,11 @@ static void crypto_morus1280_final(struct morus1280_state *state,
struct morus1280_block *tag_xor,
u64 assoclen, u64 cryptlen)
{
u64 assocbits = assoclen * 8;
u64 cryptbits = cryptlen * 8;
struct morus1280_block tmp;
unsigned int i;
tmp.words[0] = cpu_to_le64(assocbits);
tmp.words[1] = cpu_to_le64(cryptbits);
tmp.words[0] = assoclen * 8;
tmp.words[1] = cryptlen * 8;
tmp.words[2] = 0;
tmp.words[3] = 0;

View File

@@ -384,21 +384,13 @@ static void crypto_morus640_final(struct morus640_state *state,
struct morus640_block *tag_xor,
u64 assoclen, u64 cryptlen)
{
u64 assocbits = assoclen * 8;
u64 cryptbits = cryptlen * 8;
u32 assocbits_lo = (u32)assocbits;
u32 assocbits_hi = (u32)(assocbits >> 32);
u32 cryptbits_lo = (u32)cryptbits;
u32 cryptbits_hi = (u32)(cryptbits >> 32);
struct morus640_block tmp;
unsigned int i;
tmp.words[0] = cpu_to_le32(assocbits_lo);
tmp.words[1] = cpu_to_le32(assocbits_hi);
tmp.words[2] = cpu_to_le32(cryptbits_lo);
tmp.words[3] = cpu_to_le32(cryptbits_hi);
tmp.words[0] = lower_32_bits(assoclen * 8);
tmp.words[1] = upper_32_bits(assoclen * 8);
tmp.words[2] = lower_32_bits(cryptlen * 8);
tmp.words[3] = upper_32_bits(cryptlen * 8);
for (i = 0; i < MORUS_BLOCK_WORDS; i++)
state->s[4].words[i] ^= state->s[0].words[i];
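lower_32_bits()/upper_32_bits() simply split a u64 into its two halves; a quick check that the length-in-bits encoding above round-trips (userspace sketch reimplementing the two kernel helpers; the assoclen value is arbitrary):

#include <assert.h>
#include <stdint.h>

static uint32_t lower_32_bits(uint64_t v) { return (uint32_t)v; }
static uint32_t upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }

int main(void)
{
	uint64_t assoclen = 0x123456789aULL;   /* arbitrary assumed length */
	uint64_t bits = assoclen * 8;
	uint64_t joined = ((uint64_t)upper_32_bits(bits) << 32) | lower_32_bits(bits);

	assert(joined == bits);
	return 0;
}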

View File

@@ -1,307 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Speck: a lightweight block cipher
*
* Copyright (c) 2018 Google, Inc
*
* Speck has 10 variants, including 5 block sizes. For now we only implement
* the variants Speck128/128, Speck128/192, Speck128/256, Speck64/96, and
* Speck64/128. Speck${B}/${K} denotes the variant with a block size of B bits
* and a key size of K bits. The Speck128 variants are believed to be the most
* secure variants, and they use the same block size and key sizes as AES. The
* Speck64 variants are less secure, but on 32-bit processors are usually
* faster. The remaining variants (Speck32, Speck48, and Speck96) are even less
* secure and/or not as well suited for implementation on either 32-bit or
* 64-bit processors, so are omitted.
*
* Reference: "The Simon and Speck Families of Lightweight Block Ciphers"
* https://eprint.iacr.org/2013/404.pdf
*
* In a correspondence, the Speck designers have also clarified that the words
* should be interpreted in little-endian format, and the words should be
* ordered such that the first word of each block is 'y' rather than 'x', and
* the first key word (rather than the last) becomes the first round key.
*/
#include <asm/unaligned.h>
#include <crypto/speck.h>
#include <linux/bitops.h>
#include <linux/crypto.h>
#include <linux/init.h>
#include <linux/module.h>
/* Speck128 */
static __always_inline void speck128_round(u64 *x, u64 *y, u64 k)
{
*x = ror64(*x, 8);
*x += *y;
*x ^= k;
*y = rol64(*y, 3);
*y ^= *x;
}
static __always_inline void speck128_unround(u64 *x, u64 *y, u64 k)
{
*y ^= *x;
*y = ror64(*y, 3);
*x ^= k;
*x -= *y;
*x = rol64(*x, 8);
}
void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
u8 *out, const u8 *in)
{
u64 y = get_unaligned_le64(in);
u64 x = get_unaligned_le64(in + 8);
int i;
for (i = 0; i < ctx->nrounds; i++)
speck128_round(&x, &y, ctx->round_keys[i]);
put_unaligned_le64(y, out);
put_unaligned_le64(x, out + 8);
}
EXPORT_SYMBOL_GPL(crypto_speck128_encrypt);
static void speck128_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
crypto_speck128_encrypt(crypto_tfm_ctx(tfm), out, in);
}
void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
u8 *out, const u8 *in)
{
u64 y = get_unaligned_le64(in);
u64 x = get_unaligned_le64(in + 8);
int i;
for (i = ctx->nrounds - 1; i >= 0; i--)
speck128_unround(&x, &y, ctx->round_keys[i]);
put_unaligned_le64(y, out);
put_unaligned_le64(x, out + 8);
}
EXPORT_SYMBOL_GPL(crypto_speck128_decrypt);
static void speck128_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
crypto_speck128_decrypt(crypto_tfm_ctx(tfm), out, in);
}
int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
unsigned int keylen)
{
u64 l[3];
u64 k;
int i;
switch (keylen) {
case SPECK128_128_KEY_SIZE:
k = get_unaligned_le64(key);
l[0] = get_unaligned_le64(key + 8);
ctx->nrounds = SPECK128_128_NROUNDS;
for (i = 0; i < ctx->nrounds; i++) {
ctx->round_keys[i] = k;
speck128_round(&l[0], &k, i);
}
break;
case SPECK128_192_KEY_SIZE:
k = get_unaligned_le64(key);
l[0] = get_unaligned_le64(key + 8);
l[1] = get_unaligned_le64(key + 16);
ctx->nrounds = SPECK128_192_NROUNDS;
for (i = 0; i < ctx->nrounds; i++) {
ctx->round_keys[i] = k;
speck128_round(&l[i % 2], &k, i);
}
break;
case SPECK128_256_KEY_SIZE:
k = get_unaligned_le64(key);
l[0] = get_unaligned_le64(key + 8);
l[1] = get_unaligned_le64(key + 16);
l[2] = get_unaligned_le64(key + 24);
ctx->nrounds = SPECK128_256_NROUNDS;
for (i = 0; i < ctx->nrounds; i++) {
ctx->round_keys[i] = k;
speck128_round(&l[i % 3], &k, i);
}
break;
default:
return -EINVAL;
}
return 0;
}
EXPORT_SYMBOL_GPL(crypto_speck128_setkey);
static int speck128_setkey(struct crypto_tfm *tfm, const u8 *key,
unsigned int keylen)
{
return crypto_speck128_setkey(crypto_tfm_ctx(tfm), key, keylen);
}
/* Speck64 */
static __always_inline void speck64_round(u32 *x, u32 *y, u32 k)
{
*x = ror32(*x, 8);
*x += *y;
*x ^= k;
*y = rol32(*y, 3);
*y ^= *x;
}
static __always_inline void speck64_unround(u32 *x, u32 *y, u32 k)
{
*y ^= *x;
*y = ror32(*y, 3);
*x ^= k;
*x -= *y;
*x = rol32(*x, 8);
}
void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
u8 *out, const u8 *in)
{
u32 y = get_unaligned_le32(in);
u32 x = get_unaligned_le32(in + 4);
int i;
for (i = 0; i < ctx->nrounds; i++)
speck64_round(&x, &y, ctx->round_keys[i]);
put_unaligned_le32(y, out);
put_unaligned_le32(x, out + 4);
}
EXPORT_SYMBOL_GPL(crypto_speck64_encrypt);
static void speck64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
crypto_speck64_encrypt(crypto_tfm_ctx(tfm), out, in);
}
void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
u8 *out, const u8 *in)
{
u32 y = get_unaligned_le32(in);
u32 x = get_unaligned_le32(in + 4);
int i;
for (i = ctx->nrounds - 1; i >= 0; i--)
speck64_unround(&x, &y, ctx->round_keys[i]);
put_unaligned_le32(y, out);
put_unaligned_le32(x, out + 4);
}
EXPORT_SYMBOL_GPL(crypto_speck64_decrypt);
static void speck64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
crypto_speck64_decrypt(crypto_tfm_ctx(tfm), out, in);
}
int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
unsigned int keylen)
{
u32 l[3];
u32 k;
int i;
switch (keylen) {
case SPECK64_96_KEY_SIZE:
k = get_unaligned_le32(key);
l[0] = get_unaligned_le32(key + 4);
l[1] = get_unaligned_le32(key + 8);
ctx->nrounds = SPECK64_96_NROUNDS;
for (i = 0; i < ctx->nrounds; i++) {
ctx->round_keys[i] = k;
speck64_round(&l[i % 2], &k, i);
}
break;
case SPECK64_128_KEY_SIZE:
k = get_unaligned_le32(key);
l[0] = get_unaligned_le32(key + 4);
l[1] = get_unaligned_le32(key + 8);
l[2] = get_unaligned_le32(key + 12);
ctx->nrounds = SPECK64_128_NROUNDS;
for (i = 0; i < ctx->nrounds; i++) {
ctx->round_keys[i] = k;
speck64_round(&l[i % 3], &k, i);
}
break;
default:
return -EINVAL;
}
return 0;
}
EXPORT_SYMBOL_GPL(crypto_speck64_setkey);
static int speck64_setkey(struct crypto_tfm *tfm, const u8 *key,
unsigned int keylen)
{
return crypto_speck64_setkey(crypto_tfm_ctx(tfm), key, keylen);
}
/* Algorithm definitions */
static struct crypto_alg speck_algs[] = {
{
.cra_name = "speck128",
.cra_driver_name = "speck128-generic",
.cra_priority = 100,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = SPECK128_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct speck128_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_u = {
.cipher = {
.cia_min_keysize = SPECK128_128_KEY_SIZE,
.cia_max_keysize = SPECK128_256_KEY_SIZE,
.cia_setkey = speck128_setkey,
.cia_encrypt = speck128_encrypt,
.cia_decrypt = speck128_decrypt
}
}
}, {
.cra_name = "speck64",
.cra_driver_name = "speck64-generic",
.cra_priority = 100,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = SPECK64_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct speck64_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_u = {
.cipher = {
.cia_min_keysize = SPECK64_96_KEY_SIZE,
.cia_max_keysize = SPECK64_128_KEY_SIZE,
.cia_setkey = speck64_setkey,
.cia_encrypt = speck64_encrypt,
.cia_decrypt = speck64_decrypt
}
}
}
};
static int __init speck_module_init(void)
{
return crypto_register_algs(speck_algs, ARRAY_SIZE(speck_algs));
}
static void __exit speck_module_exit(void)
{
crypto_unregister_algs(speck_algs, ARRAY_SIZE(speck_algs));
}
module_init(speck_module_init);
module_exit(speck_module_exit);
MODULE_DESCRIPTION("Speck block cipher (generic)");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
MODULE_ALIAS_CRYPTO("speck128");
MODULE_ALIAS_CRYPTO("speck128-generic");
MODULE_ALIAS_CRYPTO("speck64");
MODULE_ALIAS_CRYPTO("speck64-generic");

View File

@@ -1103,6 +1103,9 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
break;
}
if (speed[i].klen)
crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen);
pr_info("test%3u "
"(%5u byte blocks,%5u bytes per update,%4u updates): ",
i, speed[i].blen, speed[i].plen, speed[i].blen / speed[i].plen);

View File

@@ -3037,18 +3037,6 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.cipher = __VECS(sm4_tv_template)
}
}, {
.alg = "ecb(speck128)",
.test = alg_test_skcipher,
.suite = {
.cipher = __VECS(speck128_tv_template)
}
}, {
.alg = "ecb(speck64)",
.test = alg_test_skcipher,
.suite = {
.cipher = __VECS(speck64_tv_template)
}
}, {
.alg = "ecb(tea)",
.test = alg_test_skcipher,
@@ -3576,18 +3564,6 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.cipher = __VECS(serpent_xts_tv_template)
}
}, {
.alg = "xts(speck128)",
.test = alg_test_skcipher,
.suite = {
.cipher = __VECS(speck128_xts_tv_template)
}
}, {
.alg = "xts(speck64)",
.test = alg_test_skcipher,
.suite = {
.cipher = __VECS(speck64_xts_tv_template)
}
}, {
.alg = "xts(twofish)",
.test = alg_test_skcipher,

View File

@@ -10198,744 +10198,6 @@ static const struct cipher_testvec sm4_tv_template[] = {
}
};
/*
* Speck test vectors taken from the original paper:
* "The Simon and Speck Families of Lightweight Block Ciphers"
* https://eprint.iacr.org/2013/404.pdf
*
* Note that the paper does not make byte and word order clear. But it was
* confirmed with the authors that the intended orders are little endian byte
* order and (y, x) word order. Equivalently, the printed test vectors, when
* looking at only the bytes (ignoring the whitespace that divides them into
* words), are backwards: the left-most byte is actually the one with the
* highest memory address, while the right-most byte is actually the one with
* the lowest memory address.
*/
static const struct cipher_testvec speck128_tv_template[] = {
{ /* Speck128/128 */
.key = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
.klen = 16,
.ptext = "\x20\x6d\x61\x64\x65\x20\x69\x74"
"\x20\x65\x71\x75\x69\x76\x61\x6c",
.ctext = "\x18\x0d\x57\x5c\xdf\xfe\x60\x78"
"\x65\x32\x78\x79\x51\x98\x5d\xa6",
.len = 16,
}, { /* Speck128/192 */
.key = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17",
.klen = 24,
.ptext = "\x65\x6e\x74\x20\x74\x6f\x20\x43"
"\x68\x69\x65\x66\x20\x48\x61\x72",
.ctext = "\x86\x18\x3c\xe0\x5d\x18\xbc\xf9"
"\x66\x55\x13\x13\x3a\xcf\xe4\x1b",
.len = 16,
}, { /* Speck128/256 */
.key = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
.klen = 32,
.ptext = "\x70\x6f\x6f\x6e\x65\x72\x2e\x20"
"\x49\x6e\x20\x74\x68\x6f\x73\x65",
.ctext = "\x43\x8f\x18\x9c\x8d\xb4\xee\x4e"
"\x3e\xf5\xc0\x05\x04\x01\x09\x41",
.len = 16,
},
};
/*
* Speck128-XTS test vectors, taken from the AES-XTS test vectors with the
* ciphertext recomputed with Speck128 as the cipher
*/
static const struct cipher_testvec speck128_xts_tv_template[] = {
{
.key = "\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.klen = 32,
.iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ptext = "\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ctext = "\xbe\xa0\xe7\x03\xd7\xfe\xab\x62"
"\x3b\x99\x4a\x64\x74\x77\xac\xed"
"\xd8\xf4\xa6\xcf\xae\xb9\x07\x42"
"\x51\xd9\xb6\x1d\xe0\x5e\xbc\x54",
.len = 32,
}, {
.key = "\x11\x11\x11\x11\x11\x11\x11\x11"
"\x11\x11\x11\x11\x11\x11\x11\x11"
"\x22\x22\x22\x22\x22\x22\x22\x22"
"\x22\x22\x22\x22\x22\x22\x22\x22",
.klen = 32,
.iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ptext = "\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44",
.ctext = "\xfb\x53\x81\x75\x6f\x9f\x34\xad"
"\x7e\x01\xed\x7b\xcc\xda\x4e\x4a"
"\xd4\x84\xa4\x53\xd5\x88\x73\x1b"
"\xfd\xcb\xae\x0d\xf3\x04\xee\xe6",
.len = 32,
}, {
.key = "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
"\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
"\x22\x22\x22\x22\x22\x22\x22\x22"
"\x22\x22\x22\x22\x22\x22\x22\x22",
.klen = 32,
.iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ptext = "\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44",
.ctext = "\x21\x52\x84\x15\xd1\xf7\x21\x55"
"\xd9\x75\x4a\xd3\xc5\xdb\x9f\x7d"
"\xda\x63\xb2\xf1\x82\xb0\x89\x59"
"\x86\xd4\xaa\xaa\xdd\xff\x4f\x92",
.len = 32,
}, {
.key = "\x27\x18\x28\x18\x28\x45\x90\x45"
"\x23\x53\x60\x28\x74\x71\x35\x26"
"\x31\x41\x59\x26\x53\x58\x97\x93"
"\x23\x84\x62\x64\x33\x83\x27\x95",
.klen = 32,
.iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\x30\x31\x32\x33\x34\x35\x36\x37"
"\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
"\x40\x41\x42\x43\x44\x45\x46\x47"
"\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
"\x50\x51\x52\x53\x54\x55\x56\x57"
"\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
"\x60\x61\x62\x63\x64\x65\x66\x67"
"\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
"\x70\x71\x72\x73\x74\x75\x76\x77"
"\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
"\x80\x81\x82\x83\x84\x85\x86\x87"
"\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
"\x90\x91\x92\x93\x94\x95\x96\x97"
"\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
"\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
"\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
"\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
"\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
"\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
"\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
"\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
"\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
"\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
"\xe8\xe9\xea\xeb\xec\xed\xee\xef"
"\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
"\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
"\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\x30\x31\x32\x33\x34\x35\x36\x37"
"\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
"\x40\x41\x42\x43\x44\x45\x46\x47"
"\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
"\x50\x51\x52\x53\x54\x55\x56\x57"
"\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
"\x60\x61\x62\x63\x64\x65\x66\x67"
"\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
"\x70\x71\x72\x73\x74\x75\x76\x77"
"\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
"\x80\x81\x82\x83\x84\x85\x86\x87"
"\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
"\x90\x91\x92\x93\x94\x95\x96\x97"
"\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
"\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
"\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
"\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
"\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
"\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
"\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
"\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
"\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
"\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
"\xe8\xe9\xea\xeb\xec\xed\xee\xef"
"\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
"\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
.ctext = "\x57\xb5\xf8\x71\x6e\x6d\xdd\x82"
"\x53\xd0\xed\x2d\x30\xc1\x20\xef"
"\x70\x67\x5e\xff\x09\x70\xbb\xc1"
"\x3a\x7b\x48\x26\xd9\x0b\xf4\x48"
"\xbe\xce\xb1\xc7\xb2\x67\xc4\xa7"
"\x76\xf8\x36\x30\xb7\xb4\x9a\xd9"
"\xf5\x9d\xd0\x7b\xc1\x06\x96\x44"
"\x19\xc5\x58\x84\x63\xb9\x12\x68"
"\x68\xc7\xaa\x18\x98\xf2\x1f\x5c"
"\x39\xa6\xd8\x32\x2b\xc3\x51\xfd"
"\x74\x79\x2e\xb4\x44\xd7\x69\xc4"
"\xfc\x29\xe6\xed\x26\x1e\xa6\x9d"
"\x1c\xbe\x00\x0e\x7f\x3a\xca\xfb"
"\x6d\x13\x65\xa0\xf9\x31\x12\xe2"
"\x26\xd1\xec\x2b\x0a\x8b\x59\x99"
"\xa7\x49\xa0\x0e\x09\x33\x85\x50"
"\xc3\x23\xca\x7a\xdd\x13\x45\x5f"
"\xde\x4c\xa7\xcb\x00\x8a\x66\x6f"
"\xa2\xb6\xb1\x2e\xe1\xa0\x18\xf6"
"\xad\xf3\xbd\xeb\xc7\xef\x55\x4f"
"\x79\x91\x8d\x36\x13\x7b\xd0\x4a"
"\x6c\x39\xfb\x53\xb8\x6f\x02\x51"
"\xa5\x20\xac\x24\x1c\x73\x59\x73"
"\x58\x61\x3a\x87\x58\xb3\x20\x56"
"\x39\x06\x2b\x4d\xd3\x20\x2b\x89"
"\x3f\xa2\xf0\x96\xeb\x7f\xa4\xcd"
"\x11\xae\xbd\xcb\x3a\xb4\xd9\x91"
"\x09\x35\x71\x50\x65\xac\x92\xe3"
"\x7b\x32\xc0\x7a\xdd\xd4\xc3\x92"
"\x6f\xeb\x79\xde\x6f\xd3\x25\xc9"
"\xcd\x63\xf5\x1e\x7a\x3b\x26\x9d"
"\x77\x04\x80\xa9\xbf\x38\xb5\xbd"
"\xb8\x05\x07\xbd\xfd\xab\x7b\xf8"
"\x2a\x26\xcc\x49\x14\x6d\x55\x01"
"\x06\x94\xd8\xb2\x2d\x53\x83\x1b"
"\x8f\xd4\xdd\x57\x12\x7e\x18\xba"
"\x8e\xe2\x4d\x80\xef\x7e\x6b\x9d"
"\x24\xa9\x60\xa4\x97\x85\x86\x2a"
"\x01\x00\x09\xf1\xcb\x4a\x24\x1c"
"\xd8\xf6\xe6\x5b\xe7\x5d\xf2\xc4"
"\x97\x1c\x10\xc6\x4d\x66\x4f\x98"
"\x87\x30\xac\xd5\xea\x73\x49\x10"
"\x80\xea\xe5\x5f\x4d\x5f\x03\x33"
"\x66\x02\x35\x3d\x60\x06\x36\x4f"
"\x14\x1c\xd8\x07\x1f\x78\xd0\xf8"
"\x4f\x6c\x62\x7c\x15\xa5\x7c\x28"
"\x7c\xcc\xeb\x1f\xd1\x07\x90\x93"
"\x7e\xc2\xa8\x3a\x80\xc0\xf5\x30"
"\xcc\x75\xcf\x16\x26\xa9\x26\x3b"
"\xe7\x68\x2f\x15\x21\x5b\xe4\x00"
"\xbd\x48\x50\xcd\x75\x70\xc4\x62"
"\xbb\x41\xfb\x89\x4a\x88\x3b\x3b"
"\x51\x66\x02\x69\x04\x97\x36\xd4"
"\x75\xae\x0b\xa3\x42\xf8\xca\x79"
"\x8f\x93\xe9\xcc\x38\xbd\xd6\xd2"
"\xf9\x70\x4e\xc3\x6a\x8e\x25\xbd"
"\xea\x15\x5a\xa0\x85\x7e\x81\x0d"
"\x03\xe7\x05\x39\xf5\x05\x26\xee"
"\xec\xaa\x1f\x3d\xc9\x98\x76\x01"
"\x2c\xf4\xfc\xa3\x88\x77\x38\xc4"
"\x50\x65\x50\x6d\x04\x1f\xdf\x5a"
"\xaa\xf2\x01\xa9\xc1\x8d\xee\xca"
"\x47\x26\xef\x39\xb8\xb4\xf2\xd1"
"\xd6\xbb\x1b\x2a\xc1\x34\x14\xcf",
.len = 512,
}, {
.key = "\x27\x18\x28\x18\x28\x45\x90\x45"
"\x23\x53\x60\x28\x74\x71\x35\x26"
"\x62\x49\x77\x57\x24\x70\x93\x69"
"\x99\x59\x57\x49\x66\x96\x76\x27"
"\x31\x41\x59\x26\x53\x58\x97\x93"
"\x23\x84\x62\x64\x33\x83\x27\x95"
"\x02\x88\x41\x97\x16\x93\x99\x37"
"\x51\x05\x82\x09\x74\x94\x45\x92",
.klen = 64,
.iv = "\xff\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\x30\x31\x32\x33\x34\x35\x36\x37"
"\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
"\x40\x41\x42\x43\x44\x45\x46\x47"
"\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
"\x50\x51\x52\x53\x54\x55\x56\x57"
"\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
"\x60\x61\x62\x63\x64\x65\x66\x67"
"\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
"\x70\x71\x72\x73\x74\x75\x76\x77"
"\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
"\x80\x81\x82\x83\x84\x85\x86\x87"
"\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
"\x90\x91\x92\x93\x94\x95\x96\x97"
"\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
"\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
"\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
"\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
"\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
"\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
"\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
"\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
"\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
"\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
"\xe8\xe9\xea\xeb\xec\xed\xee\xef"
"\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
"\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
"\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\x30\x31\x32\x33\x34\x35\x36\x37"
"\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
"\x40\x41\x42\x43\x44\x45\x46\x47"
"\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
"\x50\x51\x52\x53\x54\x55\x56\x57"
"\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
"\x60\x61\x62\x63\x64\x65\x66\x67"
"\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
"\x70\x71\x72\x73\x74\x75\x76\x77"
"\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
"\x80\x81\x82\x83\x84\x85\x86\x87"
"\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
"\x90\x91\x92\x93\x94\x95\x96\x97"
"\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
"\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
"\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
"\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
"\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
"\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
"\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
"\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
"\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
"\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
"\xe8\xe9\xea\xeb\xec\xed\xee\xef"
"\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
"\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
.ctext = "\xc5\x85\x2a\x4b\x73\xe4\xf6\xf1"
"\x7e\xf9\xf6\xe9\xa3\x73\x36\xcb"
"\xaa\xb6\x22\xb0\x24\x6e\x3d\x73"
"\x92\x99\xde\xd3\x76\xed\xcd\x63"
"\x64\x3a\x22\x57\xc1\x43\x49\xd4"
"\x79\x36\x31\x19\x62\xae\x10\x7e"
"\x7d\xcf\x7a\xe2\x6b\xce\x27\xfa"
"\xdc\x3d\xd9\x83\xd3\x42\x4c\xe0"
"\x1b\xd6\x1d\x1a\x6f\xd2\x03\x00"
"\xfc\x81\x99\x8a\x14\x62\xf5\x7e"
"\x0d\xe7\x12\xe8\x17\x9d\x0b\xec"
"\xe2\xf7\xc9\xa7\x63\xd1\x79\xb6"
"\x62\x62\x37\xfe\x0a\x4c\x4a\x37"
"\x70\xc7\x5e\x96\x5f\xbc\x8e\x9e"
"\x85\x3c\x4f\x26\x64\x85\xbc\x68"
"\xb0\xe0\x86\x5e\x26\x41\xce\x11"
"\x50\xda\x97\x14\xe9\x9e\xc7\x6d"
"\x3b\xdc\x43\xde\x2b\x27\x69\x7d"
"\xfc\xb0\x28\xbd\x8f\xb1\xc6\x31"
"\x14\x4d\xf0\x74\x37\xfd\x07\x25"
"\x96\x55\xe5\xfc\x9e\x27\x2a\x74"
"\x1b\x83\x4d\x15\x83\xac\x57\xa0"
"\xac\xa5\xd0\x38\xef\x19\x56\x53"
"\x25\x4b\xfc\xce\x04\x23\xe5\x6b"
"\xf6\xc6\x6c\x32\x0b\xb3\x12\xc5"
"\xed\x22\x34\x1c\x5d\xed\x17\x06"
"\x36\xa3\xe6\x77\xb9\x97\x46\xb8"
"\xe9\x3f\x7e\xc7\xbc\x13\x5c\xdc"
"\x6e\x3f\x04\x5e\xd1\x59\xa5\x82"
"\x35\x91\x3d\x1b\xe4\x97\x9f\x92"
"\x1c\x5e\x5f\x6f\x41\xd4\x62\xa1"
"\x8d\x39\xfc\x42\xfb\x38\x80\xb9"
"\x0a\xe3\xcc\x6a\x93\xd9\x7a\xb1"
"\xe9\x69\xaf\x0a\x6b\x75\x38\xa7"
"\xa1\xbf\xf7\xda\x95\x93\x4b\x78"
"\x19\xf5\x94\xf9\xd2\x00\x33\x37"
"\xcf\xf5\x9e\x9c\xf3\xcc\xa6\xee"
"\x42\xb2\x9e\x2c\x5f\x48\x23\x26"
"\x15\x25\x17\x03\x3d\xfe\x2c\xfc"
"\xeb\xba\xda\xe0\x00\x05\xb6\xa6"
"\x07\xb3\xe8\x36\x5b\xec\x5b\xbf"
"\xd6\x5b\x00\x74\xc6\x97\xf1\x6a"
"\x49\xa1\xc3\xfa\x10\x52\xb9\x14"
"\xad\xb7\x73\xf8\x78\x12\xc8\x59"
"\x17\x80\x4c\x57\x39\xf1\x6d\x80"
"\x25\x77\x0f\x5e\x7d\xf0\xaf\x21"
"\xec\xce\xb7\xc8\x02\x8a\xed\x53"
"\x2c\x25\x68\x2e\x1f\x85\x5e\x67"
"\xd1\x07\x7a\x3a\x89\x08\xe0\x34"
"\xdc\xdb\x26\xb4\x6b\x77\xfc\x40"
"\x31\x15\x72\xa0\xf0\x73\xd9\x3b"
"\xd5\xdb\xfe\xfc\x8f\xa9\x44\xa2"
"\x09\x9f\xc6\x33\xe5\xe2\x88\xe8"
"\xf3\xf0\x1a\xf4\xce\x12\x0f\xd6"
"\xf7\x36\xe6\xa4\xf4\x7a\x10\x58"
"\xcc\x1f\x48\x49\x65\x47\x75\xe9"
"\x28\xe1\x65\x7b\xf2\xc4\xb5\x07"
"\xf2\xec\x76\xd8\x8f\x09\xf3\x16"
"\xa1\x51\x89\x3b\xeb\x96\x42\xac"
"\x65\xe0\x67\x63\x29\xdc\xb4\x7d"
"\xf2\x41\x51\x6a\xcb\xde\x3c\xfb"
"\x66\x8d\x13\xca\xe0\x59\x2a\x00"
"\xc9\x53\x4c\xe6\x9e\xe2\x73\xd5"
"\x67\x19\xb2\xbd\x9a\x63\xd7\x5c",
.len = 512,
.also_non_np = 1,
.np = 3,
.tap = { 512 - 20, 4, 16 },
}
};
static const struct cipher_testvec speck64_tv_template[] = {
{ /* Speck64/96 */
.key = "\x00\x01\x02\x03\x08\x09\x0a\x0b"
"\x10\x11\x12\x13",
.klen = 12,
.ptext = "\x65\x61\x6e\x73\x20\x46\x61\x74",
.ctext = "\x6c\x94\x75\x41\xec\x52\x79\x9f",
.len = 8,
}, { /* Speck64/128 */
.key = "\x00\x01\x02\x03\x08\x09\x0a\x0b"
"\x10\x11\x12\x13\x18\x19\x1a\x1b",
.klen = 16,
.ptext = "\x2d\x43\x75\x74\x74\x65\x72\x3b",
.ctext = "\x8b\x02\x4e\x45\x48\xa5\x6f\x8c",
.len = 8,
},
};
/*
* Speck64-XTS test vectors, taken from the AES-XTS test vectors with the
* ciphertext recomputed with Speck64 as the cipher, and key lengths adjusted
*/
static const struct cipher_testvec speck64_xts_tv_template[] = {
{
.key = "\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.klen = 24,
.iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ptext = "\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ctext = "\x84\xaf\x54\x07\x19\xd4\x7c\xa6"
"\xe4\xfe\xdf\xc4\x1f\x34\xc3\xc2"
"\x80\xf5\x72\xe7\xcd\xf0\x99\x22"
"\x35\xa7\x2f\x06\xef\xdc\x51\xaa",
.len = 32,
}, {
.key = "\x11\x11\x11\x11\x11\x11\x11\x11"
"\x11\x11\x11\x11\x11\x11\x11\x11"
"\x22\x22\x22\x22\x22\x22\x22\x22",
.klen = 24,
.iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ptext = "\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44",
.ctext = "\x12\x56\x73\xcd\x15\x87\xa8\x59"
"\xcf\x84\xae\xd9\x1c\x66\xd6\x9f"
"\xb3\x12\x69\x7e\x36\xeb\x52\xff"
"\x62\xdd\xba\x90\xb3\xe1\xee\x99",
.len = 32,
}, {
.key = "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
"\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
"\x22\x22\x22\x22\x22\x22\x22\x22",
.klen = 24,
.iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ptext = "\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44"
"\x44\x44\x44\x44\x44\x44\x44\x44",
.ctext = "\x15\x1b\xe4\x2c\xa2\x5a\x2d\x2c"
"\x27\x36\xc0\xbf\x5d\xea\x36\x37"
"\x2d\x1a\x88\xbc\x66\xb5\xd0\x0b"
"\xa1\xbc\x19\xb2\x0f\x3b\x75\x34",
.len = 32,
}, {
.key = "\x27\x18\x28\x18\x28\x45\x90\x45"
"\x23\x53\x60\x28\x74\x71\x35\x26"
"\x31\x41\x59\x26\x53\x58\x97\x93",
.klen = 24,
.iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\x30\x31\x32\x33\x34\x35\x36\x37"
"\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
"\x40\x41\x42\x43\x44\x45\x46\x47"
"\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
"\x50\x51\x52\x53\x54\x55\x56\x57"
"\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
"\x60\x61\x62\x63\x64\x65\x66\x67"
"\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
"\x70\x71\x72\x73\x74\x75\x76\x77"
"\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
"\x80\x81\x82\x83\x84\x85\x86\x87"
"\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
"\x90\x91\x92\x93\x94\x95\x96\x97"
"\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
"\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
"\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
"\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
"\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
"\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
"\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
"\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
"\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
"\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
"\xe8\xe9\xea\xeb\xec\xed\xee\xef"
"\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
"\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
"\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\x30\x31\x32\x33\x34\x35\x36\x37"
"\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
"\x40\x41\x42\x43\x44\x45\x46\x47"
"\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
"\x50\x51\x52\x53\x54\x55\x56\x57"
"\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
"\x60\x61\x62\x63\x64\x65\x66\x67"
"\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
"\x70\x71\x72\x73\x74\x75\x76\x77"
"\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
"\x80\x81\x82\x83\x84\x85\x86\x87"
"\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
"\x90\x91\x92\x93\x94\x95\x96\x97"
"\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
"\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
"\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
"\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
"\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
"\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
"\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
"\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
"\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
"\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
"\xe8\xe9\xea\xeb\xec\xed\xee\xef"
"\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
"\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
.ctext = "\xaf\xa1\x81\xa6\x32\xbb\x15\x8e"
"\xf8\x95\x2e\xd3\xe6\xee\x7e\x09"
"\x0c\x1a\xf5\x02\x97\x8b\xe3\xb3"
"\x11\xc7\x39\x96\xd0\x95\xf4\x56"
"\xf4\xdd\x03\x38\x01\x44\x2c\xcf"
"\x88\xae\x8e\x3c\xcd\xe7\xaa\x66"
"\xfe\x3d\xc6\xfb\x01\x23\x51\x43"
"\xd5\xd2\x13\x86\x94\x34\xe9\x62"
"\xf9\x89\xe3\xd1\x7b\xbe\xf8\xef"
"\x76\x35\x04\x3f\xdb\x23\x9d\x0b"
"\x85\x42\xb9\x02\xd6\xcc\xdb\x96"
"\xa7\x6b\x27\xb6\xd4\x45\x8f\x7d"
"\xae\xd2\x04\xd5\xda\xc1\x7e\x24"
"\x8c\x73\xbe\x48\x7e\xcf\x65\x28"
"\x29\xe5\xbe\x54\x30\xcb\x46\x95"
"\x4f\x2e\x8a\x36\xc8\x27\xc5\xbe"
"\xd0\x1a\xaf\xab\x26\xcd\x9e\x69"
"\xa1\x09\x95\x71\x26\xe9\xc4\xdf"
"\xe6\x31\xc3\x46\xda\xaf\x0b\x41"
"\x1f\xab\xb1\x8e\xd6\xfc\x0b\xb3"
"\x82\xc0\x37\x27\xfc\x91\xa7\x05"
"\xfb\xc5\xdc\x2b\x74\x96\x48\x43"
"\x5d\x9c\x19\x0f\x60\x63\x3a\x1f"
"\x6f\xf0\x03\xbe\x4d\xfd\xc8\x4a"
"\xc6\xa4\x81\x6d\xc3\x12\x2a\x5c"
"\x07\xff\xf3\x72\x74\x48\xb5\x40"
"\x50\xb5\xdd\x90\x43\x31\x18\x15"
"\x7b\xf2\xa6\xdb\x83\xc8\x4b\x4a"
"\x29\x93\x90\x8b\xda\x07\xf0\x35"
"\x6d\x90\x88\x09\x4e\x83\xf5\x5b"
"\x94\x12\xbb\x33\x27\x1d\x3f\x23"
"\x51\xa8\x7c\x07\xa2\xae\x77\xa6"
"\x50\xfd\xcc\xc0\x4f\x80\x7a\x9f"
"\x66\xdd\xcd\x75\x24\x8b\x33\xf7"
"\x20\xdb\x83\x9b\x4f\x11\x63\x6e"
"\xcf\x37\xef\xc9\x11\x01\x5c\x45"
"\x32\x99\x7c\x3c\x9e\x42\x89\xe3"
"\x70\x6d\x15\x9f\xb1\xe6\xb6\x05"
"\xfe\x0c\xb9\x49\x2d\x90\x6d\xcc"
"\x5d\x3f\xc1\xfe\x89\x0a\x2e\x2d"
"\xa0\xa8\x89\x3b\x73\x39\xa5\x94"
"\x4c\xa4\xa6\xbb\xa7\x14\x46\x89"
"\x10\xff\xaf\xef\xca\xdd\x4f\x80"
"\xb3\xdf\x3b\xab\xd4\xe5\x5a\xc7"
"\x33\xca\x00\x8b\x8b\x3f\xea\xec"
"\x68\x8a\xc2\x6d\xfd\xd4\x67\x0f"
"\x22\x31\xe1\x0e\xfe\x5a\x04\xd5"
"\x64\xa3\xf1\x1a\x76\x28\xcc\x35"
"\x36\xa7\x0a\x74\xf7\x1c\x44\x9b"
"\xc7\x1b\x53\x17\x02\xea\xd1\xad"
"\x13\x51\x73\xc0\xa0\xb2\x05\x32"
"\xa8\xa2\x37\x2e\xe1\x7a\x3a\x19"
"\x26\xb4\x6c\x62\x5d\xb3\x1a\x1d"
"\x59\xda\xee\x1a\x22\x18\xda\x0d"
"\x88\x0f\x55\x8b\x72\x62\xfd\xc1"
"\x69\x13\xcd\x0d\x5f\xc1\x09\x52"
"\xee\xd6\xe3\x84\x4d\xee\xf6\x88"
"\xaf\x83\xdc\x76\xf4\xc0\x93\x3f"
"\x4a\x75\x2f\xb0\x0b\x3e\xc4\x54"
"\x7d\x69\x8d\x00\x62\x77\x0d\x14"
"\xbe\x7c\xa6\x7d\xc5\x24\x4f\xf3"
"\x50\xf7\x5f\xf4\xc2\xca\x41\x97"
"\x37\xbe\x75\x74\xcd\xf0\x75\x6e"
"\x25\x23\x94\xbd\xda\x8d\xb0\xd4",
.len = 512,
}, {
.key = "\x27\x18\x28\x18\x28\x45\x90\x45"
"\x23\x53\x60\x28\x74\x71\x35\x26"
"\x62\x49\x77\x57\x24\x70\x93\x69"
"\x99\x59\x57\x49\x66\x96\x76\x27",
.klen = 32,
.iv = "\xff\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00",
.ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\x30\x31\x32\x33\x34\x35\x36\x37"
"\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
"\x40\x41\x42\x43\x44\x45\x46\x47"
"\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
"\x50\x51\x52\x53\x54\x55\x56\x57"
"\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
"\x60\x61\x62\x63\x64\x65\x66\x67"
"\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
"\x70\x71\x72\x73\x74\x75\x76\x77"
"\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
"\x80\x81\x82\x83\x84\x85\x86\x87"
"\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
"\x90\x91\x92\x93\x94\x95\x96\x97"
"\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
"\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
"\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
"\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
"\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
"\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
"\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
"\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
"\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
"\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
"\xe8\xe9\xea\xeb\xec\xed\xee\xef"
"\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
"\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
"\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\x30\x31\x32\x33\x34\x35\x36\x37"
"\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
"\x40\x41\x42\x43\x44\x45\x46\x47"
"\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
"\x50\x51\x52\x53\x54\x55\x56\x57"
"\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
"\x60\x61\x62\x63\x64\x65\x66\x67"
"\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
"\x70\x71\x72\x73\x74\x75\x76\x77"
"\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
"\x80\x81\x82\x83\x84\x85\x86\x87"
"\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
"\x90\x91\x92\x93\x94\x95\x96\x97"
"\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
"\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
"\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
"\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
"\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
"\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
"\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
"\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
"\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
"\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
"\xe8\xe9\xea\xeb\xec\xed\xee\xef"
"\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
"\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
.ctext = "\x55\xed\x71\xd3\x02\x8e\x15\x3b"
"\xc6\x71\x29\x2d\x3e\x89\x9f\x59"
"\x68\x6a\xcc\x8a\x56\x97\xf3\x95"
"\x4e\x51\x08\xda\x2a\xf8\x6f\x3c"
"\x78\x16\xea\x80\xdb\x33\x75\x94"
"\xf9\x29\xc4\x2b\x76\x75\x97\xc7"
"\xf2\x98\x2c\xf9\xff\xc8\xd5\x2b"
"\x18\xf1\xaf\xcf\x7c\xc5\x0b\xee"
"\xad\x3c\x76\x7c\xe6\x27\xa2\x2a"
"\xe4\x66\xe1\xab\xa2\x39\xfc\x7c"
"\xf5\xec\x32\x74\xa3\xb8\x03\x88"
"\x52\xfc\x2e\x56\x3f\xa1\xf0\x9f"
"\x84\x5e\x46\xed\x20\x89\xb6\x44"
"\x8d\xd0\xed\x54\x47\x16\xbe\x95"
"\x8a\xb3\x6b\x72\xc4\x32\x52\x13"
"\x1b\xb0\x82\xbe\xac\xf9\x70\xa6"
"\x44\x18\xdd\x8c\x6e\xca\x6e\x45"
"\x8f\x1e\x10\x07\x57\x25\x98\x7b"
"\x17\x8c\x78\xdd\x80\xa7\xd9\xd8"
"\x63\xaf\xb9\x67\x57\xfd\xbc\xdb"
"\x44\xe9\xc5\x65\xd1\xc7\x3b\xff"
"\x20\xa0\x80\x1a\xc3\x9a\xad\x5e"
"\x5d\x3b\xd3\x07\xd9\xf5\xfd\x3d"
"\x4a\x8b\xa8\xd2\x6e\x7a\x51\x65"
"\x6c\x8e\x95\xe0\x45\xc9\x5f\x4a"
"\x09\x3c\x3d\x71\x7f\x0c\x84\x2a"
"\xc8\x48\x52\x1a\xc2\xd5\xd6\x78"
"\x92\x1e\xa0\x90\x2e\xea\xf0\xf3"
"\xdc\x0f\xb1\xaf\x0d\x9b\x06\x2e"
"\x35\x10\x30\x82\x0d\xe7\xc5\x9b"
"\xde\x44\x18\xbd\x9f\xd1\x45\xa9"
"\x7b\x7a\x4a\xad\x35\x65\x27\xca"
"\xb2\xc3\xd4\x9b\x71\x86\x70\xee"
"\xf1\x89\x3b\x85\x4b\x5b\xaa\xaf"
"\xfc\x42\xc8\x31\x59\xbe\x16\x60"
"\x4f\xf9\xfa\x12\xea\xd0\xa7\x14"
"\xf0\x7a\xf3\xd5\x8d\xbd\x81\xef"
"\x52\x7f\x29\x51\x94\x20\x67\x3c"
"\xd1\xaf\x77\x9f\x22\x5a\x4e\x63"
"\xe7\xff\x73\x25\xd1\xdd\x96\x8a"
"\x98\x52\x6d\xf3\xac\x3e\xf2\x18"
"\x6d\xf6\x0a\x29\xa6\x34\x3d\xed"
"\xe3\x27\x0d\x9d\x0a\x02\x44\x7e"
"\x5a\x7e\x67\x0f\x0a\x9e\xd6\xad"
"\x91\xe6\x4d\x81\x8c\x5c\x59\xaa"
"\xfb\xeb\x56\x53\xd2\x7d\x4c\x81"
"\x65\x53\x0f\x41\x11\xbd\x98\x99"
"\xf9\xc6\xfa\x51\x2e\xa3\xdd\x8d"
"\x84\x98\xf9\x34\xed\x33\x2a\x1f"
"\x82\xed\xc1\x73\x98\xd3\x02\xdc"
"\xe6\xc2\x33\x1d\xa2\xb4\xca\x76"
"\x63\x51\x34\x9d\x96\x12\xae\xce"
"\x83\xc9\x76\x5e\xa4\x1b\x53\x37"
"\x17\xd5\xc0\x80\x1d\x62\xf8\x3d"
"\x54\x27\x74\xbb\x10\x86\x57\x46"
"\x68\xe1\xed\x14\xe7\x9d\xfc\x84"
"\x47\xbc\xc2\xf8\x19\x4b\x99\xcf"
"\x7a\xe9\xc4\xb8\x8c\x82\x72\x4d"
"\x7b\x4f\x38\x55\x36\x71\x64\xc1"
"\xfc\x5c\x75\x52\x33\x02\x18\xf8"
"\x17\xe1\x2b\xc2\x43\x39\xbd\x76"
"\x9b\x63\x76\x32\x2f\x19\x72\x10"
"\x9f\x21\x0c\xf1\x66\x50\x7f\xa5"
"\x0d\x1f\x46\xe0\xba\xd3\x2f\x3c",
.len = 512,
.also_non_np = 1,
.np = 3,
.tap = { 512 - 20, 4, 16 },
}
};
/* Cast6 test vectors from RFC 2612 */
static const struct cipher_testvec cast6_tv_template[] = {
{

View File

@@ -117,11 +117,17 @@ static void lpit_update_residency(struct lpit_residency_info *info,
if (!info->iomem_addr)
return;
if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
return;
/* Silently fail if the cpuidle attribute group is not present */
sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
&dev_attr_low_power_idle_system_residency_us.attr,
"cpuidle");
} else if (info->gaddr.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
return;
/* Silently fail if the cpuidle attribute group is not present */
sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
&dev_attr_low_power_idle_cpu_residency_us.attr,

View File

@@ -327,9 +327,11 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = {
{ "INT33FC", },
/* Braswell LPSS devices */
{ "80862286", LPSS_ADDR(lpss_dma_desc) },
{ "80862288", LPSS_ADDR(bsw_pwm_dev_desc) },
{ "8086228A", LPSS_ADDR(bsw_uart_dev_desc) },
{ "8086228E", LPSS_ADDR(bsw_spi_dev_desc) },
{ "808622C0", LPSS_ADDR(lpss_dma_desc) },
{ "808622C1", LPSS_ADDR(bsw_i2c_dev_desc) },
/* Broadwell LPSS devices */

View File

@@ -643,7 +643,7 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
status = acpi_get_type(handle, &acpi_type);
if (ACPI_FAILURE(status))
return false;
return status;
switch (acpi_type) {
case ACPI_TYPE_PROCESSOR:
@@ -663,11 +663,12 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
}
processor_validated_ids_update(uid);
return true;
return AE_OK;
err:
/* Exit on error, but don't abort the namespace walk */
acpi_handle_info(handle, "Invalid processor object\n");
return false;
return AE_OK;
}

View File

@@ -417,6 +417,10 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
ACPI_FORMAT_UINT64(obj_desc->region.address),
obj_desc->region.length));
status = acpi_ut_add_address_range(obj_desc->region.space_id,
obj_desc->region.address,
obj_desc->region.length, node);
/* Now the address and length are valid for this opregion */
obj_desc->region.flags |= AOPOBJ_DATA_VALID;

View File

@@ -417,6 +417,7 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
union acpi_parse_object *op = NULL; /* current op */
struct acpi_parse_state *parser_state;
u8 *aml_op_start = NULL;
u8 opcode_length;
ACPI_FUNCTION_TRACE_PTR(ps_parse_loop, walk_state);
@@ -540,8 +541,19 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
"Skip parsing opcode %s",
acpi_ps_get_opcode_name
(walk_state->opcode)));
/*
* Determine the opcode length before skipping the opcode.
* An opcode can be 1 byte or 2 bytes in length.
*/
opcode_length = 1;
if ((walk_state->opcode & 0xFF00) ==
AML_EXTENDED_OPCODE) {
opcode_length = 2;
}
walk_state->parser_state.aml =
walk_state->aml + 1;
walk_state->aml + opcode_length;
walk_state->parser_state.aml =
acpi_ps_get_next_package_end
(&walk_state->parser_state);
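The length computation is the entire fix: extended AML opcodes carry a 0x5B prefix byte, so always skipping a single byte left the parser pointing into the middle of the opcode. A standalone check of the 1-vs-2 byte rule (userspace sketch; AML_EXTENDED_OPCODE is 0x5B00 as in ACPICA):

#include <assert.h>

#define AML_EXTENDED_OPCODE 0x5b00

static int aml_opcode_length(unsigned int opcode)
{
	/* extended opcodes are encoded as 0x5b followed by a second byte */
	return ((opcode & 0xFF00) == AML_EXTENDED_OPCODE) ? 2 : 1;
}

int main(void)
{
	assert(aml_opcode_length(0x10) == 1);    /* ScopeOp: single byte */
	assert(aml_opcode_length(0x5b80) == 2);  /* OpRegionOp: extended */
	return 0;
}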

View File

@@ -2466,7 +2466,8 @@ static int ars_get_cap(struct acpi_nfit_desc *acpi_desc,
return cmd_rc;
}
- static int ars_start(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa)
+ static int ars_start(struct acpi_nfit_desc *acpi_desc,
+ 		struct nfit_spa *nfit_spa, enum nfit_ars_state req_type)
{
int rc;
int cmd_rc;
@@ -2477,7 +2478,7 @@ static int ars_start(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa
memset(&ars_start, 0, sizeof(ars_start));
ars_start.address = spa->address;
ars_start.length = spa->length;
- if (test_bit(ARS_SHORT, &nfit_spa->ars_state))
+ if (req_type == ARS_REQ_SHORT)
ars_start.flags = ND_ARS_RETURN_PREV_DATA;
if (nfit_spa_type(spa) == NFIT_SPA_PM)
ars_start.type = ND_ARS_PERSISTENT;
@@ -2534,6 +2535,15 @@ static void ars_complete(struct acpi_nfit_desc *acpi_desc,
struct nd_region *nd_region = nfit_spa->nd_region;
struct device *dev;
+ lockdep_assert_held(&acpi_desc->init_mutex);
+ /*
+  * Only advance the ARS state for ARS runs initiated by the
+  * kernel, ignore ARS results from BIOS initiated runs for scrub
+  * completion tracking.
+  */
+ if (acpi_desc->scrub_spa != nfit_spa)
+ 	return;
if ((ars_status->address >= spa->address && ars_status->address
< spa->address + spa->length)
|| (ars_status->address < spa->address)) {
@@ -2553,28 +2563,13 @@ static void ars_complete(struct acpi_nfit_desc *acpi_desc,
} else
return;
- if (test_bit(ARS_DONE, &nfit_spa->ars_state))
- 	return;
- if (!test_and_clear_bit(ARS_REQ, &nfit_spa->ars_state))
- 	return;
+ acpi_desc->scrub_spa = NULL;
if (nd_region) {
dev = nd_region_dev(nd_region);
nvdimm_region_notify(nd_region, NVDIMM_REVALIDATE_POISON);
} else
dev = acpi_desc->dev;
- dev_dbg(dev, "ARS: range %d %s complete\n", spa->range_index,
- 	test_bit(ARS_SHORT, &nfit_spa->ars_state)
- 	? "short" : "long");
- clear_bit(ARS_SHORT, &nfit_spa->ars_state);
- if (test_and_clear_bit(ARS_REQ_REDO, &nfit_spa->ars_state)) {
- 	set_bit(ARS_SHORT, &nfit_spa->ars_state);
- 	set_bit(ARS_REQ, &nfit_spa->ars_state);
- 	dev_dbg(dev, "ARS: processing scrub request received while in progress\n");
- } else
- 	set_bit(ARS_DONE, &nfit_spa->ars_state);
+ dev_dbg(dev, "ARS: range %d complete\n", spa->range_index);
}
static int ars_status_process_records(struct acpi_nfit_desc *acpi_desc)
@@ -2855,46 +2850,55 @@ static int acpi_nfit_query_poison(struct acpi_nfit_desc *acpi_desc)
return 0;
}
- static int ars_register(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa,
- 		int *query_rc)
+ static int ars_register(struct acpi_nfit_desc *acpi_desc,
+ 		struct nfit_spa *nfit_spa)
{
- int rc = *query_rc;
+ int rc;
- if (no_init_ars)
+ if (no_init_ars || test_bit(ARS_FAILED, &nfit_spa->ars_state))
return acpi_nfit_register_region(acpi_desc, nfit_spa);
- set_bit(ARS_REQ, &nfit_spa->ars_state);
- set_bit(ARS_SHORT, &nfit_spa->ars_state);
+ set_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
+ set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
- switch (rc) {
+ switch (acpi_nfit_query_poison(acpi_desc)) {
case 0:
case -EAGAIN:
- 	rc = ars_start(acpi_desc, nfit_spa);
- 	if (rc == -EBUSY) {
- 		*query_rc = rc;
+ 	rc = ars_start(acpi_desc, nfit_spa, ARS_REQ_SHORT);
+ 	/* shouldn't happen, try again later */
+ 	if (rc == -EBUSY)
break;
- 	} else if (rc == 0) {
- 		rc = acpi_nfit_query_poison(acpi_desc);
- 	} else {
+ 	if (rc) {
set_bit(ARS_FAILED, &nfit_spa->ars_state);
break;
}
- 	if (rc == -EAGAIN)
- 		clear_bit(ARS_SHORT, &nfit_spa->ars_state);
- 	else if (rc == 0)
- 		ars_complete(acpi_desc, nfit_spa);
+ 	clear_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
+ 	rc = acpi_nfit_query_poison(acpi_desc);
+ 	if (rc)
+ 		break;
+ 	acpi_desc->scrub_spa = nfit_spa;
+ 	ars_complete(acpi_desc, nfit_spa);
+ 	/*
+ 	 * If ars_complete() says we didn't complete the
+ 	 * short scrub, we'll try again with a long
+ 	 * request.
+ 	 */
+ 	acpi_desc->scrub_spa = NULL;
break;
case -EBUSY:
case -ENOMEM:
case -ENOSPC:
+ 	/*
+ 	 * BIOS was using ARS, wait for it to complete (or
+ 	 * resources to become available) and then perform our
+ 	 * own scrubs.
+ 	 */
break;
default:
set_bit(ARS_FAILED, &nfit_spa->ars_state);
break;
}
- if (test_and_clear_bit(ARS_DONE, &nfit_spa->ars_state))
- 	set_bit(ARS_REQ, &nfit_spa->ars_state);
return acpi_nfit_register_region(acpi_desc, nfit_spa);
}
@@ -2916,6 +2920,8 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
struct device *dev = acpi_desc->dev;
struct nfit_spa *nfit_spa;
+ lockdep_assert_held(&acpi_desc->init_mutex);
if (acpi_desc->cancel)
return 0;
@@ -2939,21 +2945,49 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
ars_complete_all(acpi_desc);
list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
+ enum nfit_ars_state req_type;
+ int rc;
if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
continue;
- if (test_bit(ARS_REQ, &nfit_spa->ars_state)) {
- 	int rc = ars_start(acpi_desc, nfit_spa);
- 	clear_bit(ARS_DONE, &nfit_spa->ars_state);
- 	dev = nd_region_dev(nfit_spa->nd_region);
- 	dev_dbg(dev, "ARS: range %d ARS start (%d)\n",
- 		nfit_spa->spa->range_index, rc);
- 	if (rc == 0 || rc == -EBUSY)
- 		return 1;
- 	dev_err(dev, "ARS: range %d ARS failed (%d)\n",
- 		nfit_spa->spa->range_index, rc);
- 	set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ /* prefer short ARS requests first */
+ if (test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state))
+ 	req_type = ARS_REQ_SHORT;
+ else if (test_bit(ARS_REQ_LONG, &nfit_spa->ars_state))
+ 	req_type = ARS_REQ_LONG;
+ else
+ 	continue;
+ rc = ars_start(acpi_desc, nfit_spa, req_type);
+ dev = nd_region_dev(nfit_spa->nd_region);
+ dev_dbg(dev, "ARS: range %d ARS start %s (%d)\n",
+ 	nfit_spa->spa->range_index,
+ 	req_type == ARS_REQ_SHORT ? "short" : "long",
+ 	rc);
+ /*
+  * Hmm, we raced someone else starting ARS? Try again in
+  * a bit.
+  */
+ if (rc == -EBUSY)
+ 	return 1;
+ if (rc == 0) {
+ 	dev_WARN_ONCE(dev, acpi_desc->scrub_spa,
+ 		"scrub start while range %d active\n",
+ 		acpi_desc->scrub_spa->spa->range_index);
+ 	clear_bit(req_type, &nfit_spa->ars_state);
+ 	acpi_desc->scrub_spa = nfit_spa;
+ 	/*
+ 	 * Consider this spa last for future scrub
+ 	 * requests
+ 	 */
+ 	list_move_tail(&nfit_spa->list, &acpi_desc->spas);
+ 	return 1;
+ }
+ dev_err(dev, "ARS: range %d ARS failed (%d)\n",
+ 	nfit_spa->spa->range_index, rc);
+ set_bit(ARS_FAILED, &nfit_spa->ars_state);
}
return 0;
}
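The reworked state machine replaces the single ARS_REQ/ARS_SHORT pair with independent short and long request bits, and always services the short request first so known poison is published quickly while the long scrub runs later. An illustrative model of that priority rule (plain bools standing in for the atomic bitops):

#include <stdbool.h>
#include <stdio.h>

/* Mirrors the two request bits introduced by this change (illustrative). */
enum ars_req { ARS_REQ_SHORT, ARS_REQ_LONG, ARS_NONE };

struct spa_state {
	bool req_short;
	bool req_long;
};

/* Short requests are preferred so previously known poison is published
 * quickly; the long scrub is then picked up on a later pass. */
static enum ars_req next_request(const struct spa_state *s)
{
	if (s->req_short)
		return ARS_REQ_SHORT;
	if (s->req_long)
		return ARS_REQ_LONG;
	return ARS_NONE;
}

int main(void)
{
	struct spa_state s = { .req_short = true, .req_long = true };

	/* A successful start consumes only the bit that was issued. */
	enum ars_req r = next_request(&s);

	if (r == ARS_REQ_SHORT)
		s.req_short = false;
	printf("first: %d, next: %d\n", r, next_request(&s)); /* 0 then 1 */
	return 0;
}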
@@ -3009,6 +3043,7 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
struct nd_cmd_ars_cap ars_cap;
int rc;
+ set_bit(ARS_FAILED, &nfit_spa->ars_state);
memset(&ars_cap, 0, sizeof(ars_cap));
rc = ars_get_cap(acpi_desc, &ars_cap, nfit_spa);
if (rc < 0)
@@ -3025,16 +3060,14 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
nfit_spa->clear_err_unit = ars_cap.clear_err_unit;
acpi_desc->max_ars = max(nfit_spa->max_ars, acpi_desc->max_ars);
clear_bit(ARS_FAILED, &nfit_spa->ars_state);
- set_bit(ARS_REQ, &nfit_spa->ars_state);
}
static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
{
struct nfit_spa *nfit_spa;
- int rc, query_rc;
+ int rc;
list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
- set_bit(ARS_FAILED, &nfit_spa->ars_state);
switch (nfit_spa_type(nfit_spa->spa)) {
case NFIT_SPA_VOLATILE:
case NFIT_SPA_PM:
@@ -3043,20 +3076,12 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
}
}
- /*
-  * Reap any results that might be pending before starting new
-  * short requests.
-  */
- query_rc = acpi_nfit_query_poison(acpi_desc);
- if (query_rc == 0)
- 	ars_complete_all(acpi_desc);
list_for_each_entry(nfit_spa, &acpi_desc->spas, list)
switch (nfit_spa_type(nfit_spa->spa)) {
case NFIT_SPA_VOLATILE:
case NFIT_SPA_PM:
+ /* register regions and kick off initial ARS run */
- rc = ars_register(acpi_desc, nfit_spa, &query_rc);
+ rc = ars_register(acpi_desc, nfit_spa);
if (rc)
return rc;
break;
@@ -3251,7 +3276,8 @@ static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
return 0;
}
- int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
+ int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
+ 		enum nfit_ars_state req_type)
{
struct device *dev = acpi_desc->dev;
int scheduled = 0, busy = 0;
@@ -3271,14 +3297,10 @@ int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
continue;
- if (test_and_set_bit(ARS_REQ, &nfit_spa->ars_state)) {
+ if (test_and_set_bit(req_type, &nfit_spa->ars_state))
busy++;
- 	set_bit(ARS_REQ_REDO, &nfit_spa->ars_state);
- } else {
- 	if (test_bit(ARS_SHORT, &flags))
- 		set_bit(ARS_SHORT, &nfit_spa->ars_state);
+ else
scheduled++;
- }
}
if (scheduled) {
sched_ars(acpi_desc);
@@ -3464,10 +3486,11 @@ static void acpi_nfit_update_notify(struct device *dev, acpi_handle handle)
static void acpi_nfit_uc_error_notify(struct device *dev, acpi_handle handle)
{
struct acpi_nfit_desc *acpi_desc = dev_get_drvdata(dev);
- unsigned long flags = (acpi_desc->scrub_mode == HW_ERROR_SCRUB_ON) ?
- 	0 : 1 << ARS_SHORT;
- acpi_nfit_ars_rescan(acpi_desc, flags);
+ if (acpi_desc->scrub_mode == HW_ERROR_SCRUB_ON)
+ 	acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_LONG);
+ else
+ 	acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_SHORT);
}
void __acpi_nfit_notify(struct device *dev, acpi_handle handle, u32 event)

View File

@@ -118,10 +118,8 @@ enum nfit_dimm_notifiers {
};
enum nfit_ars_state {
- ARS_REQ,
- ARS_REQ_REDO,
- ARS_DONE,
- ARS_SHORT,
+ ARS_REQ_SHORT,
+ ARS_REQ_LONG,
ARS_FAILED,
};
@@ -198,6 +196,7 @@ struct acpi_nfit_desc {
struct device *dev;
u8 ars_start_flags;
struct nd_cmd_ars_status *ars_status;
+ struct nfit_spa *scrub_spa;
struct delayed_work dwork;
struct list_head list;
struct kernfs_node *scrub_count_state;
@@ -252,7 +251,8 @@ struct nfit_blk {
extern struct list_head acpi_descs;
extern struct mutex acpi_desc_lock;
- int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags);
+ int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
+ 		enum nfit_ars_state req_type);
#ifdef CONFIG_X86_MCE
void nfit_mce_register(void);

View File

@@ -617,15 +617,18 @@ void acpi_os_stall(u32 us)
}
/*
- * Support ACPI 3.0 AML Timer operand
- * Returns 64-bit free-running, monotonically increasing timer
- * with 100ns granularity
+ * Support ACPI 3.0 AML Timer operand. Returns a 64-bit free-running,
+ * monotonically increasing timer with 100ns granularity. Do not use
+ * ktime_get() to implement this function because this function may get
+ * called after timekeeping has been suspended. Note: calling this function
+ * after timekeeping has been suspended may lead to unexpected results
+ * because when timekeeping is suspended the jiffies counter is not
+ * incremented. See also timekeeping_suspend().
*/
u64 acpi_os_get_timer(void)
{
- u64 time_ns = ktime_to_ns(ktime_get());
- do_div(time_ns, 100);
- return time_ns;
+ return (get_jiffies_64() - INITIAL_JIFFIES) *
+ 	(ACPI_100NSEC_PER_SEC / HZ);
}
acpi_status acpi_os_read_port(acpi_io_address port, u32 * value, u32 width)
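Worked example of the new conversion: one jiffy is 1/HZ seconds, so multiplying a jiffies delta by ACPI_100NSEC_PER_SEC / HZ yields AML Timer units of 100ns. A user-space sketch, assuming an HZ value that divides 10^7 evenly (the kernel expression has the same requirement for exactness, and resolution is limited to one jiffy):

#include <stdint.h>
#include <stdio.h>

#define HZ 250                        /* example tick rate; kernels vary */
#define ACPI_100NSEC_PER_SEC 10000000ULL

/* Convert a jiffies delta to AML Timer units (100ns ticks). */
static uint64_t jiffies_to_aml_timer(uint64_t jiffies_delta)
{
	return jiffies_delta * (ACPI_100NSEC_PER_SEC / HZ);
}

int main(void)
{
	/* One second of ticks -> 10,000,000 hundred-nanosecond units. */
	printf("%llu\n", (unsigned long long)jiffies_to_aml_timer(HZ));
	return 0;
}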

View File

@@ -338,9 +338,6 @@ static struct acpi_pptt_cache *acpi_find_cache_node(struct acpi_table_header *ta
return found;
}
- /* total number of attributes checked by the properties code */
- #define PPTT_CHECKED_ATTRIBUTES 4
/**
* update_cache_properties() - Update cacheinfo for the given processor
* @this_leaf: Kernel cache info structure being updated
@@ -357,25 +354,15 @@ static void update_cache_properties(struct cacheinfo *this_leaf,
struct acpi_pptt_cache *found_cache,
struct acpi_pptt_processor *cpu_node)
{
- int valid_flags = 0;
this_leaf->fw_token = cpu_node;
- if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID) {
+ if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID)
this_leaf->size = found_cache->size;
- 	valid_flags++;
- }
- if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID) {
+ if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID)
this_leaf->coherency_line_size = found_cache->line_size;
- 	valid_flags++;
- }
- if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID) {
+ if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID)
this_leaf->number_of_sets = found_cache->number_of_sets;
- 	valid_flags++;
- }
- if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID) {
+ if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID)
this_leaf->ways_of_associativity = found_cache->associativity;
- 	valid_flags++;
- }
if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID) {
switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) {
case ACPI_PPTT_CACHE_POLICY_WT:
@@ -402,11 +389,17 @@ static void update_cache_properties(struct cacheinfo *this_leaf,
}
}
/*
- * If the above flags are valid, and the cache type is NOCACHE
- * update the cache type as well.
+ * If cache type is NOCACHE, then the cache hasn't been specified
+ * via other mechanisms. Update the type if a cache type has been
+ * provided.
+ *
+ * Note, we assume such caches are unified based on conventional system
+ * design and known examples. Significant work is required elsewhere to
+ * fully support data/instruction only type caches which are only
+ * specified in PPTT.
*/
if (this_leaf->type == CACHE_TYPE_NOCACHE &&
- valid_flags == PPTT_CHECKED_ATTRIBUTES)
+ found_cache->flags & ACPI_PPTT_CACHE_TYPE_VALID)
this_leaf->type = CACHE_TYPE_UNIFIED;
}
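The rewrite keys the NOCACHE-to-UNIFIED promotion on the dedicated ACPI_PPTT_CACHE_TYPE_VALID flag instead of counting four unrelated valid bits. A condensed stand-alone sketch of the resulting logic (struct layouts simplified; flag bit positions as defined in ACPICA's actbl1.h):

#include <stdint.h>
#include <stdio.h>

/* Subset of the PPTT cache-structure flags. */
#define ACPI_PPTT_SIZE_PROPERTY_VALID (1 << 0)
#define ACPI_PPTT_CACHE_TYPE_VALID    (1 << 4)

enum cache_type { CACHE_TYPE_NOCACHE, CACHE_TYPE_UNIFIED };

struct pptt_cache { uint32_t flags; uint32_t size; };
struct cacheinfo  { enum cache_type type; uint32_t size; };

static void update_cache(struct cacheinfo *leaf, const struct pptt_cache *fw)
{
	if (fw->flags & ACPI_PPTT_SIZE_PROPERTY_VALID)
		leaf->size = fw->size;
	/* Trust the type only when firmware marks it valid, rather than
	 * counting how many unrelated attributes happened to be set. */
	if (leaf->type == CACHE_TYPE_NOCACHE &&
	    (fw->flags & ACPI_PPTT_CACHE_TYPE_VALID))
		leaf->type = CACHE_TYPE_UNIFIED;
}

int main(void)
{
	struct pptt_cache fw = { ACPI_PPTT_CACHE_TYPE_VALID, 0 };
	struct cacheinfo leaf = { CACHE_TYPE_NOCACHE, 0 };

	update_cache(&leaf, &fw);
	printf("type=%d\n", leaf.type); /* 1 == CACHE_TYPE_UNIFIED */
	return 0;
}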

View File

@@ -4553,6 +4553,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
/* These specific Samsung models/firmware-revs do not handle LPM well */
{ "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },
{ "SAMSUNG SSD PM830 mSATA *", "CXM13D1Q", ATA_HORKAGE_NOLPM, },
{ "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },
/* devices that don't properly handle queued TRIM commands */
{ "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |

View File

@@ -1935,6 +1935,11 @@ static int __init atari_floppy_init (void)
unit[i].disk = alloc_disk(1);
if (!unit[i].disk)
goto Enomem;
+ unit[i].disk->queue = blk_init_queue(do_fd_request,
+ 	&ataflop_lock);
+ if (!unit[i].disk->queue)
+ 	goto Enomem;
}
if (UseTrackbuffer < 0)
@@ -1966,10 +1971,6 @@ static int __init atari_floppy_init (void)
sprintf(unit[i].disk->disk_name, "fd%d", i);
unit[i].disk->fops = &floppy_fops;
unit[i].disk->private_data = &unit[i];
- unit[i].disk->queue = blk_init_queue(do_fd_request,
- 	&ataflop_lock);
- if (!unit[i].disk->queue)
- 	goto Enomem;
set_capacity(unit[i].disk, MAX_DISK_SIZE * 2);
add_disk(unit[i].disk);
}
@@ -1984,13 +1985,17 @@ static int __init atari_floppy_init (void)
return 0;
Enomem:
- while (i--) {
- 	struct request_queue *q = unit[i].disk->queue;
+ do {
+ 	struct gendisk *disk = unit[i].disk;
- 	put_disk(unit[i].disk);
- 	if (q)
- 		blk_cleanup_queue(q);
- }
+ 	if (disk) {
+ 		if (disk->queue) {
+ 			blk_cleanup_queue(disk->queue);
+ 			disk->queue = NULL;
+ 		}
+ 		put_disk(unit[i].disk);
+ 	}
+ } while (i--);
unregister_blkdev(FLOPPY_MAJOR, "fd");
return -ENOMEM;
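The error path is changed from while (i--), which skips the partially initialized slot i, to do { ... } while (i--), which unwinds slot i as well and tolerates half-built entries. A stand-alone sketch of the pattern, with malloc standing in for alloc_disk()/blk_init_queue():

#include <stdlib.h>

#define N 4

/* Illustrative resource pair, standing in for gendisk + request_queue. */
struct unit { char *disk; char *queue; };

static int init_units(struct unit *u)
{
	int i;

	for (i = 0; i < N; i++) {
		u[i].disk = malloc(16);
		if (!u[i].disk)
			goto enomem;
		u[i].queue = malloc(16);
		if (!u[i].queue)
			goto enomem;   /* fails with u[i].disk live */
	}
	return 0;

enomem:
	/* do/while(i--) also unwinds slot i, which may be half built;
	 * while(i--) would leak u[i].disk in the case above. */
	do {
		free(u[i].queue);
		free(u[i].disk);
		u[i].queue = u[i].disk = NULL;
	} while (i--);
	return -1;
}

int main(void)
{
	struct unit u[N] = { 0 };

	return init_units(u) ? 1 : 0;
}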

View File

@@ -887,8 +887,17 @@ static int swim_floppy_init(struct swim_priv *swd)
exit_put_disks:
unregister_blkdev(FLOPPY_MAJOR, "fd");
- while (drive--)
- 	put_disk(swd->unit[drive].disk);
+ do {
+ 	struct gendisk *disk = swd->unit[drive].disk;
+ 	if (disk) {
+ 		if (disk->queue) {
+ 			blk_cleanup_queue(disk->queue);
+ 			disk->queue = NULL;
+ 		}
+ 		put_disk(disk);
+ 	}
+ } while (drive--);
return err;
}

View File

@@ -1919,6 +1919,7 @@ static int negotiate_mq(struct blkfront_info *info)
GFP_KERNEL);
if (!info->rinfo) {
xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure");
+ info->nr_rings = 0;
return -ENOMEM;
}
@@ -2493,6 +2494,9 @@ static int blkfront_remove(struct xenbus_device *xbdev)
dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);
if (!info)
return 0;
+ blkif_free(info, 0);
mutex_lock(&info->mutex);

View File

@@ -324,6 +324,7 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = {
{ 0x4103, "BCM4330B1" }, /* 002.001.003 */
{ 0x410e, "BCM43341B0" }, /* 002.001.014 */
{ 0x4406, "BCM4324B3" }, /* 002.004.006 */
+ { 0x6109, "BCM4335C0" }, /* 003.001.009 */
{ 0x610c, "BCM4354" }, /* 003.001.012 */
{ 0x2122, "BCM4343A0" }, /* 001.001.034 */
{ 0x2209, "BCM43430A1" }, /* 001.002.009 */

View File

@@ -167,7 +167,7 @@ struct qca_serdev {
};
static int qca_power_setup(struct hci_uart *hu, bool on);
- static void qca_power_shutdown(struct hci_dev *hdev);
+ static void qca_power_shutdown(struct hci_uart *hu);
static void __serial_clock_on(struct tty_struct *tty)
{
@@ -609,7 +609,7 @@ static int qca_close(struct hci_uart *hu)
if (hu->serdev) {
qcadev = serdev_device_get_drvdata(hu->serdev);
if (qcadev->btsoc_type == QCA_WCN3990)
- qca_power_shutdown(hu->hdev);
+ qca_power_shutdown(hu);
else
gpiod_set_value_cansleep(qcadev->bt_en, 0);
@@ -1232,12 +1232,15 @@ static const struct qca_vreg_data qca_soc_data = {
.num_vregs = 4,
};
- static void qca_power_shutdown(struct hci_dev *hdev)
+ static void qca_power_shutdown(struct hci_uart *hu)
{
- struct hci_uart *hu = hci_get_drvdata(hdev);
+ struct serdev_device *serdev = hu->serdev;
+ unsigned char cmd = QCA_WCN3990_POWEROFF_PULSE;
host_set_baudrate(hu, 2400);
- qca_send_power_pulse(hdev, QCA_WCN3990_POWEROFF_PULSE);
+ hci_uart_set_flow_control(hu, true);
+ serdev_device_write_buf(serdev, &cmd, sizeof(cmd));
+ hci_uart_set_flow_control(hu, false);
qca_power_setup(hu, false);
}
@@ -1413,7 +1416,7 @@ static void qca_serdev_remove(struct serdev_device *serdev)
struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev);
if (qcadev->btsoc_type == QCA_WCN3990)
- qca_power_shutdown(qcadev->serdev_hu.hdev);
+ qca_power_shutdown(&qcadev->serdev_hu);
else
clk_disable_unprepare(qcadev->susclk);

View File

@@ -606,8 +606,9 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
ssif_info->waiting_alert = true;
ssif_info->rtc_us_timer = SSIF_MSG_USEC;
- mod_timer(&ssif_info->retry_timer,
- 	jiffies + SSIF_MSG_JIFFIES);
+ if (!ssif_info->stopping)
+ 	mod_timer(&ssif_info->retry_timer,
+ 		jiffies + SSIF_MSG_JIFFIES);
ipmi_ssif_unlock_cond(ssif_info, flags);
return;
}
@@ -939,8 +940,9 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
ssif_info->waiting_alert = true;
ssif_info->retries_left = SSIF_RECV_RETRIES;
ssif_info->rtc_us_timer = SSIF_MSG_PART_USEC;
- mod_timer(&ssif_info->retry_timer,
- 	jiffies + SSIF_MSG_PART_JIFFIES);
+ if (!ssif_info->stopping)
+ 	mod_timer(&ssif_info->retry_timer,
+ 		jiffies + SSIF_MSG_PART_JIFFIES);
ipmi_ssif_unlock_cond(ssif_info, flags);
}
}
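Both hunks apply the same rule: once teardown has set ssif_info->stopping, the retry timer must not be re-armed, or it could fire against freed state. An illustrative user-space model of that guard (a pthread mutex standing in for the driver's lock):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative analog of the ssif_info stopping/retry_timer interplay. */
struct ctx {
	pthread_mutex_t lock;
	bool stopping;
	bool timer_armed;
};

static void rearm_retry_timer(struct ctx *c)
{
	pthread_mutex_lock(&c->lock);
	/* Once shutdown has begun, re-arming would let the timer fire
	 * after the surrounding object is torn down. */
	if (!c->stopping)
		c->timer_armed = true;
	pthread_mutex_unlock(&c->lock);
}

int main(void)
{
	struct ctx c = { PTHREAD_MUTEX_INITIALIZER, false, false };

	rearm_retry_timer(&c);
	printf("armed: %d\n", c.timer_armed);  /* 1 */

	c.stopping = true;
	c.timer_armed = false;
	rearm_retry_timer(&c);
	printf("armed: %d\n", c.timer_armed);  /* 0 */
	return 0;
}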

View File

@@ -663,7 +663,8 @@ ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space,
return len;
err = be32_to_cpu(header->return_code);
- if (err != 0 && desc)
+ if (err != 0 && err != TPM_ERR_DISABLED && err != TPM_ERR_DEACTIVATED
+     && desc)
dev_err(&chip->dev, "A TPM error (%d) occurred %s\n", err,
desc);
if (err)
@@ -1321,7 +1322,8 @@ int tpm_get_random(struct tpm_chip *chip, u8 *out, size_t max)
}
rlength = be32_to_cpu(tpm_cmd.header.out.length);
- if (rlength < offsetof(struct tpm_getrandom_out, rng_data) +
+ if (rlength < TPM_HEADER_SIZE +
+     offsetof(struct tpm_getrandom_out, rng_data) +
recd) {
total = -EFAULT;
break;

View File

@@ -329,7 +329,9 @@ int tpm2_get_random(struct tpm_chip *chip, u8 *dest, size_t max)
&buf.data[TPM_HEADER_SIZE];
recd = min_t(u32, be16_to_cpu(out->size), num_bytes);
if (tpm_buf_length(&buf) <
- 	offsetof(struct tpm2_get_random_out, buffer) + recd) {
+ 	TPM_HEADER_SIZE +
+ 	offsetof(struct tpm2_get_random_out, buffer) +
+ 	recd) {
err = -EFAULT;
goto out;
}
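Both TPM fixes enforce the same invariant: a response must be long enough to hold the header, everything preceding the payload, and all the payload bytes the device claims to have returned, otherwise copying recd bytes would read past the buffer. A sketch of the check (header size and struct layout simplified; the real structs use big-endian fields):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define TPM_HEADER_SIZE 10   /* tag(2) + length(4) + return code(4) */

/* Illustrative layout of the body that follows the response header. */
struct tpm2_get_random_out {
	uint16_t size;
	uint8_t buffer[128];
};

/* A response is trustworthy only if it can contain the header, the
 * size field, and all `recd` bytes it claims to return. */
static int response_is_sane(size_t resp_len, size_t recd)
{
	return resp_len >= TPM_HEADER_SIZE +
			   offsetof(struct tpm2_get_random_out, buffer) + recd;
}

int main(void)
{
	printf("%d\n", response_is_sane(TPM_HEADER_SIZE + 2 + 8, 8)); /* 1 */
	printf("%d\n", response_is_sane(TPM_HEADER_SIZE + 2 + 4, 8)); /* 0 */
	return 0;
}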

Some files were not shown because too many files have changed in this diff.