kernel_xiaomi_sm8250/include/linux
David Woodhouse cf66bb93e0 byteorder: allow arch to opt to use GCC intrinsics for byteswapping
Since GCC 4.4, there have been __builtin_bswap32() and __builtin_bswap64()
intrinsics. A __builtin_bswap16() came a little later (4.6 for PowerPC,
4.8 for other platforms).

By using these instead of the inline assembler that most architectures
have in their __arch_swabXX() macros, we let the compiler see what's
actually happening. The resulting code should be at least as good, and
much *better* in the cases where it can be combined with a nearby load
or store, using a load-and-byteswap or store-and-byteswap instruction
(e.g. lwbrx/stwbrx on PowerPC, movbe on Atom).
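
For example, once the intrinsic is visible the compiler can fuse the
byteswap with the adjacent memory access. A minimal sketch (the
load_be32() helper name is illustrative, not from the patch):

    #include <stdint.h>

    /* Read a big-endian 32-bit value on a little-endian machine.
     * Because the compiler sees __builtin_bswap32(), it may emit a
     * single load-and-byteswap instruction (lwbrx on PowerPC, movbe
     * on Atom) instead of a plain load followed by swap code. */
    static inline uint32_t load_be32(const uint32_t *p)
    {
            return __builtin_bswap32(*p);
    }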

When GCC is sufficiently recent *and* the architecture opts in to using
the intrinsics by setting CONFIG_ARCH_USE_BUILTIN_BSWAP, they will be
used in preference to the __arch_swabXX() macros. An architecture which
does not set ARCH_USE_BUILTIN_BSWAP will continue to use its own
hand-crafted macros.
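
Roughly, the selection could look like this (an illustrative sketch of
the opt-in logic, not the literal patch; __fswab32() follows the
kernel's existing swab naming, and the inline GCC version check stands
in for the real compiler-version plumbing):

    /* Opt in only when the arch asks for it and GCC is new enough. */
    #if defined(CONFIG_ARCH_USE_BUILTIN_BSWAP) && \
        (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4))
    #define __HAVE_BUILTIN_BSWAP32__
    #endif

    static inline __u32 __fswab32(__u32 val)
    {
    #ifdef __HAVE_BUILTIN_BSWAP32__
            return __builtin_bswap32(val);  /* compiler sees the swap */
    #elif defined(__arch_swab32)
            return __arch_swab32(val);      /* arch's hand-crafted asm */
    #else
            return ___constant_swab32(val); /* generic C fallback */
    #endif
    }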

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
2012-12-06 01:22:31 +00:00