Advanced Vector Extensions
Advanced Vector Extensions (AVX, also known as Gesher New Instructions and then Sandy Bridge New Instructions) are SIMD extensions to the x86 instruction set architecture for microprocessors from Intel and Advanced Micro Devices (AMD). They were proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge[1] microarchitecture shipping in Q1 2011 and later by AMD with the Bulldozer[2] microarchitecture shipping in Q4 2011. AVX provides new features, new instructions, and a new coding scheme.
AVX2 (also known as Haswell New Instructions) expands most integer commands to 256 bits and introduces new instructions. They were first supported by Intel with the Haswell microarchitecture, which shipped in 2013.
AVX-512 expands AVX to 512-bit support using a new EVEX prefix encoding proposed by Intel in July 2013 and first supported by Intel with the Knights Landing co-processor, which shipped in 2016.[3][4] In conventional processors, AVX-512 was introduced with Skylake server and HEDT processors in 2017.
Advanced Vector Extensions
AVX uses sixteen YMM registers to perform a single instruction on multiple pieces of data (see SIMD). Each YMM register can hold and perform simultaneous operations on:
- eight 32-bit single-precision floating-point numbers or
- four 64-bit double-precision floating-point numbers.
The width of the SIMD registers is increased from 128 bits to 256 bits, and renamed from XMM0–XMM7 to YMM0–YMM7 (in x86-64 mode, from XMM0–XMM15 to YMM0–YMM15). The legacy SSE instructions can still be utilized via the VEX prefix to operate on the lower 128 bits of the YMM registers.
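To make the register widths concrete, here is a minimal, hedged C sketch (not from the original article) using the intrinsics from <immintrin.h>: one 256-bit operation adds eight single-precision floats at once, and the destination is distinct from both sources, mirroring the non-destructive three-operand form described below. It assumes a compiler flag such as gcc -mavx.

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float r[8];

    __m256 va = _mm256_loadu_ps(a);    /* unaligned load into a YMM register */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vr = _mm256_add_ps(va, vb); /* VADDPS: eight single-precision adds */
    _mm256_storeu_ps(r, vr);

    for (int i = 0; i < 8; i++)
        printf("%.1f ", r[i]);         /* prints 8.0 eight times */
    printf("\n");
    return 0;
}
```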
| Bits 511–256 | Bits 255–128 | Bits 127–0 |
|---|---|---|
| ZMM0 | YMM0 | XMM0 |
| ZMM1 | YMM1 | XMM1 |
| ZMM2 | YMM2 | XMM2 |
| ZMM3 | YMM3 | XMM3 |
| ZMM4 | YMM4 | XMM4 |
| ZMM5 | YMM5 | XMM5 |
| ZMM6 | YMM6 | XMM6 |
| ZMM7 | YMM7 | XMM7 |
| ZMM8 | YMM8 | XMM8 |
| ZMM9 | YMM9 | XMM9 |
| ZMM10 | YMM10 | XMM10 |
| ZMM11 | YMM11 | XMM11 |
| ZMM12 | YMM12 | XMM12 |
| ZMM13 | YMM13 | XMM13 |
| ZMM14 | YMM14 | XMM14 |
| ZMM15 | YMM15 | XMM15 |
| ZMM16 | YMM16 | XMM16 |
| ZMM17 | YMM17 | XMM17 |
| ZMM18 | YMM18 | XMM18 |
| ZMM19 | YMM19 | XMM19 |
| ZMM20 | YMM20 | XMM20 |
| ZMM21 | YMM21 | XMM21 |
| ZMM22 | YMM22 | XMM22 |
| ZMM23 | YMM23 | XMM23 |
| ZMM24 | YMM24 | XMM24 |
| ZMM25 | YMM25 | XMM25 |
| ZMM26 | YMM26 | XMM26 |
| ZMM27 | YMM27 | XMM27 |
| ZMM28 | YMM28 | XMM28 |
| ZMM29 | YMM29 | XMM29 |
| ZMM30 | YMM30 | XMM30 |
| ZMM31 | YMM31 | XMM31 |
AVX introduces a three-operand SIMD instruction format called VEX coding scheme, where the destination register is distinct from the two source operands. For example, an SSE instruction using the conventional two-operand form a ← a + b can now use a non-destructive three-operand form c ← a + b, preserving both source operands. Originally, AVX's three-operand format was limited to the instructions with SIMD operands (YMM), and did not include instructions with general purpose registers (e.g. EAX). It was later used for coding new instructions on general purpose registers in later extensions, such as BMI. VEX coding is also used for instructions operating on the k0-k7 mask registers that were introduced with AVX-512.
The alignment requirement of SIMD memory operands is relaxed.[5] Unlike their non-VEX coded counterparts, most VEX coded vector instructions no longer require their memory operands to be aligned to the vector size. Notably, the VMOVDQA instruction still requires its memory operand to be aligned.
The new VEX coding scheme introduces a new set of code prefixes that extends the opcode space, allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The VEX prefix can also be used on the legacy SSE instructions giving them a three-operand form, and making them interact more efficiently with AVX instructions without the need for VZEROUPPER and VZEROALL.
The AVX instructions support both 128-bit and 256-bit SIMD. The 128-bit versions can be useful to improve old code without needing to widen the vectorization and to avoid the penalty of transitioning from SSE to AVX; they are also faster on some early AMD implementations of AVX. This mode is sometimes known as AVX-128.[6]
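The transition issue mentioned above can be illustrated with a small, assumed C sketch: after a VEX-encoded 256-bit loop, _mm256_zeroupper() (VZEROUPPER) is issued before returning to scalar or legacy-SSE code. Compilers normally emit this automatically at function boundaries; the explicit call and the function name are purely illustrative.

```c
#include <immintrin.h>

/* Scale n floats by s using 256-bit AVX, then finish the tail in scalar code. */
void scale_avx(float *dst, const float *src, float s, int n) {
    __m256 vs = _mm256_set1_ps(s);                /* broadcast the scalar */
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 v = _mm256_loadu_ps(src + i);
        _mm256_storeu_ps(dst + i, _mm256_mul_ps(v, vs));
    }
    _mm256_zeroupper();                           /* VZEROUPPER before non-VEX code */
    for (; i < n; i++)
        dst[i] = src[i] * s;                      /* scalar tail */
}
```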
New instructions
These AVX instructions are in addition to the ones that are 256-bit extensions of the legacy 128-bit SSE instructions; most are usable on both 128-bit and 256-bit operands.
| Instruction | Description |
|---|---|
| VBROADCASTSS, VBROADCASTSD, VBROADCASTF128 | Copy a 32-bit, 64-bit or 128-bit memory operand to all elements of an XMM or YMM vector register. |
| VINSERTF128 | Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged. |
| VEXTRACTF128 | Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand. |
| VMASKMOVPS, VMASKMOVPD | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. On the AMD Jaguar processor architecture, this instruction with a memory source operand takes more than 300 clock cycles when the mask is zero, in which case the instruction should do nothing; this appears to be a design flaw.[7] A short intrinsics sketch follows this table. |
| VPERMILPS, VPERMILPD | Permute In-Lane. Shuffle the 32-bit or 64-bit vector elements of one input operand. These are in-lane 256-bit instructions, meaning that they operate on all 256 bits with two separate 128-bit shuffles, so they cannot shuffle across the 128-bit lanes.[8] |
| VPERM2F128 | Shuffle the four 128-bit vector elements of two 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector. |
| VTESTPS, VTESTPD | Packed bit test of the packed single-precision or double-precision floating-point sign bits, setting or clearing the ZF flag based on AND and the CF flag based on ANDN. |
| VZEROALL | Set all YMM registers to zero and tag them as unused. Used when switching between 128-bit use and 256-bit use. |
| VZEROUPPER | Set the upper half of all YMM registers to zero. Used when switching between 128-bit use and 256-bit use. |
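As a follow-up to the VMASKMOVPS entry above, here is a hedged C sketch (the function name and mask construction are invented for illustration) of a masked load that reads only the first n elements of an array and zeroes the remaining lanes of the destination.

```c
#include <immintrin.h>

/* Load the first n (0..8) floats from p; lanes >= n are set to zero. */
__m256 load_first_n(const float *p, int n) {
    __m256i lane = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    /* Lanes whose index is below n get an all-ones (sign bit set) mask. */
    __m256i mask = _mm256_castps_si256(
        _mm256_cmp_ps(_mm256_cvtepi32_ps(lane),
                      _mm256_set1_ps((float)n), _CMP_LT_OQ));
    return _mm256_maskload_ps(p, mask);   /* VMASKMOVPS: unselected lanes read as 0 */
}
```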
CPUs with AVX
- Intel
- Sandy Bridge processors (Q1 2011) and newer, except models branded as Celeron and Pentium.[9]
- Pentium and Celeron branded processors starting with Tiger Lake (Q3 2020) and newer.[10]
- AMD:
Issues regarding compatibility between future Intel and AMD processors are discussed under XOP instruction set.
Compiler and assembler support
- Absoft supports AVX with the -mavx flag.
- The Free Pascal compiler supports AVX and AVX2 with the -CfAVX and -CfAVX2 switches from version 2.7.1.
- RAD Studio (v11.0 Alexandria) supports AVX2 and AVX512.[12]
- The GNU Assembler (GAS) inline assembly functions support these instructions (accessible via GCC), as do Intel primitives and the Intel inline assembler (closely compatible to GAS, although more general in its handling of local references within inline code). GAS supports AVX starting with binutils version 2.19.[13]
- GCC starting with version 4.6 (although there was a 4.3 branch with certain support) and the Intel Compiler Suite starting with version 11.1 support AVX.
- The Open64 compiler version 4.5.1 supports AVX with -mavx flag.
- PathScale supports via the -mavx flag.
- The Vector Pascal compiler supports AVX via the -cpuAVX32 flag.
- The Visual Studio 2010/2012 compiler supports AVX via intrinsics and the /arch:AVX switch.
- NASM starting with version 2.03 and newer. There were numerous bug fixes and updates related to AVX in version 2.04.[14]
- Other assemblers such as MASM VS2010 version, YASM,[15] FASM and JWASM.
Operating system support
AVX adds new register state through the 256-bit wide YMM register file, so explicit operating system support is required to properly save and restore AVX's expanded registers between context switches. The following operating system versions support AVX (a minimal runtime-detection sketch in C follows the list):
- DragonFly BSD: support added in early 2013.
- FreeBSD: support added in a patch submitted on January 21, 2012,[16] which was included in the 9.1 stable release.[17]
- Linux: supported since kernel version 2.6.30,[18] released on June 9, 2009.[19]
- macOS: support added in the 10.6.8 (Snow Leopard) update[20] released on June 23, 2011. In fact, macOS Ventura does not support x86 processors without the AVX2 instruction set.[21]
- OpenBSD: support added on March 21, 2015.[22]
- Solaris: supported in Solaris 10 Update 10 and Solaris 11.
- Windows: supported in Windows 7 SP1, Windows Server 2008 R2 SP1,[23] Windows 8, Windows 10.
- Windows Server 2008 R2 SP1 with Hyper-V requires a hotfix to support AMD AVX (Opteron 6200 and 4200 series) processors, KB2568088
- Windows XP and Windows Server 2003 do not support AVX in either kernel drivers or user applications.
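Because the operating system must enable and preserve the YMM state, applications typically check for AVX at run time before using it. The following is a minimal sketch for GCC or Clang using <cpuid.h>; a complete check would also read XCR0 with XGETBV to confirm the OS actually saves YMM state, which is omitted here for brevity.

```c
#include <cpuid.h>
#include <stdio.h>

/* Returns non-zero if the CPU advertises AVX and the OS has enabled XSAVE. */
static int cpu_reports_avx(void) {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;
    int osxsave = (ecx >> 27) & 1;   /* OS has enabled XSAVE/XGETBV */
    int avx     = (ecx >> 28) & 1;   /* CPU supports AVX */
    return osxsave && avx;           /* XGETBV check of XCR0 omitted for brevity */
}

int main(void) {
    printf("AVX %s\n", cpu_reports_avx() ? "reported" : "not reported");
    return 0;
}
```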
Advanced Vector Extensions 2
Advanced Vector Extensions 2 (AVX2), also known as Haswell New Instructions,[24] is an expansion of the AVX instruction set introduced in Intel's Haswell microarchitecture. AVX2 makes the following additions:
- expansion of most vector integer SSE and AVX instructions to 256 bits
- Gather support, enabling vector elements to be loaded from non-contiguous memory locations
- DWORD- and QWORD-granularity any-to-any permutes
- vector shifts.
The three-operand fused multiply-accumulate (FMA3) extension is sometimes considered part of AVX2 because Intel introduced it in the same processor microarchitecture. It is, however, a separate extension with its own CPUID flag and is described on its own page rather than below.
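The gather support listed above can be illustrated with a short, assumed C sketch using the AVX2 intrinsic for VGATHERDPS; the index values are invented for the example, and the scale argument of 4 reflects the size of a float in bytes.

```c
#include <immintrin.h>

/* Gather eight floats from non-contiguous positions in 'table'. */
__m256 gather_every_fourth(const float *table) {
    __m256i idx = _mm256_setr_epi32(0, 4, 8, 12, 16, 20, 24, 28);
    return _mm256_i32gather_ps(table, idx, 4);   /* VGATHERDPS */
}
```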
New instructions
| Instruction | Description |
|---|---|
| VBROADCASTSS, VBROADCASTSD | Copy a 32-bit or 64-bit register operand to all elements of an XMM or YMM vector register. These are register versions of the same instructions in AVX1. There is no 128-bit version, but the same effect can be simply achieved using VINSERTF128. |
| VPBROADCASTB, VPBROADCASTW, VPBROADCASTD, VPBROADCASTQ | Copy an 8-, 16-, 32- or 64-bit integer register or memory operand to all elements of an XMM or YMM vector register. |
| VBROADCASTI128 | Copy a 128-bit memory operand to all elements of a YMM vector register. |
| VINSERTI128 | Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged. |
| VEXTRACTI128 | Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand. |
| VGATHERDPD, VGATHERQPD, VGATHERDPS, VGATHERQPS | Gathers single- or double-precision floating-point values using either 32- or 64-bit indices and a scale factor. |
| VPGATHERDD, VPGATHERDQ, VPGATHERQD, VPGATHERQQ | Gathers 32- or 64-bit integer values using either 32- or 64-bit indices and a scale factor. |
| VPMASKMOVD, VPMASKMOVQ | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. |
| VPERMPS, VPERMD | Shuffle the eight 32-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |
| VPERMPD, VPERMQ | Shuffle the four 64-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |
| VPERM2I128 | Shuffle (two of) the four 128-bit vector elements of two 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector. |
| VPBLENDD | Doubleword immediate version of the PBLEND instructions from SSE4. |
| VPSLLVD, VPSLLVQ | Shift left logical. Allows variable shifts where each element is shifted according to the packed input. A short intrinsics sketch follows this table. |
| VPSRLVD, VPSRLVQ | Shift right logical. Allows variable shifts where each element is shifted according to the packed input. |
| VPSRAVD | Shift right arithmetically. Allows variable shifts where each element is shifted according to the packed input. |
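Following up on the variable-shift entries above, a small illustrative C sketch (the shift counts are arbitrary example values):

```c
#include <immintrin.h>

/* Shift each 32-bit lane by a different amount, left then right again. */
__m256i shift_each_lane(__m256i values) {
    __m256i counts = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    __m256i left   = _mm256_sllv_epi32(values, counts);  /* VPSLLVD */
    return _mm256_srlv_epi32(left, counts);               /* VPSRLVD */
}
```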
CPUs with AVX2
AVX-512
AVX-512 is a set of 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for the x86 instruction set architecture, proposed by Intel in July 2013.[3]
AVX-512 instructions are encoded with the new EVEX prefix. It allows 4 operands, 8 new 64-bit opmask registers, scalar memory mode with automatic broadcast, explicit rounding control, and compressed displacement memory addressing mode. The width of the register file is increased to 512 bits and total register count increased to 32 (registers ZMM0-ZMM31) in x86-64 mode.
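A minimal, hedged C sketch of the opmask mechanism described above, using AVX-512F intrinsics (the mask value is arbitrary and a flag such as -mavx512f is assumed):

```c
#include <immintrin.h>

/* Add two 512-bit vectors but write only the low 8 of 16 float lanes;
 * the _maskz_ form zeroes the unselected lanes, _mask_ would merge them. */
__m512 masked_add(__m512 a, __m512 b) {
    __mmask16 k = 0x00FF;
    return _mm512_maskz_add_ps(k, a, b);   /* VADDPS zmm{k}{z}, zmm, zmm */
}
```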
AVX-512 consists of multiple instruction subsets, not all of which are meant to be supported by all processors implementing them. The instruction set consists of the following:
- AVX-512 Foundation (F) – adds several new instructions and expands most 32- and 64-bit floating-point SSE-SSE4.1 and AVX/AVX2 instructions with EVEX coding scheme to support the 512-bit registers, operation masks, parameter broadcasting, and embedded rounding and exception control
- AVX-512 Conflict Detection Instructions (CD) – efficient conflict detection to allow more loops to be vectorized, supported by Knights Landing[3]
- AVX-512 Exponential and Reciprocal Instructions (ER) – exponential and reciprocal operations designed to help implement transcendental operations, supported by Knights Landing[3]
- AVX-512 Prefetch Instructions (PF) – new prefetch capabilities, supported by Knights Landing[3]
- AVX-512 Vector Length Extensions (VL) – extends most AVX-512 operations to also operate on XMM (128-bit) and YMM (256-bit) registers (including XMM16-XMM31 and YMM16-YMM31 in x86-64 mode)[25]
- AVX-512 Byte and Word Instructions (BW) – extends AVX-512 to cover 8-bit and 16-bit integer operations[25]
- AVX-512 Doubleword and Quadword Instructions (DQ) – enhanced 32-bit and 64-bit integer operations[25]
- AVX-512 Integer Fused Multiply Add (IFMA) – fused multiply add for 512-bit integers.[26]: 746
- AVX-512 Vector Byte Manipulation Instructions (VBMI) adds vector byte permutation instructions which are not present in AVX-512BW.
- AVX-512 Vector Neural Network Instructions Word variable precision (4VNNIW) – vector instructions for deep learning.
- AVX-512 Fused Multiply Accumulation Packed Single precision (4FMAPS) – vector instructions for deep learning.
- VPOPCNTDQ – count of bits set to 1.[27]
- VPCLMULQDQ – carry-less multiplication of quadwords.[27]
- AVX-512 Vector Neural Network Instructions (VNNI) – vector instructions for deep learning.[27]
- AVX-512 Galois Field New Instructions (GFNI) – vector instructions for calculating Galois field.[27]
- AVX-512 Vector AES instructions (VAES) – vector instructions for AES coding.[27]
- AVX-512 Vector Byte Manipulation Instructions 2 (VBMI2) – byte/word load, store and concatenation with shift.[27]
- AVX-512 Bit Algorithms (BITALG) – byte/word bit manipulation instructions expanding VPOPCNTDQ.[27]
- AVX-512 Bfloat16 Floating-Point Instructions (BF16) – vector instructions for AI acceleration.
- AVX-512 Half-Precision Floating-Point Instructions (FP16) – vector instructions for operating on floating-point and complex numbers with reduced precision.
Only the core extension AVX-512F (AVX-512 Foundation) is required by all implementations, though all current implementations also support CD (conflict detection). All central processors with AVX-512 also support VL, DQ and BW. The ER, PF, 4VNNIW and 4FMAPS instruction set extensions are currently only implemented in Intel computing coprocessors.
The updated SSE/AVX instructions in AVX-512F use the same mnemonics as AVX versions; they can operate on 512-bit ZMM registers, and will also support 128/256 bit XMM/YMM registers (with AVX-512VL) and byte, word, doubleword and quadword integer operands (with AVX-512BW/DQ and VBMI).[26]: 23
Discontinued subsets include:
- AVX-512 Vector Pair Intersection to a Pair of Mask Registers (VP2INTERSECT) – Compute intersection between doublewords/quadwords to a pair of mask registers. Discontinued by Intel.
- Xeon Phi ER, PF, 4FMAPS, 4VNNIW.
AVX-512 CPU compatibility table
| Subset | F | CD | ER | PF | 4FMAPS | 4VNNIW | VPOPCNTDQ | VL | DQ | BW | IFMA | VBMI | VBMI2 | BITALG | VNNI | BF16 | VPCLMULQDQ | GFNI | VAES | VP2INTERSECT | FP16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Intel Knights Landing (2016) | Yes | Yes | No | ||||||||||||||||||
| Intel Knights Mill (2017) | Yes | No | |||||||||||||||||||
| Intel Skylake-SP, Skylake-X (2017) | No | No | Yes | No | |||||||||||||||||
| Intel Cannon Lake (2018) | Yes | No | |||||||||||||||||||
| Intel Cascade Lake-SP (2019) | No | Yes | No | ||||||||||||||||||
| Intel Cooper Lake (2020) | No | Yes | No | ||||||||||||||||||
| Intel Ice Lake (2019) | Yes | No | Yes | No | |||||||||||||||||
| Intel Tiger Lake (2020) | Yes | No | |||||||||||||||||||
| Intel Rocket Lake (2021) | No | ||||||||||||||||||||
| Intel Alder Lake (2021) | PartialNote 1 | PartialNote 1 | |||||||||||||||||||
| AMD Zen 4 (2022) | Yes | Yes | No | ||||||||||||||||||
| Intel Sapphire Rapids (2023) | No | Yes | |||||||||||||||||||
| AMD Zen 5 (2024) | Yes | No | |||||||||||||||||||
^Note 1: Intel does not officially support the AVX-512 family of instructions on the Alder Lake microprocessors. In early 2022, Intel began disabling AVX-512 in silicon (fusing it off) in Alder Lake microprocessors to prevent customers from enabling it.[29] In older Alder Lake family CPUs with some legacy combinations of BIOS and microcode revisions, it was possible to execute AVX-512 instructions after disabling all the efficiency cores, which do not contain the silicon for AVX-512.[30][31][32]
Compilers supporting AVX-512
- Clang 3.9 and newer[33]
- GCC 4.9 and newer[34]
- ICC 15.0.1 and newer[35]
- Microsoft Visual Studio 2017 C++ Compiler[36]
Assemblers supporting AVX-512
AVX-VNNI, AVX-IFMA
AVX-VNNI is a VEX-coded variant of the AVX512-VNNI instruction set extension. Similarly, AVX-IFMA is a VEX-coded variant of AVX512-IFMA. These extensions provide the same sets of operations as their AVX-512 counterparts, but are limited to 256-bit vectors and do not support any additional features of EVEX encoding, such as broadcasting, opmask registers or accessing more than 16 vector registers. These extensions allow support of VNNI and IFMA operations even when full AVX-512 support is not implemented in the processor.
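To clarify what the shared VNNI operation computes, here is a scalar C reference model of a single 32-bit lane of VPDPBUSD; it illustrates the semantics only and is not the instruction or intrinsic interface itself.

```c
#include <stdint.h>

/* One 32-bit lane of VPDPBUSD: four unsigned x signed byte products are
 * summed and accumulated into a signed 32-bit value. AVX-VNNI applies this
 * to 8 lanes (256-bit) and AVX512-VNNI to 16 lanes (512-bit) per instruction. */
int32_t vpdpbusd_lane(int32_t acc, const uint8_t a[4], const int8_t b[4]) {
    for (int i = 0; i < 4; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}
```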
CPUs with AVX-VNNI
- Intel
- Alder Lake processors (Q4 2021) and newer.
- AMD
CPUs with AVX-IFMA
- Intel
- Sierra Forest E-core-only Xeon processors (Q2 2024) and newer.
- Grand Ridge special-purpose processors and newer.
- Meteor Lake mobile processors (Q4 2023) and newer.
- Arrow Lake desktop processors (Q4 2024) and newer.
AVX10
AVX10, announced in July 2023,[38] is a new, "converged" AVX instruction set. It addresses several issues of AVX-512; in particular, that it is split into too many parts[39] (20 feature flags). The initial technical paper also made 512-bit vectors optional to support, but as of revision 3.0, vector length enumeration is removed and 512-bit vectors are mandatory.[40]
AVX10 presents a simplified CPUID interface to test for instruction support, consisting of the AVX10 version number (indicating the set of instructions supported, with later versions always being a superset of an earlier one).[41] For example, AVX10.2 indicates that a CPU is capable of the second version of AVX10.[42] Initial revisions of the AVX10 technical specifications also included maximum supported vector length as part of the ISA extension name, e.g. AVX10.2/256 would mean a second version of AVX10 with vector length up to 256 bits, but later revisions made that unnecessary.
The first version of AVX10, notated AVX10.1, does not introduce any instructions or encoding features beyond what is already in AVX-512 (specifically, in Intel Sapphire Rapids: AVX-512F, CD, VL, DQ, BW, IFMA, VBMI, VBMI2, BITALG, VNNI, GFNI, VPOPCNTDQ, VPCLMULQDQ, VAES, BF16, FP16). For CPUs supporting AVX10 and 512-bit vectors, all legacy AVX-512 feature flags will remain set to facilitate applications supporting AVX-512 to continue using AVX-512 instructions.[42]
AVX10.1 was first released in Intel Granite Rapids[42] (Q3 2024) and AVX10.2 will be available in Diamond Rapids[43] and Nova Lake.[44]
Applications
- Suitable for floating-point-intensive calculations in multimedia, scientific and financial applications (AVX2 adds support for integer operations).
- Increases parallelism and throughput in floating-point SIMD calculations.
- Reduces register load due to the non-destructive instructions.
- Improves Linux RAID software performance (requires AVX2, AVX is not sufficient)[45]
Software
- Cryptography
- BSAFE C toolkits use AVX and AVX2 where appropriate to accelerate various cryptographic algorithms.[46]
- OpenSSL uses AVX- and AVX2-optimized cryptographic functions since version 1.0.2.[47] Support for AVX-512 was added in version 3.0.0.[48] Some of these optimizations are also present in various clones and forks, like LibreSSL.
- Multimedia
- Science, engineering and others
- Esri ArcGIS Data Store uses AVX2 for graph storage.[54]
- Prime95/MPrime, the software used for GIMPS, has used AVX instructions since version 27.1, AVX2 since 28.6 and AVX-512 since 29.1.[55]
- Einstein@Home uses AVX in some of their distributed applications that search for gravitational waves.[56]
- TensorFlow 1.6 and later require a CPU supporting at least AVX.[57]
- EmEditor 19.0 and above uses AVX2 to speed up processing.[58]
- Microsoft Teams uses AVX2 instructions to create a blurred or custom background behind video chat participants,[59] and for background noise suppression.[60]
- simdjson, a JSON parsing library, uses AVX2 and AVX-512 to achieve improved decoding speed.[61][62]
- x86-simd-sort, a library with sorting algorithms for 16, 32 and 64-bit numeric data types, uses AVX2 and AVX-512. The library is used in NumPy and OpenJDK to accelerate sorting algorithms.[63]
- Tesseract OCR engine uses AVX, AVX2 and AVX-512 to accelerate character recognition.[64]
Downclocking
Since AVX instructions are wider, they consume more power and generate more heat. Executing heavy AVX instructions at high CPU clock frequencies may affect CPU stability due to excessive voltage droop during load transients. Some Intel processors have provisions to reduce the Turbo Boost frequency limit when such instructions are being executed. This reduction happens even if the CPU hasn't reached its thermal and power consumption limits.
On Skylake and its derivatives, the throttling is divided into three levels:[65][66]
- L0 (100%): The normal turbo boost limit.
- L1 (~85%): The "AVX boost" limit. Soft-triggered by 256-bit "heavy" (floating-point unit: FP math and integer multiplication) instructions. Hard-triggered by "light" (all other) 512-bit instructions.
- L2 (~60%): The "AVX-512 boost" limit. Soft-triggered by 512-bit heavy instructions.
The frequency transition can be soft or hard. Hard transition means the frequency is reduced as soon as such an instruction is spotted; soft transition means that the frequency is reduced only after reaching a threshold number of matching instructions. The limit is per-thread.[65]
In Ice Lake, only two levels persist:[67]
- L0 (100%): The normal turbo boost limit.
- L1 (~97%): Triggered by any 512-bit instructions, but only when single-core boost is active; not triggered when multiple cores are loaded.
Rocket Lake processors do not trigger frequency reduction upon executing any kind of vector instructions regardless of the vector size.[67] However, downclocking can still happen due to other reasons, such as reaching thermal and power limits.
Downclocking means that using AVX in a mixed workload with an Intel processor can incur a frequency penalty. Avoiding the use of wide and heavy instructions helps minimize the impact in these cases. AVX-512VL allows for using 256-bit or 128-bit operands in AVX-512 instructions, making it a sensible default for mixed loads.[68]
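A hedged C sketch of that AVX-512VL approach: the EVEX features (opmask registers, zeroing) are applied to 256-bit YMM operands, which keeps the code out of the heavier 512-bit frequency levels on affected CPUs. It assumes compilation with -mavx512f -mavx512vl, and the mask value is arbitrary.

```c
#include <immintrin.h>

/* EVEX-encoded 256-bit add: only the low 4 of 8 float lanes are written,
 * the rest are zeroed. Requires AVX-512F plus the VL extension. */
__m256 masked_add_256(__m256 a, __m256 b) {
    __mmask8 k = 0x0F;
    return _mm256_maskz_add_ps(k, a, b);
}
```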
On supported and unlocked variants of processors that down-clock, the clock ratio reduction offsets (typically called AVX and AVX-512 offsets) are adjustable and may be turned off entirely (set to 0x) via Intel's Overclocking / Tuning utility or in BIOS if supported there.[69]
See also
- F16C instruction set extension
- Memory Protection Extensions
- Scalable Vector Extension for ARM - a new vector instruction set (supplementing VFP and NEON) similar to AVX-512, with some additional features.
References
- ^ Kanter, David (September 25, 2010). "Intel's Sandy Bridge Microarchitecture". www.realworldtech.com. Retrieved February 17, 2018.
- ^ Hruska, Joel (October 24, 2011). "Analyzing Bulldozer: Why AMD's chip is so disappointing - Page 4 of 5 - ExtremeTech". ExtremeTech. Retrieved February 17, 2018.
- ^ a b c d e James Reinders (July 23, 2013), AVX-512 Instructions, Intel, retrieved August 20, 2013
- ^ "Intel Xeon Phi Processor 7210 (16GB, 1.30 GHz, 64 core) Product Specifications". Intel ARK (Product Specs). Retrieved March 16, 2018.
- ^ "14.9". Intel 64 and IA-32 Architectures Software Developer's Manual Volume 1: Basic Architecture (PDF) (-051US ed.). Intel Corporation. p. 349. Retrieved August 23, 2014.
Memory arguments for most instructions with VEX prefix operate normally without causing #GP(0) on any byte-granularity alignment (unlike Legacy SSE instructions).
- ^ "i386 and x86-64 Options - Using the GNU Compiler Collection (GCC)". Retrieved February 9, 2014.
- ^ "The microarchitecture of Intel, AMD and VIA CPUs: An optimization guide for assembly programmers and compiler makers" (PDF). Retrieved October 17, 2016.
- ^ "Chess programming AVX2". Archived from the original on July 10, 2017. Retrieved October 17, 2016.
- ^ "Intel Offers Peek at Nehalem and Larrabee". ExtremeTech. March 17, 2008.
- ^ a b "Intel® Celeron® 6305 Processor (4M Cache, 1.80 GHz, with IPU) Product Specifications". ark.intel.com. Retrieved November 10, 2020.
- ^ Butler, Michael; Barnes, Leslie; Das Sarma, Debjit; Gelinas, Bob (March–April 2011). "Bulldozer: An Approach to Multithreaded Compute Performance" (PDF). IEEE Micro. 31 (2): 6–15. doi:10.1109/MM.2011.23. S2CID 28236214. Archived from the original (PDF) on May 19, 2024.
- ^ "What's New - RAD Studio". docwiki.embarcadero.com. Retrieved September 17, 2021.
- ^ "GAS Changes". sourceware.org. Retrieved May 3, 2024.
- ^ a b "NASM - The Netwide Assembler, Appendix C: NASM Version History". nasm.us. Retrieved May 3, 2024.
- ^ "YASM 0.7.0 Release Notes". yasm.tortall.net.
- ^ Add support for the extended FPU states on amd64, both for native 64bit and 32bit ABIs, svnweb.freebsd.org, January 21, 2012, retrieved January 22, 2012
- ^ "FreeBSD 9.1-RELEASE Announcement". Retrieved May 20, 2013.
- ^ x86: add linux kernel support for YMM state, retrieved July 13, 2009
- ^ Linux 2.6.30 - Linux Kernel Newbies, retrieved July 13, 2009
- ^ Twitter, retrieved June 23, 2010
- ^ "Devs are making progress getting macOS Ventura to run on unsupported, decade-old Macs". August 23, 2022.
- ^ Add support for saving/restoring FPU state using the XSAVE/XRSTOR., retrieved March 25, 2015
- ^ Floating-Point Support for 64-Bit Drivers, retrieved December 6, 2009
- ^ Haswell New Instruction Descriptions Now Available, Software.intel.com, retrieved January 17, 2012
- ^ a b c James Reinders (July 17, 2014). "Additional AVX-512 instructions". Intel. Retrieved August 3, 2014.
- ^ a b "Intel Architecture Instruction Set Extensions Programming Reference" (PDF). Intel. Retrieved January 29, 2014.
- ^ a b c d e f g "Intel® Architecture Instruction Set Extensions and Future Features Programming Reference". Intel. Retrieved October 16, 2017.
- ^ "Intel® Software Development Emulator | Intel® Software". software.intel.com. Retrieved June 11, 2016.
- ^ Alcorn, Paul (March 2, 2022). "Intel Nukes Alder Lake's AVX-512 Support, Now Fuses It Off in Silicon". Tom's Hardware. Retrieved March 7, 2022.
- ^ Cutress, Ian; Frumusanu, Andrei (August 19, 2021). "Intel Architecture Day 2021: Alder Lake, Golden Cove, and Gracemont Detailed". AnandTech. Archived from the original on August 25, 2021. Retrieved August 25, 2021.
- ^ Alcorn, Paul (August 19, 2021). "Intel Architecture Day 2021: Alder Lake Chips, Golden Cove and Gracemont Cores". Tom's Hardware. Retrieved August 21, 2021.
- ^ Cutress, Ian; Frumusanu, Andrei. "The Intel 12th Gen Core i9-12900K Review: Hybrid Performance Brings Hybrid Complexity". www.anandtech.com. Archived from the original on November 4, 2021. Retrieved November 5, 2021.
- ^ "LLVM 3.9 Release Notes — LLVM 3.9 documentation". releases.llvm.org. Retrieved April 3, 2017.
- ^ "GCC 4.9 Release Series — Changes, New Features, and Fixes – GNU Project - Free Software Foundation (FSF)". gcc.gnu.org. Retrieved April 3, 2017.
- ^ "Intel® Parallel Studio XE 2015 Composer Edition C++ Release Notes | Intel® Software". software.intel.com. Retrieved April 3, 2017.
- ^ "Microsoft Visual Studio 2017 Supports Intel® AVX-512". July 11, 2017.
- ^ "AMD Zen 5 Compiler Support Posted For GCC - Confirms New AVX Features & More". www.phoronix.com. Retrieved February 10, 2024.
- ^ Bonshor, Gavin (July 25, 2023). "Intel Unveils AVX10 and APX Instruction Sets: Unifying AVX-512 For Hybrid Architectures". AnandTech. Archived from the original on July 25, 2023. Retrieved August 21, 2024.
- ^ Mann, Tobias (August 15, 2023). "Intel's AVX10 promises benefits of AVX-512 without baggage". www.theregister.com. Retrieved August 20, 2023.
- ^ Larabel, Michael (March 19, 2025). "Intel AVX10 Drops Optional 512-bit: No AVX10 256-bit Only E-Cores In The Future". Phoronix. Retrieved March 19, 2025.
- ^ "The Converged Vector ISA: Intel® Advanced Vector Extensions 10 Technical Paper". Intel.
- ^ a b c "Intel® Advanced Vector Extensions 10 (Intel® AVX10) Architecture Specification". Intel.
- ^ Larabel, Michael (October 23, 2024). "Intel Preps GCC Compiler For New AMX & ISA Features Ahead Of Diamond Rapids". Phoronix. Retrieved October 23, 2024.
- ^ "Intel Now Confirms Nova Lake Will Support AVX10.2 & APX Extensions". www.phoronix.com. Retrieved November 13, 2025.
- ^ "Linux RAID". LWN. February 17, 2013. Archived from the original on April 15, 2013.
- ^ "Comparison of BSAFE cryptographic library implementations". July 25, 2023.
- ^ "Improving OpenSSL Performance". May 26, 2015. Retrieved February 28, 2017.
- ^ "OpenSSL 3.0.0 release notes". GitHub. September 7, 2021.
- ^ Jaroš, Milan; Strakoš, Petr; Říha, Lubomír (May 28, 2022). "Rendering in Blender using AVX-512 Vectorization" (PDF). Intel eXtreme Performance Users Group. Technical University of Ostrava. Retrieved October 28, 2022.
- ^ "MASSIVE X Requires AVX Compatible Processor". Native Instruments. Retrieved November 29, 2019.
- ^ "dav1d: performance and completion of the first release". November 21, 2018. Retrieved November 22, 2018.
- ^ "dav1d 0.6.0 release notes". March 6, 2020.
- ^ "SVT-AV1 0.7.0 release notes". September 26, 2019.
- ^ "ArcGIS Data Store 11.2 System Requirements". ArcGIS Enterprise. Retrieved January 24, 2024.
- ^ "Prime95 release notes". Retrieved July 10, 2022.
- ^ "Einstein@Home Applications".
- ^ "Tensorflow 1.6". GitHub.
- ^ New in Version 19.0 – EmEditor (Text Editor)
- ^ "Hardware requirements for Microsoft Teams". Microsoft. Retrieved April 17, 2020.
- ^ "Reduce background noise in Teams meetings". Microsoft Support. Retrieved January 5, 2021.
- ^ Langdale, Geoff; Lemire, Daniel (2019). "Parsing Gigabytes of JSON per Second". The VLDB Journal. 28 (6): 941–960. arXiv:1902.08318. doi:10.1007/s00778-019-00578-5. S2CID 67856679.
- ^ "simdjson 2.1.0 release notes". GitHub. June 30, 2022.
- ^ Larabel, Michael (October 6, 2023). "OpenJDK Merges Intel's x86-simd-sort For Speeding Up Data Sorting 7~15x". Phoronix.
- ^ Larabel, Michael (July 7, 2022). "Tesseract OCR 5.2 Engine Finds Success With AVX-512F". Phoronix.
- ^ a b Lemire, Daniel (September 7, 2018). "AVX-512: when and how to use these new instructions". Daniel Lemire's blog.
- ^ BeeOnRope. "SIMD instructions lowering CPU frequency". Stack Overflow.
- ^ a b Downs, Travis (August 19, 2020). "Ice Lake AVX-512 Downclocking". Performance Matters blog.
- ^ "x86 - AVX 512 vs AVX2 performance for simple array processing loops". Stack Overflow.
- ^ "Intel® Extreme Tuning Utility (Intel® XTU) Guide to Overclocking : Advanced Tuning". Intel. Retrieved July 18, 2021.
See image in linked section, where AVX2 ratio has been set to 0.
External links
Advanced Vector Extensions
Introduction
Background and Motivation
Single Instruction, Multiple Data (SIMD) is a parallel computing paradigm that enables a single instruction to operate simultaneously on multiple data elements stored in vector registers, thereby accelerating compute-intensive tasks such as multimedia processing, scientific simulations, and artificial intelligence workloads.[5] This approach exploits data-level parallelism inherent in applications where the same operation is applied across arrays of data, improving throughput without requiring complex thread management.[5] In the x86 architecture, SIMD extensions have evolved to support wider vectors and more efficient operations, addressing the demands of increasingly data-parallel software.

Advanced Vector Extensions (AVX) were proposed by Intel in March 2008 to overcome the limitations of prior Streaming SIMD Extensions (SSE), which were constrained to 128-bit vector widths that proved insufficient for emerging workloads requiring higher parallelism.[6] The motivation stemmed from the growing prevalence of data-intensive applications in fields like video encoding and numerical simulations, where SSE's narrower registers limited the number of elements processed per instruction and its two-operand format often required temporary storage to preserve source data.[5] By expanding to 256-bit vectors, AVX enabled broader data paths, reducing the instruction count and enhancing performance for floating-point operations central to these domains.[5]

AVX builds on x86's vector register foundation, utilizing 16 XMM registers (128 bits each) from SSE while introducing 16 YMM registers (256 bits each) that alias the XMM registers as their lower halves, with provisions for future extensions like 512-bit ZMM registers.[5] This architecture maintains backward compatibility while allowing developers to leverage wider vectors for up to 2x the floating-point performance of SSE in suitable workloads, such as parallel arithmetic on arrays of single- or double-precision values.[5] Subsequent developments, including AVX-512 with its 512-bit vectors, further extended this capability for even more demanding parallel computations.[7]

Evolution and Versions
Advanced Vector Extensions (AVX) were first introduced by Intel in March 2008 as a proposal to extend the x86 instruction set with 256-bit vector operations, debuting in hardware with the Sandy Bridge microarchitecture in 2011. This marked the initial step in broadening SIMD capabilities beyond the 128-bit vectors of SSE, targeting high-performance computing and multimedia workloads. The evolution continued with AVX2 in 2013, integrated into Intel's Haswell processors, which expanded integer operations to 256 bits and added gather instructions for irregular data access. In 2016, Intel released AVX-512 with the Knights Landing Xeon Phi coprocessor, doubling vector width to 512 bits and introducing foundational subsets like F, CD, ER, and PF for floating-point, conflict detection, and prefetching, driven by demands in scientific simulations and data analytics.[7] AMD began supporting AVX and AVX2 with its Zen architecture in 2017, aligning consumer processors with Intel's extensions for broader ecosystem compatibility.

Subsequent refinements addressed specialized needs, with AVX-512 VNNI (Vector Neural Network Instructions) appearing in 2019 on Cascade Lake processors to accelerate deep learning convolutions via low-precision integer multiply-accumulate operations.[8] Similarly, AVX-512 IFMA (Integer Fused Multiply-Add) was defined for high-throughput integer math in cryptography and compression, first shipping in Cannon Lake in 2018 with wider adoption in server chips like Ice Lake in 2019. These extensions reflected growing AI and HPC pressures, incorporating formats like BF16 for machine learning efficiency.

In July 2023, Intel announced AVX10 as a converged instruction set to unify the fragmented AVX-512 subsets, simplifying software detection and enabling consistent vector lengths across cores, with initial support in Xeon processors via AVX10.1. This transition aimed to resolve compatibility issues in hybrid architectures, where AVX-512 had been inconsistently implemented since its disablement in consumer Alder Lake chips in 2021. AMD extended AVX-512 support to its Zen 4 architecture in 2022, using double-pumped 256-bit units, and enhanced it with native 512-bit paths in Zen 5 by 2024.

The AVX10.2 specification, released on May 8, 2025, mandated 512-bit vector support across all cores, including E-cores, while incorporating BF16 and FP8 for AI acceleration; its July 2025 update further aligned with APX for register expansions.[9] Advanced Performance Extensions (APX), also announced in July 2023, complement AVX10 by doubling the number of general-purpose registers to 32 and adding features like conditional moves, with specifications finalized in July 2025 to boost scalar performance in vector-heavy workloads.[10] As of November 2025, Intel has confirmed support for AVX10.2, including mandatory 512-bit vector operations across all cores (P-cores and E-cores), and APX in its upcoming Nova Lake processors, scheduled for release in 2026.[11] These developments underscore the ongoing push to balance AI/HPC demands with efficient, unified ISAs across Intel and AMD platforms.[12]

AVX and AVX2
AVX Features and Instructions
Advanced Vector Extensions (AVX) introduced a significant expansion of the x86 SIMD register set by adding 256-bit YMM registers, labeled YMM0 through YMM15, which overlay the existing 128-bit XMM registers used in SSE instructions.[13] These YMM registers enable processing of wider vectors, such as eight single-precision floating-point values or four double-precision values in parallel, doubling the throughput compared to SSE's 128-bit operations.[1] To maintain compatibility with legacy SSE code, the upper 128 bits of each YMM register are automatically zeroed upon entry into AVX mode or explicitly cleared using dedicated instructions, preventing unintended data mixing or state corruption.[13]

AVX employs a new Vector Extension (VEX) prefix for instruction encoding, available in either 2-byte or 3-byte formats, which replaces the legacy SSE encoding scheme and supports the expanded register file and vector widths.[13] The 2-byte VEX prefix begins with the byte 0xC5, while the 3-byte version starts with 0xC4 followed by fields for register specification (R, X, B), opcode map selection (m-mmmm), operand width (W), source operand specifier (vvvv), vector length (L), and SIMD prefix (pp).[13] This encoding allows AVX instructions to use a three-operand syntax, where the destination register is distinct from the two source operands (e.g., dest = src1 op src2), unlike SSE's two-operand form that overwrites one source, thereby reducing register pressure and improving code efficiency.[1]

A core feature of AVX is its support for 256-bit floating-point operations, exemplified by instructions like VADDPS, which performs packed single-precision addition across a full YMM register: for instance, VADDPS YMM1, YMM2, YMM3 computes the sum of corresponding elements in YMM2 and YMM3, storing the result in YMM1 and processing eight 32-bit floats in parallel without carry-over between the two 128-bit lanes.[13] Other key instructions include VBROADCASTSS and VBROADCASTSD, which load a scalar single- or double-precision value from memory and replicate it across all eight or four elements of a YMM register, respectively (encoded as VEX.256.66.0F38.W0 18 /r for SS and 19 /r for SD).[13] VINSERTF128 and VEXTRACTF128 facilitate lane-level manipulation by inserting or extracting a 128-bit value into or from a specific half of a YMM register (VEX.256.66.0F3A.W0 18 /r and 19 /r), enabling efficient construction or decomposition of 256-bit vectors from 128-bit sources.[13] Permutation and data reorganization are handled by VPERMILPS and VPERMILPD, which rearrange single- or double-precision elements within a YMM register based on an immediate control mask or another register (VEX.256.66.0F38.W0 0C /r for PS and 0D /r for PD), allowing arbitrary reordering within each 128-bit lane.[13] Masked memory operations are provided by VMASKMOVPS and VMASKMOVPD, which conditionally load or store packed floats using a writemask derived from the sign bits of a YMM register (VEX.256.66.0F38.W0 2C /r for the PS load and 2E /r for the store), useful for sparse or conditional vector processing.[13] For register management, VZEROALL zeros all 256 bits of every YMM register (VEX.256.0F 77), while VZEROUPPER clears only the upper 128 bits across all YMM registers (VEX.128.0F 77), ensuring clean transitions between AVX and SSE execution.[13] Additionally, VTESTPS and VTESTPD test the sign bits of packed single- or double-precision values against a mask register, setting processor flags based on equality, greater-than, or zero conditions without modifying the operands.[13]

These features collectively enable AVX to operate in a dedicated 256-bit mode, distinct from SSE's 128-bit processing, while preserving backward compatibility through explicit zeroing and VEX-distinguished opcodes.[1] AVX2 later extended similar vector capabilities to integer operations, but the original AVX focused primarily on floating-point acceleration.[13]

AVX2 Enhancements
AVX2 extends the 256-bit single instruction, multiple data (SIMD) floating-point operations introduced in AVX by adding comprehensive support for integer vector processing, enabling more efficient handling of data-intensive workloads such as image processing and cryptography.[3] This integer extension operates on YMM registers and includes packed arithmetic and shift instructions across various data granularities, significantly broadening the applicability of vectorization beyond floating-point domains.[3]

The core integer operations encompass addition (e.g., VPADDB for bytes, VPADDW for words, VPADDD for doublewords, VPADDQ for quadwords), subtraction (e.g., VPSUBB, VPSUBW, VPSUBD, VPSUBQ), multiplication (e.g., VPMULLW for words, VPMULLD for doublewords, VPMULLQ for quadwords), and shifts (e.g., VPSLLW/D/Q for logical left shifts, VPSRLW/D/Q for logical right shifts, VPSRAW/D for arithmetic right shifts).[3] Variable-shift variants like VPSLLVD, VPSLLVQ, VPSRLVD, and VPSRLVQ allow per-element control for more flexible data manipulation.[3] Additional integer instructions include averages (VPAVGB, VPAVGW), multiply-adds (VPMADDWD, VPMADDUBSW), maximums (VPMAXSB, VPMAXSW, VPMAXSD, VPMAXSQ), and unpack operations (VPUNPCKHBW, VPUNPCKLBW, etc.) for interleaving data.[3] These operations process twice as many elements per instruction as AVX's 128-bit integers, yielding up to 2x throughput for integer-heavy computations.[3]

AVX2 also introduces gather instructions to vectorize non-contiguous memory accesses, a capability absent in AVX.[3] These include floating-point gathers VGATHERDPD (double-precision with dword indices), VGATHERQPS (single-precision with qword indices), and integer gathers VPGATHERDD (doublewords with dword indices), VPGATHERQD (doublewords with qword indices), which load scattered elements into a 256-bit register using a base address, index vector, and optional scale factor.[3] By enabling indexed loads without prior sorting or alignment, these instructions accelerate algorithms like sparse matrix operations and database queries.[3]

For enhanced floating-point performance, AVX2 incorporates FMA3 instructions that fuse multiplication and addition into a single operation, reducing latency and improving precision.[3] Key examples are VFMADDPS (packed single-precision, processing 8 elements: dest = a * b + c) and VFMADDPD (packed double-precision, processing 4 elements: dest = a * b + c), with variants like VFNMADD132PD for negated multiply-add and VFNMSUB231PS for negated multiply-subtract.[3] These build on AVX's floating-point foundation by enabling more efficient linear algebra and signal processing tasks.[3]

Additional instructions in AVX2 facilitate advanced data rearrangement and bit manipulation.[3] VPERM2I128 permutes two 128-bit lanes between source operands to form a 256-bit result, while VINSERTI128 inserts a 128-bit integer vector into a selected position of a 256-bit destination.[3] For bit-level operations, PEXT extracts specified bits from a source into contiguous positions in the destination, and PDEP deposits bits from contiguous positions into specified locations in the destination, aiding compression and population count algorithms.[3] Overall, these enhancements double integer processing width and introduce targeted primitives, providing substantial performance gains for vectorized integer workloads over AVX.[3]

Hardware and Software Support for AVX and AVX2
Advanced Vector Extensions (AVX) were first introduced in Intel's Sandy Bridge microarchitecture processors in 2011, providing 256-bit floating-point vector operations.[14] Subsequent Intel architectures, including Ivy Bridge (2012), Haswell (2013, which added AVX2 for enhanced integer operations), Broadwell (2014), Skylake (2015), Coffee Lake (2017), Comet Lake (2019), Tiger Lake (2020), Alder Lake (2021), Raptor Lake (2022), Meteor Lake (2023), Arrow Lake (2024), Lunar Lake (2024), and later generations, all include full support for both AVX and AVX2.[14] AMD's Bulldozer family (2011) offered partial AVX support with 256-bit floating-point capabilities but limited integer handling, while full AVX2 integration began with the Zen microarchitecture in Ryzen processors starting in 2017, extending through Zen 2 (2019), Zen 3 (2020), Zen 4 (2022), Zen 5 (2024), and later.[15] VIA Technologies provided early AVX support in its Nano QuadCore series around 2013, but did not implement AVX2; Zhaoxin CPUs, based on VIA designs, incorporated AVX around 2013 and AVX2 starting with models like the KX-5000 series in 2018.[16][17] Software ecosystems have broadly adopted AVX and AVX2 through compiler and operating system integrations. The GNU Compiler Collection (GCC) added AVX support in version 4.6 via the -mavx flag, enabling automatic vectorization for compatible code, with AVX2 following in version 4.7 using -mavx2.[18] Clang/LLVM introduced AVX intrinsics and code generation in version 3.0 (2011), supporting -mavx for Sandy Bridge-level optimization and later -mavx2 for Haswell.[19] Microsoft Visual Studio's C++ compiler (MSVC) provided AVX intrinsics in Visual Studio 2010 Service Pack 1, with full /arch:AVX2 option available from Visual Studio 2013 Update 2 for generating AVX2 instructions and auto-vectorization in loops like those for matrix multiplication.[20] Operating systems facilitate AVX/AVX2 usage via CPU feature detection and state management. Windows 7 and later versions, starting with Service Pack 1 (SP1) in 2011, include kernel-level support for AVX through XSAVE extensions, allowing user-mode applications to query and utilize the extensions without crashes.[21] Linux kernels from version 3.6 (2012) onward expose AVX and AVX2 features via the /proc/cpuinfo interface, relying on CPUID queries for runtime detection and enabling optimized libraries like those in glibc.[22] macOS 10.7 Lion (2011) and subsequent releases support AVX on compatible hardware through the XNU kernel's XSAVE handling, ensuring seamless integration for developer tools and applications.[23] Runtime detection of AVX and AVX2 typically involves the CPUID instruction: for AVX, check leaf 1 with ECX bit 28 set; for AVX2, verify leaf 7 (subleaf 0) with EBX bit 5 set, followed by XGETBV to confirm OS support for YMM register states.[24] By 2025, AVX and AVX2 remain ubiquitous in x86 ecosystems, with near-universal adoption in consumer and server hardware from Intel and AMD, though emerging ARM-based alternatives like Apple's M-series incorporate analogous vector extensions without direct x86 compatibility.[25]| Vendor | AVX Introduction | AVX2 Introduction | Key Architectures |
|---|---|---|---|
| Intel | 2011 (Sandy Bridge) | 2013 (Haswell) | Sandy Bridge to Lunar Lake and later |
| AMD | 2011 (Bulldozer, partial) | 2017 (Zen) | Bulldozer to Zen 5 and later |
| VIA | ~2013 (Nano QuadCore) | N/A | Nano series |
| Zhaoxin | ~2013 (early KaiXian) | 2018 (KX-5000) | KaiXian KX-5000 and later |
AVX-512
Core Architecture and Subsets
AVX-512 establishes a 512-bit SIMD foundation that doubles the vector width of AVX2's 256-bit YMM registers, enabling parallel processing of up to 16 single-precision floating-point values or 8 double-precision values per instruction.[26] This architecture introduces 32 dedicated 512-bit ZMM registers (ZMM0 through ZMM31) in 64-bit mode, which subsume the lower 256 bits of YMM registers and the lower 128 bits of XMM registers for backward compatibility.[26] Masking capabilities are provided by 8 dedicated 64-bit opmask registers (K0 through K7), allowing conditional execution of vector elements without branching, which reduces overhead compared to scalar conditional code.[26] For instance, the {z} suffix in instructions enables zeroing masking, where unselected elements are set to zero, while merging masking preserves original values in unselected positions.[26]

The EVEX prefix, a 4-byte encoding scheme, underpins AVX-512's flexibility by supporting vector length independence across 128-bit, 256-bit, and 512-bit operations through a single instruction encoding.[26] This prefix embeds opmask selection, zeroing/merging control, and broadcast functionality for immediate values, allowing a unified opcode to scale across vector lengths via the AVX512VL subset.[26] Unlike AVX2's VEX prefix, EVEX facilitates embedded rounding control and suppression of exceptions, enhancing precision in floating-point computations.[26] Broadcast support, for example, replicates a scalar value across the entire vector, optimizing gather/scatter patterns common in data-parallel workloads.[26]

AVX-512's modularity is achieved through specialized subsets, each extending the core with targeted instructions while maintaining the 512-bit vector framework.[26] The foundational AVX512F subset provides basic arithmetic, logical, and data movement operations for floating-point and integer vectors, serving as the baseline for all AVX-512 implementations.[26] AVX512CD adds conflict detection instructions to identify and resolve intra-vector dependencies, such as duplicate indices in gather operations, improving efficiency in irregular data access patterns.[26] The AVX512ER subset introduces high-accuracy approximations for exponential and reciprocal functions, delivering results with reduced error margins suitable for scientific simulations, though it is primarily available on Intel Xeon Phi processors.[26] Complementing these, AVX512VL enables the same instruction set across shorter vector lengths (128/256 bits), allowing code to run on legacy hardware without recompilation.[26] For granular integer handling, AVX512BW supports byte and word-level operations, including permutations and comparisons, extending AVX2's integer capabilities to 512-bit scales.[26] Similarly, AVX512DQ focuses on doubleword (32-bit) and quadword (64-bit) integer instructions, such as population count and bitwise shifts, optimizing for workloads like cryptography and compression.[26] These subsets collectively enable up to twice the throughput of AVX2 for vectorized code, primarily through wider parallelism and branch-free conditionals, while building on AVX2's integer extensions for seamless migration.[26]

Key Instructions and Encoding
AVX-512 instructions are encoded using the EVEX prefix, a four-byte encoding scheme that extends the VEX prefix used in earlier AVX versions to support 512-bit vector operations, advanced masking, and additional features like broadcasting.[3] The EVEX prefix includes fields such as EVEX.L'L (bits 1-2 in the prefix) to specify vector length: 00b for 128-bit, 01b for 256-bit, and 10b for 512-bit operations on ZMM registers.[3] This allows instructions to operate on up to 16 single-precision floating-point elements or 8 quadwords in a 512-bit vector.[3]

Writemasking is a core feature enabled by the EVEX.aaa field (bits 18-16), which selects one of eight opmask registers (k0 through k7), where k0 disables masking and the others provide per-element control.[3] Masking supports merging (preserving original values in masked lanes) or zeroing (setting masked lanes to zero via the EVEX.z bit at position 23).[3] Broadcasting, controlled by the EVEX.b bit (position 20), replicates a single memory operand across the vector, denoted in syntax as {1to16} for 512-bit single-precision operations.[3] Subsets like AVX-512BW extend these encodings to support byte and word operations, such as packing instructions.[3]

Representative floating-point instructions include VADDPS, which performs packed single-precision addition on 512-bit vectors (16 elements), adding corresponding elements from two source operands and storing the result in the destination.[3] For example, the syntax VADDPS zmm1 {k1}{z}, zmm2, zmm3 adds zmm2 and zmm3 element-wise, applying the k1 mask to update only selected lanes in zmm1, with {z} zeroing masked lanes.[3] Another key instruction is VFMADD132PS, a fused multiply-add operation that computes (zmm1 * zmm3) + zmm2 for 16 single-precision elements, enabling efficient computation in loops with indexing.[3] Its syntax, such as VFMADD132PS zmm1 {k1}, zmm2, zmm3/m512/m32bcst, supports memory broadcasting for scalar inputs.[3]
Integer instructions exemplify 512-bit parallelism, such as VPADDQ, which adds packed 64-bit quadwords (8 elements) from two sources, useful for vectorized integer arithmetic.[3] The masked form VPADDQ zmm1 {k1}, zmm2, zmm3 merges results into zmm1 based on the k1 mask.[3] VPMOVDB packs and truncates the 16 doubleword elements of a 512-bit source to 16 bytes (saturating variants VPMOVSDB and VPMOVUSDB also exist), converting data to narrower formats for storage or further processing; for instance, VPMOVDB xmm1 {k1}{z}, zmm2 writes the result to a 128-bit destination, masking unused lanes.[3]
Gather and scatter operations facilitate non-contiguous memory access, critical for irregular data structures. VPGATHERQD gathers quadwords using 32-bit indices scaled by the memory address, loading up to 8 elements into a 512-bit destination based on the index vector.[3] Syntax like VPGATHERQD zmm1 {k1}, vm32z uses a vector of indices (vm32z) to fetch data, with k1 controlling which gathers occur.[3] Conversely, VSCATTERDPS scatters 16 single-precision floats from a 512-bit source to memory locations determined by dword indices.[3] An example is VSCATTERDPS vm32k {k1}, zmm1, where vm32k provides the index vector and k1 masks the scatters to avoid unnecessary writes.[3] These EVEX-encoded instructions collectively enable conditional, scalable vector processing unique to AVX-512.[3]
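An illustrative C sketch of the gather/scatter operations discussed above, using AVX-512F intrinsics; the index pattern is invented, and the caller is assumed to provide buffers large enough for the touched positions.

```c
#include <immintrin.h>

/* Gather 16 floats from every other position of 'src' and scatter them
 * back to the same positions in 'dst'. The scale of 4 is sizeof(float). */
void gather_scatter(float *dst, const float *src) {
    __m512i idx = _mm512_setr_epi32(0, 2, 4, 6, 8, 10, 12, 14,
                                    16, 18, 20, 22, 24, 26, 28, 30);
    __m512 v = _mm512_i32gather_ps(idx, src, 4);   /* VGATHERDPS (EVEX) */
    _mm512_i32scatter_ps(dst, idx, v, 4);          /* VSCATTERDPS */
}
```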
Hardware and Software Support for AVX-512
AVX-512 was first implemented in hardware with Intel's Knights Landing processors in 2016, providing full support for the foundational subsets including F (foundation), CD (conflict detection), ER (exponential and reciprocal), and PF (prefetch). Subsequent Intel server and high-end desktop processors introduced partial implementations, with Skylake-SP and Skylake-X in 2017 supporting subsets such as F, CD, VL (vector length extensions), BW (byte and word), and DQ (doubleword and quadword). Cascade Lake processors in 2019 added specialized subsets like VNNI (vector neural network instructions) to F, CD, and VL, enhancing deep learning workloads. By 2023, Sapphire Rapids extended support to include advanced features like BF16 (bfloat16) instructions alongside core subsets, maintaining 512-bit vector processing across two FMA units per core. The following table summarizes key Intel processor families and their supported AVX-512 subsets, highlighting the fragmentation across implementations:

| Processor Family | Release Year | Supported Subsets |
|---|---|---|
| Knights Landing (Xeon Phi x200) | 2016 | F, CD, ER, PF |
| Skylake-SP/X (Xeon W) | 2017 | F, CD, VL, BW, DQ |
| Cascade Lake (Xeon Scalable) | 2019 | F, CD, VL, BW, DQ, VNNI |
| Ice Lake-SP (3rd Gen Xeon) | 2021 | F, CD, VL, BW, DQ, VNNI, IFMA, VBMI |
| Sapphire Rapids (4th Gen Xeon) | 2023 | F, CD, VL, BW, DQ, VNNI, BF16, FP16 |
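Because of this fragmentation, software typically probes each subset at runtime before dispatching AVX-512 code paths. A minimal sketch using the GCC/Clang built-in feature test is shown below; the feature-name strings are the compiler's, not part of the ISA, and the dispatch decisions are illustrative:

```c
#include <stdio.h>

/* Minimal runtime dispatch sketch for fragmented AVX-512 support,
   using the GCC/Clang built-in CPU feature test. */
int main(void) {
    __builtin_cpu_init();
    int f    = __builtin_cpu_supports("avx512f");
    int vl   = __builtin_cpu_supports("avx512vl");
    int bwdq = __builtin_cpu_supports("avx512bw") &&
               __builtin_cpu_supports("avx512dq");
    int vnni = __builtin_cpu_supports("avx512vnni");

    if (f && vl && bwdq)
        printf("Skylake-SP-class AVX-512 baseline available\n");
    if (vnni)
        printf("AVX-512 VNNI available (Cascade Lake or later)\n");
    if (!f)
        printf("No AVX-512: fall back to an AVX2 or SSE code path\n");
    return 0;
}
```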
Specialized Vector Extensions
AVX-VNNI for Neural Networks
AVX-VNNI, or Vector Neural Network Instructions within the Advanced Vector Extensions framework, provides specialized instructions for accelerating low-precision integer dot products in neural network inference tasks. These instructions target quantized deep learning models, where INT8 and INT16 data types replace higher-precision formats to reduce memory footprint and boost computational throughput while maintaining acceptable accuracy. By fusing multiply and accumulate operations, AVX-VNNI optimizes the core matrix multiplication kernels prevalent in convolutional neural networks, enabling faster AI inference on CPUs.[13] The VNNI instructions originated as a subset of the AVX-512 instruction set (AVX512-VNNI), featuring key instructions such as VPDPBUSD for byte (INT8) dot products and VPDPWSSD for signed word (INT16) dot products on 512-bit vectors using EVEX encoding; a later VEX-encoded AVX-VNNI extension provides 256-bit versions for broader compatibility. The VPDPBUSD instruction multiplies unsigned bytes from one source with signed bytes from another, sums four such products per 32-bit lane, and accumulates the result into a signed doubleword destination (a minimal intrinsics sketch follows this paragraph). Similarly, VPDPWSSD performs signed word multiplications and summations in the same fused manner. This design fuses several multiplies and adds per 32-bit lane (four byte products for VPDPBUSD, two word products for VPDPWSSD), streamlining what would otherwise require multiple separate multiply and add operations.[13][27] VNNI-style instructions first shipped in late 2017 with the Intel Knights Mill processor as part of its deep learning optimizations, where they deliver up to four times the deep learning peak performance of prior Xeon Phi generations, primarily through enhanced integer throughput for training and inference workloads. The extension builds on the fused multiply-add capabilities of the AVX-512 F subset, adapting them for integer neural network computations.[28]
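The fused dot-product step can be sketched from C as below; this is illustrative only, assuming AVX512-VNNI hardware and a compiler flag such as -mavx512vnni, and uses the _mm512_dpbusd_epi32 intrinsic that maps to VPDPBUSD:

```c
#include <immintrin.h>
#include <stdio.h>

/* Compile with e.g. gcc -mavx512f -mavx512vnni; requires AVX512-VNNI hardware. */
int main(void) {
    /* 64 INT8 activations (unsigned) and 64 INT8 weights (signed). */
    __m512i act = _mm512_set1_epi8(3);        /* treated as unsigned by VPDPBUSD */
    __m512i wgt = _mm512_set1_epi8(-2);       /* treated as signed */
    __m512i acc = _mm512_setzero_si512();     /* 16 INT32 accumulators */

    /* VPDPBUSD: per 32-bit lane, the sum of four u8*s8 products is added
       to the existing INT32 accumulator. */
    acc = _mm512_dpbusd_epi32(acc, act, wgt);

    int out[16];
    _mm512_storeu_si512(out, acc);
    printf("lane0 = %d\n", out[0]);           /* 4 * (3 * -2) = -24 */
    return 0;
}
```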
AVX-IFMA for Integer Operations
AVX-IFMA, or Advanced Vector Extensions Integer Fused Multiply-Add, is a specialized subset of the AVX-512 instruction set designed to accelerate high-throughput integer arithmetic through fused multiply-accumulate operations on fixed-point numbers.[13] This extension enables precise computations without intermediate rounding, making it suitable for applications requiring exact integer results.[13] First implemented in Intel's Cannon Lake processors in 2018, AVX-IFMA provides hardware support for efficient processing of large datasets in integer domains.[29] The core instructions are VPMADD52LUQ and VPMADD52HUQ, which perform unsigned 52-bit multiply-accumulate operations on 512-bit vectors.[13] VPMADD52LUQ multiplies the unsigned 52-bit integers held in each 64-bit element of two source vectors and adds the low 52 bits of the 104-bit product to the corresponding element of the destination accumulator, while VPMADD52HUQ adds the high 52 bits of the same products.[13] Operating on 512-bit wide ZMM registers, these instructions process eight 64-bit quadwords simultaneously; a low/high pair of instructions yields the full 104-bit products for all eight multiplications (see the sketch after this paragraph).[13] The 52-bit operand width leaves headroom so that the 104-bit intermediate products can be split across a low and a high 64-bit accumulator without loss of precision.[13] Unlike floating-point operations, AVX-IFMA avoids rounding errors entirely, as it performs exact integer arithmetic without exponent handling or denormalized values.[13] In contrast to the floating-point FMA3 instructions introduced in earlier AVX versions, AVX-IFMA is exclusively for integers and complements FMA3 by targeting fixed-point workloads where precision is paramount.[13] It supports masking from the broader AVX-512 framework to enable conditional execution on vector elements.[13] Primary applications include cryptography, such as modular multiplication in RSA and elliptic curve cryptography (ECC), as well as hashing algorithms like SHA-512, where high-speed integer operations enhance throughput for multi-buffer processing.[29] These capabilities have been leveraged in optimized libraries for secure data streaming and financial computations requiring robust integer precision.[29]
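The low/high pairing can be sketched with intrinsics as below; this is illustrative only, assuming AVX-512 IFMA hardware and a compiler flag such as -mavx512ifma, and uses _mm512_madd52lo_epu64 and _mm512_madd52hi_epu64 (VPMADD52LUQ/VPMADD52HUQ) with operands kept below 2^52:

```c
#include <immintrin.h>
#include <stdio.h>

/* Compile with e.g. gcc -mavx512f -mavx512ifma; requires AVX-512 IFMA hardware. */
int main(void) {
    /* Eight 52-bit operands per vector; values must stay below 2^52. */
    __m512i b = _mm512_set1_epi64(3000000000ULL);
    __m512i c = _mm512_set1_epi64(5000000000ULL);
    __m512i lo_acc = _mm512_setzero_si512();
    __m512i hi_acc = _mm512_setzero_si512();

    /* Accumulate the low and high 52-bit halves of the 104-bit products
       b*c into two separate accumulators (VPMADD52LUQ / VPMADD52HUQ). */
    lo_acc = _mm512_madd52lo_epu64(lo_acc, b, c);
    hi_acc = _mm512_madd52hi_epu64(hi_acc, b, c);

    unsigned long long lo[8], hi[8];
    _mm512_storeu_si512(lo, lo_acc);
    _mm512_storeu_si512(hi, hi_acc);

    /* Reassemble the full product for lane 0: (hi << 52) | lo. */
    unsigned __int128 full = ((unsigned __int128)hi[0] << 52) | lo[0];
    printf("lane0 product = %llu\n", (unsigned long long)full);  /* 15000000000000000000 */
    return 0;
}
```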
AVX10
Design Goals and Changes from AVX-512
Intel announced AVX10 in July 2023 as a successor to AVX-512, aiming to unify the fragmented vector instruction set architecture across its processors and enable consistent 512-bit vector support in hybrid architectures featuring both performance (P-) and efficiency (E-) cores.[30] The primary design goals included reducing developer complexity by converging all major AVX-512 subsets into a single, mandatory ISA without optional fragments, thereby addressing the slow adoption of AVX-512 caused by power-consumption issues and compatibility challenges in heterogeneous core designs.[31] This unification simplifies feature detection through a single CPUID leaf (24H), which provides versioned enumeration for supported vector widths (128, 256, or 512 bits), eliminating the need for over 20 discrete AVX-512 feature flags.[31] Key changes from AVX-512 involve mandating AVX10/256 support on all processors while initially leaving AVX10/512 optional (expected only on P-core-based designs), and ensuring backward compatibility with all existing SSE, AVX, AVX2, and AVX-512 instructions via VEX and EVEX encodings.[31] AVX10 deprecates AVX-512-specific modes by freezing their CPUID flags and routing all future vector extensions through the AVX10 versioning scheme, such as AVX10.1 (introduced in 2024 with initial support in Granite Rapids processors) and AVX10.2 (specification released in July 2024).[32] A significant revision in March 2025 introduced a breaking change that removed the 256-bit-only mode, requiring full 512-bit support on all cores that implement AVX10.2, to further streamline hybrid-core compatibility and boost performance portability.[33] These updates directly tackle AVX-512's adoption barriers, including high power draw leading to downclocking on non-server SKUs and inconsistent support across core types, by providing a converged ISA that prioritizes efficiency and broad applicability.[34] Overall, AVX10 maintains full backward compatibility for legacy applications while evolving the architecture to support modern workloads like AI and HPC without the fragmentation of prior extensions.[35]
New Instructions and Datatypes
AVX10 introduces support for low-precision floating-point datatypes optimized for artificial intelligence and media processing workloads, including FP8 formats in E4M3 and E5M2 variants. The E4M3 format allocates 1 sign bit, 4 exponent bits, and 3 mantissa bits, while E5M2 uses 1 sign bit, 5 exponent bits, and 2 mantissa bits, adhering to the Open Compute Project's Open Floating Point 8 specification for enhanced memory efficiency and computational density in neural networks.[36] These datatypes enable reduced precision operations without significant accuracy loss in training and inference tasks. Additionally, AVX10 expands BFloat16 (BF16) support, a 16-bit format with an 8-bit exponent and 7-bit mantissa, to facilitate seamless integration in AI accelerators by providing direct vectorized arithmetic and conversions.[31] Key conversions include VCVTBF162PS, which transforms packed BF16 elements to single-precision FP32 across 128-, 256-, or 512-bit vectors, supporting writemasks for selective updates and aiding precision scaling in mixed-format computations.[31] This instruction operates via EVEX encoding, allowing merging or zeroing of masked elements, and is essential for accumulating low-precision results into higher-precision accumulators.[31] Among the novel arithmetic instructions, VADDBF16 performs packed addition on BF16 vectors, computing dest = src1 + src2 for each element while preserving the BF16 format, with support for vector lengths up to 512 bits and writemasking.[31] Similarly, VMULBF16 executes packed multiplication, yielding dest = src1 * src2, enabling efficient element-wise operations in matrix multiplications for deep learning models.[31] For dot product computations, VDPPHPS computes the vector neural network instruction (VNNI) dot product of FP16 pairs into FP32 accumulators, inheriting from prior VNNI designs, via dest += (src1[2i] * src2[2i]) + (src1[2i+1] * src2[2i+1]).[31] In media applications, VMPSADBW supports 512-bit multiple sum of absolute differences on byte elements, useful for motion estimation in video encoding, by accumulating shuffled absolute differences controlled by an immediate operand.[31] Minimum and maximum instructions adhere to IEEE-754-2019 semantics for handling NaNs and infinities. VMINMAXPH operates on packed half-precision FP16 elements, selecting the minimum or maximum per pair while propagating NaNs appropriately.[36] Likewise, VMINMAXBF16 applies to BF16 vectors, ensuring consistent behavior across precisions in AI normalization tasks.[36] Scalar comparison instructions simplify floating-point comparisons without raising exceptions. VCOMXSD compares scalar double-precision values and updates EFLAGS accordingly, while VCOMXSS and VCOMXSH handle single- and half-precision scalars, respectively, providing exception-free status reporting for control flow in vectorized code.[31] Data movement enhancements include VMOVD and VMOVW, which copy 32-bit doubleword or 16-bit word data to XMM registers, zero-extending the upper bits for partial vector loads that maintain compatibility with wider operations.[36] For BF16 dot products in 512-bit vectors, which process 32 elements, software can implement accumulation as follows:
acc += ∑_{i=0}^{31} (a[i] * b[i])
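Since widely deployed compilers may not yet expose AVX10.2 BF16 intrinsics, the semantics of this accumulation can be illustrated with a plain-C reference model; the helper names bf16_to_f32, f32_to_bf16, and bf16_dot32 below are hypothetical, and actual hardware may accumulate in a different order or internal precision:

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* BF16 is the top 16 bits of an IEEE-754 binary32 value. */
static float bf16_to_f32(uint16_t b) {
    uint32_t u = (uint32_t)b << 16;
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

static uint16_t f32_to_bf16(float f) {      /* truncating conversion */
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return (uint16_t)(u >> 16);
}

/* Reference model of the 32-element BF16 dot product:
   acc += sum_{i=0}^{31} a[i] * b[i], accumulated in FP32. */
static float bf16_dot32(float acc, const uint16_t a[32], const uint16_t b[32]) {
    for (int i = 0; i < 32; i++)
        acc += bf16_to_f32(a[i]) * bf16_to_f32(b[i]);
    return acc;
}

int main(void) {
    uint16_t a[32], b[32];
    for (int i = 0; i < 32; i++) {
        a[i] = f32_to_bf16(1.0f);
        b[i] = f32_to_bf16(0.5f);
    }
    printf("dot = %.2f\n", bf16_dot32(0.0f, a, b));   /* 16.00 */
    return 0;
}
```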
Hardware Implementation and Compatibility
The initial rollout of AVX10 began with Intel's Granite Rapids processors, launched in September 2024 as part of the sixth-generation Xeon Scalable family, which introduced AVX10.1 support exclusively in 512-bit form for server workloads.[30] Subsequent expansion is expected with the Diamond Rapids processors in 2026, also Xeon-based, adding AVX10.2 features including new instructions for AI and media processing while maintaining 512-bit vector execution as the maximum width.[37] In line with this evolution, Intel has mandated 512-bit vector support across all performance (P) and efficiency (E) cores in AVX10 implementations, eliminating prior options for 256-bit-only modes to ensure uniform ISA convergence.[33] Software enumeration of AVX10 capabilities relies on the CPUID instruction, where leaf 07H with subleaf ECX=01H sets EDX bit 19 to indicate general AVX10 support, while the dedicated converged vector ISA leaf 24H (EAX=24H, ECX=00H) provides EBX[7:0] ≥ 2 for AVX10.2 versioning and enumerates maximum vector lengths via the CPU_SUPPORTED_VECTOR_LENGTHS field.[36] Support for specialized datatypes like BF16 and FP8 (in E4M3 and E5M2 formats) is similarly detected through these AVX10.2 feature flags in leaf 24H, enabling runtime verification of instructions such as VADDNEPBF16 or FP8 conversions across 128-, 256-, and 512-bit widths.[36] AVX10 maintains backward compatibility with prior vector extensions by retaining EVEX encoding for all operations, allowing seamless execution of 128-bit (XMM), 256-bit (YMM), and 512-bit (ZMM) instructions on capable hardware without architectural breaks from AVX-512.[36] Runtime checks via CPUID leaf 24H ensure software can query supported vector lengths and features dynamically, preventing invalid executions on mismatched cores; a minimal detection sketch in C follows this paragraph. As of November 2025, Intel has confirmed AVX10.2 support, alongside the APX and AMX extensions, in Nova Lake processors, anticipated for 2026.[4] Toolchain advancements have progressed, with the Netwide Assembler (NASM) version 3.00, released in October 2025, providing full syntactic and encoding support for AVX10 instructions to facilitate development.[38] Regarding power efficiency, AVX10 implementations demonstrate improvements over AVX-512 by decoupling optional features like masking and broadcasting from mandatory wide-vector execution, reducing thermal overhead and downclocking penalties in mixed P/E-core environments.[39] For AMD platforms, potential integration of AVX10 remains under consideration for the Zen 6 architecture, expected around 2026 or later, though no firm commitments have been announced as of November 2025.[40] This convergence from fragmented AVX-512 subsets positions AVX10 as a unified extension for future x86 designs.[36]
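The enumeration described above can be sketched as follows; this is an illustration only, for GCC/Clang on x86-64, using the __get_cpuid_count helper from <cpuid.h> and the leaf and bit assignments quoted in this section:

```c
#include <cpuid.h>
#include <stdio.h>

/* Minimal AVX10 detection sketch: CPUID.(EAX=07H,ECX=01H):EDX[19] signals
   AVX10, and leaf 24H reports the converged-ISA version in EBX[7:0]. */
int main(void) {
    unsigned eax, ebx, ecx, edx;

    if (!__get_cpuid_count(0x07, 0x01, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 07H subleaf 01H not available\n");
        return 0;
    }
    int has_avx10 = (edx >> 19) & 1;
    printf("AVX10 supported: %s\n", has_avx10 ? "yes" : "no");

    if (has_avx10 && __get_cpuid_count(0x24, 0x00, &eax, &ebx, &ecx, &edx)) {
        unsigned version = ebx & 0xFF;          /* 1 = AVX10.1, 2 = AVX10.2 */
        printf("AVX10 converged ISA version: %u\n", version);
    }
    return 0;
}
```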
APX
Core Features and Register Expansions
Intel Advanced Performance Extensions (APX) primarily innovates by doubling the number of general-purpose registers (GPRs) from 16 to 32, adding extended GPRs R16 through R31, each 64-bit wide and accessible only in 64-bit mode. These additional registers are encoded using a new REX2 prefix and leverage encoding space previously allocated to deprecated features like Intel MPX, enabling compilers to retain more values in registers and reduce memory accesses. The vector register file itself is unchanged; instead, APX extends the EVEX prefix so that legacy scalar integer instructions can address the new registers and gain new forms, preserving compatibility with prior AVX instructions while supporting scalable vector operations.[10][41] APX introduces specialized instruction qualifiers to enhance efficiency, including NF (No Flags), which suppresses updates to EFLAGS status flags for arithmetic operations like ADD and SUB, avoiding unnecessary flag computations in pipelines. Complementing this is ZU (Zero Upper), which automatically clears the otherwise unwritten upper bits of the destination register, eliminating explicit zeroing instructions and reducing code size. For control flow optimization, APX adds CCMP and CTEST instructions, which perform conditional compares and tests based on a source condition code (SCC), updating flags without branches and facilitating if-conversion to minimize misprediction costs. These features collectively support non-branching conditional moves, improving branch-heavy code paths.[41][42] The core design goals of APX target a 10% reduction in loads and over 20% fewer stores in compiled code, as measured in simulations of the SPEC CPU 2017 Integer benchmark, achieved through the expanded register file and instructions like PUSH2/POP2 for dual-register transfers. This register expansion also promotes scalar-vector fusion by enabling three-operand forms for legacy scalar integer instructions via the EVEX prefix, allowing seamless integration of scalar and vector computations without intermediate register spills. APX was first announced by Intel in July 2023, with the complete specification published in July 2025 (Revision 7.0).[10][41] Compiler support for APX is comprehensive as of November 2025; GCC 15 enables APX features, including CCMP/CTEST, NF, and ZU, through the -mapxf flag. These enhancements position APX to improve overall instruction-level parallelism and code density in general-purpose workloads.[43][41]
Encoding and Instruction Semantics
The Intel Advanced Performance Extensions (APX) introduce new encoding formats to support an expanded set of 32 general-purpose registers (GPRs) and enhanced scalar instruction capabilities in 64-bit mode, utilizing the REX2 prefix and extensions to the EVEX prefix. The REX2 prefix, a 2-byte encoding starting with 0xD5, provides additional bits (R4, X4, B4) to address the extended GPRs (R16–R31), enabling instructions to reference up to 32 registers without legacy conflicts. This is complemented by EVEX map 4, which repurposes bits in the EVEX prefix for APX-specific payloads, including controls like ND (New Destination) and NF (No Flags) to modify instruction behavior while maintaining compatibility with existing x86-64 encodings.[41] APX supports three-operand integer operations through the NDD (new data destination) format, allowing instructions such as ADD, SUB, and OR to specify a distinct destination register separate from the source operands, which reduces the need for temporary registers and lowers micro-op (uop) counts for common arithmetic patterns. For example, the instruction ADD R16, R17, R18 encodes the addition of R17 and R18 into R16 using the extended EVEX encoding, with EVEX.ND=1 indicating the presence of the new destination operand. The ZU suffix further optimizes this by explicitly zeroing the unwritten upper bits of the 64-bit destination register in operations like SETcc.zu or IMUL, eliminating manual clearing steps and improving code density. Conditional instructions such as CCMP and CTEST likewise use EVEX payload bits to encode their source condition codes (SCC).[41] In terms of instruction semantics, APX emphasizes fault suppression and efficiency in conditional execution. The CFCMOV (flag-conditional move) instructions, such as CFCMOVB rv, rv/mv, perform moves based on flag conditions (e.g., below for CFCMOVB) while suppressing exceptions like debug or memory faults if the condition is false, enabling safer speculative execution patterns with reduced uops compared to traditional CMOV. Similarly, PUSH2 and POP2 semantics allow pushing or popping two GPRs in a single instruction with 16-byte stack alignment, optimizing register save/restore sequences and cutting uops for function prologs/epilogs. No legacy mode conflicts arise, as APX features require the APX_F bit in CPUID and the corresponding state bit in XCR0[44] to be enabled, restricting them exclusively to 64-bit mode.[41] By November 2025, full APX support has been integrated into assemblers, with NASM version 3.00 providing syntax for these encodings, including three-operand forms and ZU suffixes, facilitating developer adoption without custom tooling. This scalar encoding foundation also benefits vector code by easing register pressure in mixed workloads; a plain-C illustration of the kind of code these features target follows this paragraph.[41]
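As a rough illustration, the sketch below shows a branchy clamp loop of the sort an APX-aware compiler (for example GCC 15 with the -mapxf flag mentioned above) can if-convert into CCMP/CFCMOV sequences instead of conditional branches; the function name and values are hypothetical, the code is ordinary portable C, and whether CCMP/CFCMOV are actually emitted depends entirely on the toolchain:

```c
#include <stdint.h>

/* Hypothetical hot loop: clamp values and count how many were clamped.
   With branches, mispredictions dominate on unpredictable data; an
   APX-aware compiler can turn both conditions into CCMP/CFCMOV
   sequences (branchless, and flag-update-free with the NF forms). */
int64_t clamp_and_count(int64_t *v, int64_t n, int64_t lo, int64_t hi) {
    int64_t clamped = 0;
    for (int64_t i = 0; i < n; i++) {
        int64_t x = v[i];
        if (x < lo) { x = lo; clamped++; }   /* candidate for if-conversion */
        if (x > hi) { x = hi; clamped++; }
        v[i] = x;
    }
    return clamped;
}
```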
Integration with AVX10
The integration of Advanced Performance Extensions (APX) with AVX10 leverages the enhanced vector extension (EVEX) prefix to unify scalar and vector operations, allowing access to 32 general-purpose registers (GPRs) within AVX10's 512-bit vector framework. This synergy enables developers to write hybrid code that benefits from expanded scalar register availability during vector-heavy loops, reducing the need for memory spills and reloads that commonly occur in mixed scalar-vector workloads. By promoting legacy integer instructions to EVEX encoding, APX facilitates seamless if-conversion and conditional execution alongside AVX10's scalable vector instructions, minimizing branch mispredictions without requiring separate code paths.[10][45] Hardware implementations combining APX and AVX10 are confirmed for Intel's Nova Lake processors, expected in the second half of 2026. Feature detection occurs via CPUID leaf 7 (EAX=7, ECX=1), where EDX bit 21 indicates APX_F support, alongside EDX bit 19 for AVX10 and CPUID leaf 24H for vector-width enumeration. Panther Lake, announced in 2025 and expected in 2026, does not incorporate APX or AVX10, focusing instead on prior-generation vector capabilities.[11][46][45][47][48] Software ecosystems have advanced rapidly to support this integration, with GCC 15 providing APX enablement through the -mapxf compiler flag and corresponding options for AVX10.2, alongside enhanced auto-vectorization for mixed workloads. Clang/LLVM similarly incorporates APX via EVEX and REX2 prefixes, with recompilation alone sufficient for most applications without source modifications. Operating system kernels, such as Linux, detect APX via extended CPUID enumeration and XSAVE management for the additional 16 extended GPRs (R16-R31), with patches merged in early 2025 to handle context switching and deprecate conflicting features like MPX.[49][50][51][45] In mixed scalar-vector workloads, APX-AVX10 integration yields projected performance uplifts of approximately 10-20% through reduced loads (by 10%) and stores (by over 20%), as simulated on SPEC CPU 2017 integer benchmarks, by alleviating register pressure in vector loops. These gains stem from fewer instructions overall (about 10% reduction) and improved power efficiency, though real-world benchmarks remain sparse due to the absence of shipping hardware in 2025. Early projections indicate particular benefits for dynamic languages and high-performance computing applications that blend general-purpose and vector processing.[10][52]
Applications and Performance
Major Use Cases
Advanced Vector Extensions (AVX) have become integral to accelerating artificial intelligence and machine learning workloads, particularly through instructions like AVX-512 VNNI and BF16 support in frameworks such as TensorFlow and PyTorch. These extensions enable efficient low-precision computations for neural network inference, where VNNI instructions perform dot-product accumulations on 8-bit integers, providing significant speedups in quantized models compared to prior generations without such hardware acceleration. For instance, PyTorch leverages AVX-512 BF16 for mixed-precision training and inference, reducing memory usage while maintaining accuracy in large language models. TensorFlow similarly benefits from these optimizations, with integrated support for VNNI enabling faster matrix multiplications in convolutional neural networks.

In high-performance computing (HPC) and scientific simulations, AVX extensions enhance dense linear algebra and particle-based modeling. The AVX-512 FMA instructions double the throughput of floating-point multiply-accumulate operations, significantly boosting performance in benchmarks like LINPACK, where systems with AVX-512 achieve up to 2x higher floating-point operation rates than AVX2-equipped processors in dense matrix solving. For molecular dynamics simulations, such as those in NAMD software, AVX-512's gather and scatter instructions facilitate efficient non-contiguous memory access for atomic coordinate updates, accelerating trajectory computations on supported hardware through optimized vectorization of force calculations.

Multimedia processing and cryptographic applications also rely on AVX for parallel data handling. In video encoding, AVX2's gather instructions improve performance by loading scattered pixel data into vectors for SIMD operations, as seen in codecs like x264, where they reduce encoding time for high-resolution streams by enabling faster motion estimation without contiguous memory layouts. For cryptography, OpenSSL uses AES-NI for bulk encryption and AVX-512 IFMA for the big-number integer arithmetic behind public-key operations, speeding up TLS handshakes and secure communications compared to software-only implementations.

As of 2025, AVX10 instructions, implemented in server processors since 2024, introduce FP8 datatypes tailored for generative AI, enabling compact representations in transformer models to reduce memory bandwidth while sustaining inference throughput in tools like Intel's OpenVINO for large-scale text generation. Similarly, the upcoming Advanced Performance Extensions (APX), expected in 2026, will expand the register file to support more efficient scalar and vector code in database engines, accelerating analytical queries on columnar stores by minimizing register spills during predicate evaluations and aggregations.

Notable software adoptions include Microsoft Teams, which utilizes AVX2 for real-time virtual background effects and noise suppression in video calls, ensuring smooth performance on compatible CPUs. Blender's Cycles renderer incorporates AVX-512 for accelerated ray tracing and denoising, delivering improved render times on multi-core systems for complex scenes involving volumetric simulations.
Power Consumption and Downclocking Effects
Advanced Vector Extensions (AVX) instructions, particularly AVX-512, significantly increase power consumption compared to earlier SIMD extensions like SSE, leading to thermal constraints and frequency throttling on Intel processors starting from Skylake architectures. AVX-512 workloads can draw up to 2.5 times the power of SSE baselines due to the wider 512-bit vector operations and higher computational density, which elevate current demands and heat generation. This power surge triggers dynamic frequency scaling to maintain thermal limits, with L1 throttling reducing clock speeds to 85% of the base frequency and L2 throttling dropping them further to 70%, especially in sustained heavy vector computations. AVX2 instructions exhibit a milder effect, consuming approximately 1.5 times the power of SSE while applying similar but less severe throttling levels.[53]

These throttling mechanisms activate based on instruction width and duration, with AVX-512 engaging after brief periods of upper register usage (e.g., bits 511:256), causing temporary halts of 10-20 microseconds during voltage and frequency adjustments. In mixed workloads, such as those common in high-performance computing (HPC), this results in unpredictable performance variability, as non-AVX code on the same core experiences collateral downclocking. Benchmarks of sustained AVX-512 operations, like dense matrix multiplications, demonstrate 20-30% overall performance degradation from thermal limits, even after accounting for vectorization gains, highlighting the trade-off between peak throughput and sustained efficiency.[54][55]

To mitigate these effects, operating systems and tools provide frequency management options, such as Linux's msr-tools for adjusting Model-Specific Registers (MSRs) like IA32_TURBO_RATIO_LIMIT to apply AVX offsets, allowing manual tuning of throttling thresholds per instruction level. Compilers, including Intel's oneAPI and LLVM-based tools, support auto-dispatch mechanisms that runtime-select narrower vector paths (e.g., AVX2 over AVX-512) on throttling-prone hardware, preserving higher frequencies for scalar or lighter SIMD code without sacrificing compatibility.[56]

AVX10, implemented starting in 2024 for server processors and expanding to consumer lines in 2026, addresses these challenges through refined EVEX encoding, which streamlines 512-bit operations for both performance (P) and efficiency (E) cores, reducing overhead and enabling more power-efficient vector execution compared to legacy AVX-512. By unifying vector lengths up to 512 bits with backward compatibility, AVX10 minimizes transition stalls and dynamic power spikes, potentially lowering consumption in vector-heavy tasks by optimizing register masking and vector-length control. Complementing this, Advanced Performance Extensions (APX), expected in 2026, will reduce scalar overhead in hybrid workloads by expanding general-purpose registers from 16 to 32, cutting loads by 10% and stores by over 20% in compiled code, which translates to lower dynamic power usage since register operations are more efficient than memory accesses.[57][10][39]
References
- https://en.wikichip.org/wiki/x86/avx512_vnni
