Advanced Vector Extensions
from Wikipedia

Advanced Vector Extensions (AVX, also known as Gesher New Instructions and then Sandy Bridge New Instructions) are SIMD extensions to the x86 instruction set architecture for microprocessors from Intel and Advanced Micro Devices (AMD). They were proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge[1] microarchitecture shipping in Q1 2011 and later by AMD with the Bulldozer[2] microarchitecture shipping in Q4 2011. AVX provides new features, new instructions, and a new coding scheme.

AVX2 (also known as Haswell New Instructions) expands most integer commands to 256 bits and introduces new instructions. They were first supported by Intel with the Haswell microarchitecture, which shipped in 2013.

AVX-512 expands AVX to 512-bit support using a new EVEX prefix encoding proposed by Intel in July 2013 and first supported by Intel with the Knights Landing co-processor, which shipped in 2016.[3][4] In conventional processors, AVX-512 was introduced with Skylake server and HEDT processors in 2017.

Advanced Vector Extensions

AVX uses sixteen YMM registers to perform a single instruction on multiple pieces of data (see SIMD). Each YMM register can hold and do simultaneous operations (math) on:

  • eight 32-bit single-precision floating-point numbers or
  • four 64-bit double-precision floating-point numbers.

The width of the SIMD registers is increased from 128 bits to 256 bits, and renamed from XMM0–XMM7 to YMM0–YMM7 (in x86-64 mode, from XMM0–XMM15 to YMM0–YMM15). The legacy SSE instructions can still be utilized via the VEX prefix to operate on the lower 128 bits of the YMM registers.

AVX-512 register scheme as extension from the AVX (YMM0-YMM15) and SSE (XMM0-XMM15) registers
Bits:    511–256      255–128      127–0
         ZMM0         YMM0         XMM0
         ZMM1         YMM1         XMM1
         ⋮            ⋮            ⋮
         ZMM31        YMM31        XMM31

Each 512-bit ZMM register contains the corresponding 256-bit YMM register as its lower half, which in turn contains the corresponding 128-bit XMM register as its lower half.

AVX introduces a three-operand SIMD instruction format called the VEX coding scheme, where the destination register is distinct from the two source operands. For example, an SSE instruction using the conventional two-operand form a = a + b can now use a non-destructive three-operand form c = a + b, preserving both source operands. Originally, AVX's three-operand format was limited to instructions with SIMD operands (YMM), and did not include instructions with general-purpose registers (e.g. EAX). The format was later used for coding new instructions on general-purpose registers in extensions such as BMI. VEX coding is also used for instructions operating on the k0-k7 mask registers that were introduced with AVX-512.

The alignment requirement of SIMD memory operands is relaxed.[5] Unlike their non-VEX coded counterparts, most VEX coded vector instructions no longer require their memory operands to be aligned to the vector size. Notably, the VMOVDQA instruction still requires its memory operand to be aligned.

The new VEX coding scheme introduces a new set of code prefixes that extends the opcode space, allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The VEX prefix can also be used on the legacy SSE instructions giving them a three-operand form, and making them interact more efficiently with AVX instructions without the need for VZEROUPPER and VZEROALL.

The AVX instructions support both 128-bit and 256-bit SIMD. The 128-bit versions can be useful for improving old code without needing to widen the vectorization, and for avoiding the penalty of going from SSE to AVX; they are also faster on some early AMD implementations of AVX. This mode is sometimes known as AVX-128.[6]
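The three-operand, non-destructive form can be used from C through compiler intrinsics. The following is a minimal sketch, not taken from the article: the function and array names are invented, and it assumes a compiler with <immintrin.h> support and the -mavx (or equivalent) flag.

    #include <immintrin.h>

    /* Adds two float arrays eight elements at a time using 256-bit AVX.
       The compiler emits the non-destructive three-operand VADDPS, so the
       inputs a and b are left untouched. */
    void add_arrays_avx(const float *a, const float *b, float *c, int n)
    {
        int i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);  /* unaligned loads are allowed for VEX-coded ops */
            __m256 vb = _mm256_loadu_ps(b + i);
            __m256 vc = _mm256_add_ps(va, vb);   /* c = a + b, both sources preserved */
            _mm256_storeu_ps(c + i, vc);
        }
        for (; i < n; i++)                       /* scalar tail for the remaining elements */
            c[i] = a[i] + b[i];
    }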

New instructions

These AVX instructions are in addition to the ones that are 256-bit extensions of the legacy 128-bit SSE instructions; most are usable on both 128-bit and 256-bit operands.

Instruction Description
VBROADCASTSS, VBROADCASTSD, VBROADCASTF128 Copy a 32-bit, 64-bit or 128-bit memory operand to all elements of an XMM or YMM vector register.
VINSERTF128 Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged.
VEXTRACTF128 Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand.
VMASKMOVPS, VMASKMOVPD Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. On the AMD Jaguar processor architecture, this instruction with a memory source operand takes more than 300 clock cycles when the mask is zero, in which case the instruction should do nothing. This appears to be a design flaw.[7]
VPERMILPS, VPERMILPD Permute In-Lane. Shuffle the 32-bit or 64-bit vector elements of one input operand. These are in-lane 256-bit instructions, meaning that they operate on all 256 bits with two separate 128-bit shuffles, so they can not shuffle across the 128-bit lanes.[8]
VPERM2F128 Shuffle the four 128-bit vector elements of two 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector.
VTESTPS, VTESTPD Packed bit test of the packed single-precision or double-precision floating-point sign bits, setting or clearing the ZF flag based on AND and CF flag based on ANDN.
VZEROALL Set all YMM registers to zero and tag them as unused. Used when switching between 128-bit use and 256-bit use.
VZEROUPPER Set the upper half of all YMM registers to zero. Used when switching between 128-bit use and 256-bit use.
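As an illustration of the conditional loads and stores in the table above, the following C sketch uses the VMASKMOVPS intrinsics; the function name and the mask-table trick are illustrative assumptions, not part of the instruction set description.

    #include <immintrin.h>

    /* Copies only the first `count` floats (count <= 8) of a row using
       VMASKMOVPS. Lanes whose mask sign bit is clear are neither read nor
       written, so trailing memory is left untouched. */
    void copy_partial_row(const float *src, float *dst, int count)
    {
        /* A sign bit set in lane i selects that lane; the sliding-window table
           avoids needing AVX2 integer compares to build the mask. */
        static const int lane_on[16] = { -1, -1, -1, -1, -1, -1, -1, -1,
                                          0,  0,  0,  0,  0,  0,  0,  0 };
        __m256i mask = _mm256_loadu_si256((const __m256i *)(lane_on + 8 - count));

        __m256 v = _mm256_maskload_ps(src, mask);   /* unselected lanes read as 0.0f */
        _mm256_maskstore_ps(dst, mask, v);          /* unselected lanes left unchanged */
    }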

CPUs with AVX

Issues regarding compatibility between future Intel and AMD processors are discussed under XOP instruction set.

  • VIA:
    • Nano QuadCore
    • Eden X4
  • Zhaoxin:
    • WuDaoKou-based processors (KX-5000 and KH-20000)

Compiler and assembler support

  • Absoft supports AVX with the -mavx flag.
  • The Free Pascal compiler supports AVX and AVX2 with the -CfAVX and -CfAVX2 switches from version 2.7.1.
  • RAD Studio (v11.0 Alexandria) supports AVX2 and AVX512.[12]
  • The GNU Assembler (GAS) inline assembly functions support these instructions (accessible via GCC), as do Intel primitives and the Intel inline assembler (closely compatible to GAS, although more general in its handling of local references within inline code). GAS supports AVX starting with binutils version 2.19.[13]
  • GCC starting with version 4.6 (although there was a 4.3 branch with certain support) and the Intel Compiler Suite starting with version 11.1 support AVX.
  • The Open64 compiler version 4.5.1 supports AVX with the -mavx flag.
  • PathScale supports AVX via the -mavx flag.
  • The Vector Pascal compiler supports AVX via the -cpuAVX32 flag.
  • The Visual Studio 2010/2012 compiler supports AVX via intrinsics and the /arch:AVX switch.
  • NASM supports AVX starting with version 2.03; numerous AVX-related bug fixes and updates followed in version 2.04.[14]
  • Other assemblers, such as the MASM version shipped with VS2010, YASM,[15] FASM and JWASM, also support AVX.

Operating system support

AVX adds new register state through the 256-bit wide YMM register file, so explicit operating system support is required to properly save and restore AVX's expanded registers between context switches.

Advanced Vector Extensions 2

Advanced Vector Extensions 2 (AVX2), also known as Haswell New Instructions,[24] is an expansion of the AVX instruction set introduced in Intel's Haswell microarchitecture. AVX2 makes the following additions:

  • expansion of most vector integer SSE and AVX instructions to 256 bits
  • Gather support, enabling vector elements to be loaded from non-contiguous memory locations
  • DWORD- and QWORD-granularity any-to-any permutes
  • vector shifts.

The three-operand fused multiply-accumulate (FMA3) extension is sometimes considered part of AVX2, as it was introduced by Intel in the same processor microarchitecture. It is, however, a separate extension with its own CPUID flag and is described on its own page rather than below.
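To make the gather support listed above concrete, here is a minimal C sketch using the corresponding intrinsics; the function and parameter names are invented for the example and assume a compiler flag such as -mavx2.

    #include <immintrin.h>

    /* Fetches eight floats from non-contiguous positions of `table` with a
       single VGATHERDPS. */
    __m256 gather_eight(const float *table, const int *idx)
    {
        __m256i vindex = _mm256_loadu_si256((const __m256i *)idx); /* eight 32-bit indices */
        /* scale = 4: each index is multiplied by sizeof(float) to form its address */
        return _mm256_i32gather_ps(table, vindex, 4);
    }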

New instructions

Instruction Description
VBROADCASTSS, VBROADCASTSD Copy a 32-bit or 64-bit register operand to all elements of an XMM or YMM vector register. These are register versions of the same instructions in AVX1. There is no 128-bit version, but the same effect can easily be achieved using VINSERTF128.
VPBROADCASTB, VPBROADCASTW, VPBROADCASTD, VPBROADCASTQ Copy an 8-, 16-, 32- or 64-bit integer register or memory operand to all elements of an XMM or YMM vector register.
VBROADCASTI128 Copy a 128-bit memory operand to all elements of a YMM vector register.
VINSERTI128 Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged.
VEXTRACTI128 Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand.
VGATHERDPD, VGATHERQPD, VGATHERDPS, VGATHERQPS Gathers single- or double-precision floating-point values using either 32- or 64-bit indices and scale.
VPGATHERDD, VPGATHERDQ, VPGATHERQD, VPGATHERQQ Gathers 32 or 64-bit integer values using either 32- or 64-bit indices and scale.
VPMASKMOVD, VPMASKMOVQ Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged.
VPERMPS, VPERMD Shuffle the eight 32-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector.
VPERMPD, VPERMQ Shuffle the four 64-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector.
VPERM2I128 Shuffle (two of) the four 128-bit vector elements of two 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector.
VPBLENDD Doubleword immediate version of the PBLEND instructions from SSE4.
VPSLLVD, VPSLLVQ Shift left logical. Allows variable shifts where each element is shifted according to the packed input.
VPSRLVD, VPSRLVQ Shift right logical. Allows variable shifts where each element is shifted according to the packed input.
VPSRAVD Shift right arithmetically. Allows variable shifts where each element is shifted according to the packed input.

CPUs with AVX2

  • Intel
    • Haswell processors (Q2 2013) and newer, except models branded as Celeron and Pentium.
    • Celeron and Pentium branded processors starting with Tiger Lake (Q3 2020) and newer.[10]
  • AMD
  • VIA:
    • Nano QuadCore
    • Eden X4

AVX-512

AVX-512 are 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for the x86 instruction set architecture, proposed by Intel in July 2013.[3]

AVX-512 instructions are encoded with the new EVEX prefix. It allows 4 operands, 8 new 64-bit opmask registers, scalar memory mode with automatic broadcast, explicit rounding control, and compressed displacement memory addressing mode. The width of the register file is increased to 512 bits and total register count increased to 32 (registers ZMM0-ZMM31) in x86-64 mode.
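A short C sketch of how the opmask registers and EVEX features are typically used through intrinsics follows; the function name and the chosen operation are illustrative assumptions, not part of the specification.

    #include <immintrin.h>

    /* Adds 1.0f only to the negative elements of `data` (16 floats), using an
       opmask register instead of a branch. Assumes AVX-512F (-mavx512f). */
    void bump_negatives(float *data)
    {
        __m512    v = _mm512_loadu_ps(data);                      /* one ZMM register */
        __mmask16 m = _mm512_cmp_ps_mask(v, _mm512_setzero_ps(),
                                         _CMP_LT_OQ);             /* 1 bit per negative lane */
        /* Merging-masked add: lanes not selected by m keep their value from v. */
        v = _mm512_mask_add_ps(v, m, v, _mm512_set1_ps(1.0f));
        _mm512_storeu_ps(data, v);
    }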

AVX-512 consists of multiple instruction subsets, not all of which are meant to be supported by all processors implementing them. The instruction set consists of the following:

  • AVX-512 Foundation (F) – adds several new instructions and expands most 32- and 64-bit floating-point SSE-SSE4.1 and AVX/AVX2 instructions with EVEX coding scheme to support the 512-bit registers, operation masks, parameter broadcasting, and embedded rounding and exception control
  • AVX-512 Conflict Detection Instructions (CD) – efficient conflict detection to allow more loops to be vectorized, supported by Knights Landing[3]
  • AVX-512 Exponential and Reciprocal Instructions (ER) – exponential and reciprocal operations designed to help implement transcendental operations, supported by Knights Landing[3]
  • AVX-512 Prefetch Instructions (PF) – new prefetch capabilities, supported by Knights Landing[3]
  • AVX-512 Vector Length Extensions (VL) – extends most AVX-512 operations to also operate on XMM (128-bit) and YMM (256-bit) registers (including XMM16-XMM31 and YMM16-YMM31 in x86-64 mode)[25]
  • AVX-512 Byte and Word Instructions (BW) – extends AVX-512 to cover 8-bit and 16-bit integer operations[25]
  • AVX-512 Doubleword and Quadword Instructions (DQ) – enhanced 32-bit and 64-bit integer operations[25]
  • AVX-512 Integer Fused Multiply Add (IFMA) – fused multiply add of integers using 52-bit precision.[26]: 746
  • AVX-512 Vector Byte Manipulation Instructions (VBMI) adds vector byte permutation instructions which are not present in AVX-512BW.
  • AVX-512 Vector Neural Network Instructions Word variable precision (4VNNIW) – vector instructions for deep learning.
  • AVX-512 Fused Multiply Accumulation Packed Single precision (4FMAPS) – vector instructions for deep learning.
  • VPOPCNTDQ – count of bits set to 1.[27]
  • VPCLMULQDQ – carry-less multiplication of quadwords.[27]
  • AVX-512 Vector Neural Network Instructions (VNNI) – vector instructions for deep learning.[27]
  • AVX-512 Galois Field New Instructions (GFNI) – vector instructions for calculating Galois field.[27]
  • AVX-512 Vector AES instructions (VAES) – vector instructions for AES coding.[27]
  • AVX-512 Vector Byte Manipulation Instructions 2 (VBMI2) – byte/word load, store and concatenation with shift.[27]
  • AVX-512 Bit Algorithms (BITALG) – byte/word bit manipulation instructions expanding VPOPCNTDQ.[27]
  • AVX-512 Bfloat16 Floating-Point Instructions (BF16) – vector instructions for AI acceleration.
  • AVX-512 Half-Precision Floating-Point Instructions (FP16) – vector instructions for operating on floating-point and complex numbers with reduced precision.

Only the core extension AVX-512F (AVX-512 Foundation) is required by all implementations, though all current implementations also support CD (conflict detection). All central processors with AVX-512 also support VL, DQ and BW. The ER, PF, 4VNNIW and 4FMAPS instruction set extensions are currently only implemented in Intel computing coprocessors.

The updated SSE/AVX instructions in AVX-512F use the same mnemonics as AVX versions; they can operate on 512-bit ZMM registers, and will also support 128/256 bit XMM/YMM registers (with AVX-512VL) and byte, word, doubleword and quadword integer operands (with AVX-512BW/DQ and VBMI).[26]: 23 

Discontinued subsets include:

  • AVX-512 Vector Pair Intersection to a Pair of Mask Registers (VP2INTERSECT) – Compute intersection between doublewords/quadwords to a pair of mask registers. Discontinued by Intel.
  • The Xeon Phi-only subsets ER, PF, 4FMAPS and 4VNNIW.

AVX-512 CPU compatibility table

Subsets (table columns): F, CD, ER, PF, 4FMAPS, 4VNNIW, VPOPCNTDQ, VL, DQ, BW, IFMA, VBMI, VBMI2, BITALG, VNNI, BF16, VPCLMULQDQ, GFNI, VAES, VP2INTERSECT, FP16

Processors (table rows): Intel Knights Landing (2016), Intel Knights Mill (2017), Intel Skylake-SP and Skylake-X (2017), Intel Cannon Lake (2018), Intel Cascade Lake-SP (2019), Intel Cooper Lake (2020), Intel Ice Lake (2019), Intel Tiger Lake (2020), Intel Rocket Lake (2021), Intel Alder Lake (2021, partial support, see Note 1), AMD Zen 4 (2022), Intel Sapphire Rapids (2023), AMD Zen 5 (2024)[28]

^Note 1 : Intel does not officially support the AVX-512 family of instructions on Alder Lake microprocessors. In early 2022, Intel began disabling AVX-512 in silicon (fusing it off) in Alder Lake microprocessors to prevent customers from enabling it.[29] On older Alder Lake family CPUs with some legacy combinations of BIOS and microcode revisions, it was possible to execute AVX-512 family instructions by disabling all the efficiency cores, which do not contain the silicon for AVX-512.[30][31][32]

Compilers supporting AVX-512

Assemblers supporting AVX-512

AVX-VNNI, AVX-IFMA

AVX-VNNI is a VEX-coded variant of the AVX512-VNNI instruction set extension. Similarly, AVX-IFMA is a VEX-coded variant of AVX512-IFMA. These extensions provide the same sets of operations as their AVX-512 counterparts, but are limited to 256-bit vectors and do not support any additional features of EVEX encoding, such as broadcasting, opmask registers or accessing more than 16 vector registers. They make VNNI and IFMA operations available even on processors that do not implement full AVX-512 support.
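A hedged sketch of how these VEX-coded operations are reached from C follows. Intrinsic naming varies by toolchain: with AVX512-VNNI plus AVX512VL the 256-bit form is _mm256_dpbusd_epi32, while GCC and Clang expose the VEX-encoded AVX-VNNI variant as _mm256_dpbusd_avx_epi32; the function and variable names are invented for the example.

    #include <immintrin.h>

    /* One VPDPBUSD step: each 32-bit lane of acc += sum of four u8*s8 products. */
    __m256i dot_step(__m256i acc, __m256i u8_activations, __m256i s8_weights)
    {
    #if defined(__AVXVNNI__)
        return _mm256_dpbusd_avx_epi32(acc, u8_activations, s8_weights); /* AVX-VNNI (VEX) */
    #else
        return _mm256_dpbusd_epi32(acc, u8_activations, s8_weights);     /* AVX512-VNNI + VL */
    #endif
    }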

CPUs with AVX-VNNI

CPUs with AVX-IFMA

  • Intel
    • Sierra Forest E-core-only Xeon processors (Q2 2024) and newer.
    • Grand Ridge special-purpose processors and newer.
    • Meteor Lake mobile processors (Q4 2023) and newer.
    • Arrow Lake desktop processors (Q4 2024) and newer.

AVX10

AVX10, announced in July 2023,[38] is a new, "converged" AVX instruction set. It addresses several issues of AVX-512, in particular that it is split into too many parts[39] (20 feature flags). The initial technical paper also made support for 512-bit vectors optional, but as of revision 3.0, vector length enumeration is removed and 512-bit vectors are mandatory.[40]

AVX10 presents a simplified CPUID interface to test for instruction support, consisting of the AVX10 version number (indicating the set of instructions supported, with later versions always being a superset of earlier ones).[41] For example, AVX10.2 indicates that a CPU is capable of the second version of AVX10.[42] Initial revisions of the AVX10 technical specification also included the maximum supported vector length as part of the ISA extension name, e.g. AVX10.2/256 would mean the second version of AVX10 with vector lengths up to 256 bits, but later revisions made that unnecessary.

The first version of AVX10, notated AVX10.1, does not introduce any instructions or encoding features beyond what is already in AVX-512 (specifically, in Intel Sapphire Rapids: AVX-512F, CD, VL, DQ, BW, IFMA, VBMI, VBMI2, BITALG, VNNI, GFNI, VPOPCNTDQ, VPCLMULQDQ, VAES, BF16, FP16). For CPUs supporting AVX10 and 512-bit vectors, all legacy AVX-512 feature flags will remain set so that applications supporting AVX-512 can continue using AVX-512 instructions.[42]

AVX10.1 was first released in Intel Granite Rapids[42] (Q3 2024) and AVX10.2 will be available in Diamond Rapids[43] and Nova Lake.[44]

Applications

  • Suitable for floating-point-intensive calculations in multimedia, scientific and financial applications (AVX2 adds support for integer operations).
  • Increases parallelism and throughput in floating-point SIMD calculations.
  • Reduces register load due to the non-destructive instructions.
  • Improves Linux RAID software performance (requires AVX2, AVX is not sufficient)[45]

Software

  • Cryptography
    • BSAFE C toolkits use AVX and AVX2 where appropriate to accelerate various cryptographic algorithms.[46]
    • OpenSSL uses AVX- and AVX2-optimized cryptographic functions since version 1.0.2.[47] Support for AVX-512 was added in version 3.0.0.[48] Some of these optimizations are also present in various clones and forks, like LibreSSL.
  • Science, engineering and others
    • Esri ArcGIS Data Store uses AVX2 for graph storage.[54]
    • Prime95/MPrime, the software used for GIMPS, started using the AVX instructions since version 27.1, AVX2 since 28.6 and AVX-512 since 29.1.[55]
    • Einstein@Home uses AVX in some of their distributed applications that search for gravitational waves.[56]
    • TensorFlow since version 1.6 requires a CPU supporting at least AVX.[57]
    • EmEditor 19.0 and above uses AVX2 to speed up processing.[58]
    • Microsoft Teams uses AVX2 instructions to create a blurred or custom background behind video chat participants,[59] and for background noise suppression.[60]
    • simdjson, a JSON parsing library, uses AVX2 and AVX-512 to achieve improved decoding speed.[61][62]
    • x86-simd-sort, a library with sorting algorithms for 16, 32 and 64-bit numeric data types, uses AVX2 and AVX-512. The library is used in NumPy and OpenJDK to accelerate sorting algorithms.[63]
    • Tesseract OCR engine uses AVX, AVX2 and AVX-512 to accelerate character recognition.[64]

Downclocking

Since AVX instructions are wider, they consume more power and generate more heat. Executing heavy AVX instructions at high CPU clock frequencies may affect CPU stability due to excessive voltage droop during load transients. Some Intel processors have provisions to reduce the Turbo Boost frequency limit when such instructions are being executed. This reduction happens even if the CPU hasn't reached its thermal and power consumption limits.

On Skylake and its derivatives, the throttling is divided into three levels:[65][66]

  • L0 (100%): The normal turbo boost limit.
  • L1 (~85%): The "AVX boost" limit. Soft-triggered by 256-bit "heavy" (floating-point unit: FP math and integer multiplication) instructions. Hard-triggered by "light" (all other) 512-bit instructions.
  • L2 (~60%): The "AVX-512 boost" limit. Soft-triggered by 512-bit heavy instructions.

The frequency transition can be soft or hard. Hard transition means the frequency is reduced as soon as such an instruction is spotted; soft transition means that the frequency is reduced only after reaching a threshold number of matching instructions. The limit is per-thread.[65]

In Ice Lake, only two levels persist:[67]

  • L0 (100%): The normal turbo boost limit.
  • L1 (~97%): Triggered by any 512-bit instructions, but only when single-core boost is active; not triggered when multiple cores are loaded.

Rocket Lake processors do not trigger frequency reduction upon executing any kind of vector instructions regardless of the vector size.[67] However, downclocking can still happen due to other reasons, such as reaching thermal and power limits.

Downclocking means that using AVX in a mixed workload with an Intel processor can incur a frequency penalty. Avoiding the use of wide and heavy instructions helps minimize the impact in these cases. AVX-512VL allows for using 256-bit or 128-bit operands in AVX-512 instructions, making it a sensible default for mixed loads.[68]

On supported and unlocked variants of processors that down-clock, the clock ratio reduction offsets (typically called AVX and AVX-512 offsets) are adjustable and may be turned off entirely (set to 0x) via Intel's Overclocking / Tuning utility or in BIOS if supported there.[69]

See also

References

from Grokipedia
Advanced Vector Extensions (AVX) are a family of instruction set extensions to the x86 architecture developed by Intel, introducing 256-bit single instruction, multiple data (SIMD) registers known as YMM registers to enable more efficient parallel processing of floating-point and integer operations. These extensions build on prior SIMD technologies like Streaming SIMD Extensions (SSE) by doubling the vector width from 128 bits to 256 bits, supporting up to eight single-precision or four double-precision floating-point elements per instruction, which significantly boosts throughput in applications such as video encoding, scientific simulations, image processing, and financial analytics. AVX was first implemented in Intel's Sandy Bridge processors, released in the first quarter of 2011.

A core innovation of AVX is the use of a vector extension (VEX) prefix for instruction encoding, which allows for three-operand syntax (e.g., C = A op B without overwriting A or B) and supports instruction forms with up to four operands, reducing register pressure and improving code density compared to earlier two-operand formats. It also relaxes memory alignment requirements for vector loads and stores, enabling more flexible data access patterns, and includes new instructions for operations like vector addition (VADDPS), multiplication (VMULPD), and permutations (VPERM2F128). In 64-bit mode, AVX provides 16 YMM registers, while legacy modes offer 8, ensuring compatibility with existing x86 software while extending capabilities for wider vector processing.

The AVX family evolved with AVX2, introduced in 2013 with the Haswell microarchitecture, which expanded support to 256-bit integer operations, added fused multiply-add (FMA) instructions for higher precision and throughput, and introduced gather operations for non-contiguous memory access to accelerate data-parallel workloads. These enhancements, including instructions like VPADDD for integer addition and VPGATHERDD for vector gathering, made AVX2 foundational for later developments such as AVX-512, which further scales to 512-bit registers in subsequent processor generations, and more recent extensions like AVX10 and APX, confirmed for implementation in 2026 processors as of November 2025. Overall, AVX and its extensions have become essential for optimizing power efficiency and computational density in modern processors from Intel and AMD, with broad adoption in software libraries and frameworks for vectorized code.

Introduction

Background and Motivation

Single instruction, multiple data (SIMD) is a paradigm that enables a single instruction to operate simultaneously on multiple data elements stored in vector registers, thereby accelerating compute-intensive tasks such as multimedia processing, scientific simulations, and other data-parallel workloads. This approach exploits data-level parallelism inherent in applications where the same operation is applied across arrays of data, improving throughput without requiring complex thread management. In the x86 architecture, SIMD extensions have evolved to support wider vectors and more efficient operations, addressing the demands of increasingly data-parallel software.

Advanced Vector Extensions (AVX) were proposed by Intel in March 2008 to overcome the limitations of the prior Streaming SIMD Extensions (SSE), which were constrained to 128-bit vector widths that proved insufficient for emerging workloads requiring higher parallelism. The motivation stemmed from the growing prevalence of data-intensive applications in fields like video encoding and numerical simulations, where SSE's narrower registers limited the number of elements processed per instruction and its two-operand format often required temporary storage to preserve source data. By expanding to 256-bit vectors, AVX enabled broader data paths, reducing the instruction count and enhancing performance for floating-point operations central to these domains.

AVX builds on x86's vector register foundation, utilizing the 16 XMM registers (128 bits each) from SSE while introducing 16 YMM registers (256 bits each) that contain the XMM registers as their lower halves, with provisions for future extensions like 512-bit ZMM registers. This architecture maintains backward compatibility while allowing developers to leverage wider vectors for up to 2x the floating-point performance of SSE in suitable workloads, such as parallel arithmetic on arrays of single- or double-precision values. Subsequent developments, including AVX-512 with its 512-bit vectors, further extended this capability for even more demanding parallel computations.

Evolution and Versions

Advanced Vector Extensions (AVX) were first introduced by Intel in March 2008 as a proposal to extend the x86 instruction set with 256-bit vector operations, debuting in hardware with the Sandy Bridge microarchitecture in 2011. This marked the initial step in broadening SIMD capabilities beyond the 128-bit vectors of the SSE predecessors, targeting floating-point-heavy multimedia and scientific workloads. The evolution continued with AVX2 in 2013, integrated into Intel's Haswell processors, which expanded integer operations to 256 bits and added gather instructions for irregular data access. In 2016, Intel released AVX-512 with the Knights Landing coprocessor, doubling vector width to 512 bits and introducing foundational subsets like F, CD, ER, and PF for floating-point, conflict detection, and prefetching, driven by demands in scientific simulations and data analytics. AMD began supporting AVX and AVX2 with its Zen architecture in 2017, aligning consumer processors with Intel's extensions for broader ecosystem compatibility.

Subsequent refinements addressed specialized needs, with AVX-512 VNNI (Vector Neural Network Instructions) appearing in 2019 on Cascade Lake processors to accelerate neural network convolutions via low-precision integer multiply-accumulate operations. Similarly, AVX-512 IFMA (Integer Fused Multiply-Add) emerged around 2017 in Knights Mill for high-throughput integer math in cryptography and compression, though wider adoption came later with server chips like Ice Lake in 2019. These extensions reflected growing AI and HPC pressures, incorporating formats like bfloat16 (BF16) for efficiency.

In July 2023, Intel announced AVX10 as a converged instruction set to unify the fragmented AVX-512 subsets, simplifying software detection and enabling consistent vector lengths across cores, with initial support in Granite Rapids processors via AVX10.1. This transition aimed to resolve compatibility issues in hybrid architectures, where AVX-512 had been inconsistently implemented since its disablement in consumer chips in 2021. AMD extended AVX-512 support to its Zen 4 architecture in 2022, using double-pumped 256-bit units, and enhanced it with native 512-bit paths in Zen 5 by 2024. The AVX10.2 specification, released on May 8, 2025, mandated 512-bit vector support across all cores, including E-cores, while incorporating BF16 and FP8 for AI acceleration; its July 2025 update further aligned with APX for register expansions. Advanced Performance Extensions (APX), also announced in July 2023, complement AVX10 by doubling general-purpose registers to 32 and adding features like conditional moves, with specifications finalized in July 2025 to boost scalar performance in vector-heavy workloads.

As of November 2025, Intel has confirmed support for AVX10.2, including mandatory 512-bit vector operations across all cores (P-cores and E-cores), and APX in its upcoming Nova Lake processors, scheduled for release in 2026. These developments underscore the ongoing push to balance AI/HPC demands with efficient, unified ISAs across Intel and AMD platforms.

AVX and AVX2

AVX Features and Instructions

Advanced Vector Extensions (AVX) introduced a significant expansion of the x86 SIMD register set by adding 256-bit YMM registers, labeled YMM0 through YMM15, which overlay the existing 128-bit XMM registers used in SSE instructions. These YMM registers enable processing of wider vectors, such as eight single-precision floating-point values or four double-precision values in parallel, doubling the throughput compared to SSE's 128-bit operations. To maintain compatibility with legacy SSE code, the upper 128 bits of each YMM register are zeroed automatically by VEX-encoded 128-bit instructions or cleared explicitly using dedicated instructions, preventing unintended data mixing or state corruption.

AVX employs a new vector extension (VEX) prefix for instruction encoding, available in either 2-byte or 3-byte formats, which supplements the legacy SSE encoding scheme and supports the expanded register set and wider vectors. The 2-byte VEX prefix begins with the byte 0xC5, while the 3-byte version starts with 0xC4 followed by fields for register specification (R, X, B), opcode map selection (m-mmmm), operand width (W), source operand specifier (vvvv), vector length (L), and SIMD prefix (pp). This encoding allows AVX instructions to use a three-operand syntax, where the destination register is distinct from the two source operands (e.g., dest = src1 op src2), unlike SSE's two-operand form that overwrites one source, thereby reducing register pressure and improving code efficiency.

A core feature of AVX is its support for 256-bit floating-point operations, exemplified by instructions like VADDPS, which performs packed single-precision addition across a full YMM register: for instance, VADDPS YMM1, YMM2, YMM3 computes the sum of corresponding elements in YMM2 and YMM3, storing the result in YMM1 and processing eight 32-bit floats in parallel without carry-over between the two 128-bit lanes. Other key instructions include VBROADCASTSS and VBROADCASTSD, which load a scalar single- or double-precision value from memory and replicate it across all eight or four elements of a YMM register, respectively (encoded as VEX.256.66.0F38.W0 18 /r for SS and 19 /r for SD). VINSERTF128 and VEXTRACTF128 facilitate lane-level manipulation by inserting or extracting a 128-bit value into or from a specific half of a YMM register (VEX.256.66.0F3A.W0 18 /r and 19 /r), enabling efficient construction or decomposition of 256-bit vectors from 128-bit sources.

Permutation and data reorganization are handled by VPERMILPS and VPERMILPD, which rearrange single- or double-precision elements within a YMM register based on an immediate control or another register (VEX.256.66.0F38.W0 0C /r for PS and 0D /r for PD), allowing arbitrary reordering within each 128-bit lane. Masked memory operations are provided by VMASKMOVPS and VMASKMOVPD, which conditionally load or store packed floats using a writemask derived from the sign bits of a YMM register (VEX.256.66.0F38.W0 2C /r for the PS load and 2E /r for the store), useful for sparse or conditional vector processing. For register management, VZEROALL zeros all 256 bits of every YMM register (VEX.256.0F.WIG 77), while VZEROUPPER clears only the upper 128 bits across all YMM registers (VEX.128.0F.WIG 77), ensuring clean transitions between AVX and SSE execution. Additionally, VTESTPS and VTESTPD test the sign bits of packed single- or double-precision values, setting the ZF and CF flags based on AND and ANDN of the sign bits, without modifying the operands.
These features collectively enable AVX to operate in a dedicated 256-bit mode, distinct from SSE's 128-bit processing, while preserving SSE compatibility through explicit zeroing and VEX-distinguished opcodes. AVX2 later extended similar vector capabilities to integer operations, but the original AVX focused primarily on floating-point acceleration.
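The transition handling described above is visible in intrinsics code as an explicit _mm256_zeroupper() call (most compilers also insert VZEROUPPER automatically at function boundaries). The following C sketch is illustrative; the callback and buffer size are assumptions for the example.

    #include <immintrin.h>

    /* Scales a small buffer with 256-bit AVX, then clears the upper YMM halves
       before handing off to code that may run legacy (non-VEX) SSE. */
    void scale_then_handoff(float *data, float factor, void (*legacy_sse_fn)(float *))
    {
        __m256 f = _mm256_set1_ps(factor);
        for (int i = 0; i < 64; i += 8) {
            __m256 v = _mm256_loadu_ps(data + i);
            _mm256_storeu_ps(data + i, _mm256_mul_ps(v, f));
        }
        _mm256_zeroupper();      /* emits VZEROUPPER; lower 128-bit (XMM) state is kept */
        legacy_sse_fn(data);     /* hypothetical callback compiled without VEX encoding */
    }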

AVX2 Enhancements

AVX2 extends the 256-bit single instruction, multiple data (SIMD) floating-point operations introduced in AVX by adding comprehensive support for 256-bit integer vector processing, enabling more efficient handling of data-intensive workloads such as image processing. This extension operates on YMM registers and includes packed arithmetic and shift instructions across various granularities, significantly broadening the applicability of vectorization beyond floating-point domains.

The core integer operations encompass addition (e.g., VPADDB for bytes, VPADDW for words, VPADDD for doublewords, VPADDQ for quadwords), subtraction (e.g., VPSUBB, VPSUBW, VPSUBD, VPSUBQ), multiplication (e.g., VPMULLW for words, VPMULLD for doublewords), and shifts (e.g., VPSLLW/D/Q for logical left shifts, VPSRLW/D/Q for logical right shifts, VPSRAW/D for arithmetic right shifts). Variable-shift variants like VPSLLVD, VPSLLVQ, VPSRLVD, and VPSRLVQ allow per-element control for more flexible data manipulation. Additional integer instructions include averages (VPAVGB, VPAVGW), multiply-adds (VPMADDWD, VPMADDUBSW), maximums (VPMAXSB, VPMAXSW, VPMAXSD), and unpack operations (VPUNPCKHBW, VPUNPCKLBW, etc.) for interleaving data. These operations process twice as many elements per instruction as their 128-bit SSE counterparts, yielding up to 2x throughput for integer-heavy computations.

AVX2 also introduces gather instructions to vectorize non-contiguous memory accesses, a capability absent in AVX. These include the floating-point gathers VGATHERDPD (double-precision with dword indices) and VGATHERQPS (single-precision with qword indices), and the integer gathers VPGATHERDD (doublewords with dword indices) and VPGATHERQD (doublewords with qword indices), which load scattered elements into a 256-bit register using a base address, index vector, and optional scale factor. By enabling indexed loads without prior sorting or alignment, these instructions accelerate algorithms such as table lookups and database queries.

For enhanced floating-point performance, the Haswell generation introduced FMA3 instructions alongside AVX2 (as a separate CPUID feature) that fuse multiplication and addition into a single operation, reducing latency and improving precision. Key examples are the packed single-precision forms VFMADD132PS/VFMADD213PS/VFMADD231PS (processing 8 elements: dest = a * b + c, with the operand roles selected by the numeric suffix) and their packed double-precision counterparts (processing 4 elements), with variants like VFNMADD132PD for negated multiply-add and VFNMSUB231PS for negated multiply-subtract. These build on AVX's floating-point foundation by enabling more efficient linear algebra and related numerical tasks.

Additional instructions in AVX2 facilitate advanced data rearrangement and bit manipulation. VPERM2I128 permutes two 128-bit lanes between source operands to form a 256-bit result, while VINSERTI128 inserts a 128-bit integer vector into a selected position of a 256-bit destination. For bit-level operations, the BMI2 instructions PEXT and PDEP, introduced alongside AVX2 in Haswell, extract specified bits from a source into contiguous positions in the destination and deposit bits from contiguous positions into specified locations, aiding compression and population count algorithms. Overall, these enhancements double integer processing width and introduce targeted primitives, providing substantial performance gains for vectorized integer workloads over AVX.
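A minimal C sketch of one fused multiply-add step follows; the function name is invented, and the example assumes a compiler flag such as -mfma.

    #include <immintrin.h>

    /* acc = a * b + acc for 8 floats with a single rounding step; the compiler
       picks a concrete VFMADD132/213/231PS form. */
    __m256 fma_accumulate(__m256 acc, __m256 a, __m256 b)
    {
        return _mm256_fmadd_ps(a, b, acc);
    }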

Hardware and Software Support for AVX and AVX2

Advanced Vector Extensions (AVX) were first introduced in Intel's Sandy Bridge microarchitecture processors in 2011, providing 256-bit floating-point vector operations. Subsequent Intel architectures, including Ivy Bridge (2012), Haswell (2013, which added AVX2 for enhanced integer operations), Broadwell (2014), Skylake (2015), Coffee Lake (2017), Comet Lake (2019), Tiger Lake (2020), Alder Lake (2021), Raptor Lake (2022), Meteor Lake (2023), Arrow Lake (2024), Lunar Lake (2024), and later generations, all include full support for both AVX and AVX2. AMD's Bulldozer family (2011) offered partial AVX support with 256-bit floating-point capabilities but limited integer handling, while full AVX2 integration began with the Zen microarchitecture in Ryzen processors starting in 2017, extending through Zen 2 (2019), Zen 3 (2020), Zen 4 (2022), Zen 5 (2024), and later. VIA Technologies provided early AVX support in its Nano QuadCore series around 2013 but did not implement AVX2; Zhaoxin CPUs, based on VIA designs, incorporated AVX around 2013 and AVX2 starting with models like the KX-5000 series in 2018.

Software ecosystems have broadly adopted AVX and AVX2 through compiler and operating system integrations. The GNU Compiler Collection (GCC) added AVX support in version 4.6 via the -mavx flag, enabling AVX code generation for compatible code, with AVX2 following in version 4.7 using -mavx2. Clang/LLVM introduced AVX intrinsics and code generation in version 3.0 (2011), supporting -mavx for Sandy Bridge-level optimization and later -mavx2 for Haswell. Visual Studio's C++ compiler (MSVC) provided AVX intrinsics in Visual Studio 2010 Service Pack 1, with the full /arch:AVX2 option available from Visual Studio 2013 Update 2 for generating AVX2 instructions and auto-vectorizing suitable loops.

Operating systems facilitate AVX/AVX2 usage via CPU feature detection and state management. Windows 7 and later versions, starting with Service Pack 1 (SP1) in 2011, include kernel-level support for AVX through XSAVE extensions, allowing user-mode applications to query and utilize the extensions without crashes. Linux kernels from version 3.6 (2012) onward expose AVX and AVX2 features via the /proc/cpuinfo interface, relying on CPUID queries for runtime detection and enabling optimized libraries to select vectorized code paths. macOS 10.7 (2011) and subsequent releases support AVX on compatible hardware through the kernel's XSAVE handling, ensuring seamless integration for developer tools and applications.

Runtime detection of AVX and AVX2 typically involves the CPUID instruction: for AVX, check leaf 1 with ECX bit 28 set; for AVX2, verify leaf 7 (subleaf 0) with EBX bit 5 set, followed by XGETBV to confirm OS support for YMM register states. By 2025, AVX and AVX2 remain ubiquitous in x86 ecosystems, with near-universal adoption in consumer and server hardware from Intel and AMD, though emerging ARM-based alternatives like Apple's M-series incorporate analogous vector extensions without direct x86 compatibility.
Vendor    AVX introduction            AVX2 introduction    Key architectures
Intel     2011 (Sandy Bridge)         2013 (Haswell)       Sandy Bridge to Lunar Lake and later
AMD       2011 (Bulldozer, partial)   2017 (Zen)           Bulldozer to Zen 5 and later
VIA       ~2013 (Nano QuadCore)       N/A                  Nano series
Zhaoxin   ~2013 (early KaiXian)       2018 (KX-5000)       KaiXian KX-5000 and later
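A compact way to perform the runtime detection described above is the GCC/Clang builtin shown below; on recent toolchains it also accounts for the XGETBV/OS-state check, while older ones may need an explicit XGETBV test. The printed strings are illustrative.

    #include <stdio.h>

    int main(void)
    {
        if (__builtin_cpu_supports("avx2"))
            puts("AVX2 available: using 256-bit integer and FP kernels");
        else if (__builtin_cpu_supports("avx"))
            puts("AVX available: using 256-bit FP kernels");
        else
            puts("Falling back to SSE2 kernels");
        return 0;
    }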

AVX-512

Core Architecture and Subsets

AVX-512 establishes a 512-bit SIMD foundation that doubles the vector width of AVX2's 256-bit YMM registers, enabling parallel processing of up to 16 single-precision floating-point values or 8 double-precision values per instruction. This architecture introduces 32 dedicated 512-bit ZMM registers (ZMM0 through ZMM31) in 64-bit mode, which contain the corresponding YMM registers as their lower 256 bits and the XMM registers as their lower 128 bits, preserving backward compatibility. Masking capabilities are provided by 8 dedicated 64-bit opmask registers (K0 through K7), allowing conditional execution of vector elements without branching, which reduces overhead compared to scalar conditional code. For instance, the {z} suffix in instructions enables zeroing masking, where unselected elements are set to zero, while merging masking preserves original values in unselected positions.

The EVEX prefix, a 4-byte encoding scheme, underpins AVX-512's flexibility by supporting vector length independence across 128-bit, 256-bit, and 512-bit operations through a single instruction encoding. This prefix embeds opmask selection, zeroing/merging control, and broadcast functionality for memory operands, allowing a unified encoding to scale across vector lengths via the AVX512VL extension. Unlike AVX2's VEX prefix, EVEX facilitates embedded rounding control and suppression of exceptions, enhancing precision in floating-point computations. Broadcast support, for example, replicates a scalar value across the entire vector, optimizing gather/scatter patterns common in data-parallel workloads.

AVX-512's modularity is achieved through specialized subsets, each extending the core with targeted instructions while maintaining the 512-bit vector framework. The foundational AVX512F subset provides basic arithmetic, logical, and movement operations for floating-point and integer vectors, serving as the baseline for all implementations. AVX512CD adds conflict detection instructions to identify and resolve intra-vector dependencies, such as duplicate indices in gather operations, improving vectorization of irregular access patterns. The AVX512ER subset introduces high-accuracy approximations for exponential and reciprocal functions, delivering results with reduced error margins suitable for scientific simulations, though it is primarily available on Xeon Phi processors. Complementing these, AVX512VL enables the same instruction set across shorter vector lengths (128/256 bits), allowing code to scale down to narrower registers without recompilation. For granular integer handling, AVX512BW supports byte- and word-level operations, including permutations and comparisons, extending AVX2's integer capabilities to 512-bit scales. Similarly, AVX512DQ focuses on doubleword (32-bit) and quadword (64-bit) integer instructions, such as population count and bitwise shifts, optimizing for workloads like cryptography and compression. These subsets collectively enable up to twice the throughput of AVX2 for vectorized code, primarily through wider parallelism and branch-free conditionals, while building on AVX2's integer extensions for seamless migration.

Key Instructions and Encoding

AVX-512 instructions are encoded using the EVEX prefix, a four-byte encoding scheme that extends the VEX prefix used in earlier AVX versions to support 512-bit vector operations, advanced masking, and additional features like embedded broadcasting. The EVEX prefix includes an EVEX.L'L field to specify vector length: 00b for 128-bit, 01b for 256-bit, and 10b for 512-bit operations on ZMM registers. This allows instructions to operate on up to 16 single-precision floating-point elements or 8 quadwords in a 512-bit vector. Writemasking is a core feature enabled by the EVEX.aaa field (bits 18-16 of the prefix payload), which selects one of eight opmask registers (k0 through k7), where k0 disables masking and the others provide per-element control. Masking supports merging (preserving original values in masked lanes) or zeroing (setting masked lanes to zero via the EVEX.z bit at position 23). Broadcasting, controlled by the EVEX.b bit (position 20), replicates a single memory operand across the vector, denoted in syntax as {1to16} for 512-bit single-precision operations. Subsets like AVX-512BW extend these encodings to support byte and word operations, such as packing instructions.

Representative floating-point instructions include VADDPS, which performs packed single-precision addition on 512-bit vectors (16 elements), adding corresponding elements from two source operands and storing the result in the destination. For example, the syntax VADDPS zmm1 {k1}{z}, zmm2, zmm3 adds zmm2 and zmm3 element-wise, applying the k1 mask to update only selected lanes in zmm1, with {z} zeroing masked lanes. Another key instruction is VFMADD132PS, a fused multiply-add operation that computes (zmm1 * zmm3) + zmm2 for 16 single-precision elements, enabling efficient accumulation in loops. Its syntax, such as VFMADD132PS zmm1 {k1}, zmm2, zmm3/m512/m32bcst, supports memory broadcasting for scalar inputs.

Integer instructions exemplify 512-bit parallelism, such as VPADDQ, which adds packed 64-bit quadwords (8 elements) from two sources, useful for vectorized arithmetic. The masked form VPADDQ zmm1 {k1}, zmm2, zmm3 merges results into zmm1 based on the k1 mask. VPMOVDB packs and truncates 512-bit doublewords (16 elements) to bytes, with separate signed and unsigned saturating variants (VPMOVSDB, VPMOVUSDB), converting data to narrower formats for storage or further processing; for instance, VPMOVDB xmm1 {k1}{z}, zmm2 stores the result in a 128-bit operand, masking unused lanes.

Gather and scatter operations facilitate non-contiguous memory access, critical for irregular data structures. VPGATHERQD gathers doublewords using 64-bit indices and a scale factor, loading up to 8 elements selected by an index vector. Syntax like VPGATHERQD ymm1 {k1}, vm64z uses a vector of 64-bit indices (vm64z) to fetch data, with k1 controlling which gathers occur. Conversely, VSCATTERDPS scatters 16 single-precision floats from a 512-bit source to memory locations determined by dword indices. An example is VSCATTERDPS vm32z {k1}, zmm1, where vm32z provides the index vector and k1 masks the scatters to avoid unnecessary writes. These EVEX-encoded instructions collectively enable conditional, scalable vector processing unique to AVX-512.
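The scatter side of these operations looks as follows in C intrinsics; the function and parameter names are invented for the example and assume AVX-512F support (-mavx512f).

    #include <immintrin.h>

    /* Writes 16 floats to positions of `table` chosen by 32-bit indices, using
       a single VSCATTERDPS (scale 4 = sizeof(float)). */
    void scatter_sixteen(float *table, const int *idx, __m512 values)
    {
        __m512i vindex = _mm512_loadu_si512(idx);        /* sixteen 32-bit indices */
        _mm512_i32scatter_ps(table, vindex, values, 4);
    }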

Hardware and Software Support for AVX-512

AVX-512 was first implemented in hardware with Intel's Knights Landing processors in 2016, providing full support for the foundational subsets including F (foundation), CD (conflict detection), ER (exponential and reciprocal), and PF (prefetch). Subsequent Intel server and high-end desktop processors introduced partial implementations, with Skylake-SP and Skylake-X in 2017 supporting subsets such as F, CD, VL (vector length extensions), BW (byte and word), and DQ (doubleword and quadword). Cascade Lake processors in 2019 added specialized subsets like VNNI (vector neural network instructions) to F, CD, and VL, enhancing deep learning workloads. By 2023, Sapphire Rapids extended support to include advanced features like BF16 (bfloat16) instructions alongside the core subsets, maintaining 512-bit vector processing across two FMA units per core. The following table summarizes key Intel processor families and their supported AVX-512 subsets, highlighting the fragmentation across implementations:
Processor family                    Release year    Supported subsets
Knights Landing (Xeon Phi x200)     2016            F, CD, ER, PF
Skylake-SP/X (Xeon W)               2017            F, CD, VL, BW, DQ
Cascade Lake (Xeon Scalable)        2019            F, CD, VL, BW, DQ, VNNI
Ice Lake-SP (3rd Gen Xeon)          2021            F, CD, VL, BW, DQ, VNNI, IFMA, VBMI
Sapphire Rapids (4th Gen Xeon)      2023            F, CD, VL, BW, DQ, VNNI, BF16, FP16
AMD entered the AVX-512 space later, with Zen 4 processors in 2022 offering partial support by double-pumping the existing 256-bit AVX2 pipelines to emulate 512-bit operations, covering core subsets like F, VL, BW, and DQ but with reduced throughput compared to native implementations. Zen 5 processors, released in 2024, expanded this to native 512-bit datapaths across all execution units, providing full support for the standard instruction set including F, CD, VL, BW, DQ, and additional extensions like VNNI and BF16, significantly improving performance in HPC and AI applications.

Compiler support for AVX-512 emerged concurrently with hardware availability. The GNU Compiler Collection (GCC) introduced initial AVX-512F support in version 4.9 (2014), with flags like -mavx512f enabling code generation, and subsequent releases adding subset-specific options such as -mavx512vl and -mavx512vnni; runtime dispatch mechanisms allow selective use of subsets based on CPU detection. LLVM-based Clang, starting from version 7 (2018), provides comprehensive AVX-512 intrinsics and auto-vectorization, with ongoing enhancements for newer subsets and variants continuing up to version 20 in 2025. Intel's oneAPI DPC++/C++ compiler (successor to ICC) offers full AVX-512 optimization, including subset dispatching and integration with libraries like oneDNN for vectorized code.

Operating system support ensures proper detection and execution of AVX-512 instructions. Linux kernel version 4.10 (2017) introduced full CPU feature detection for AVX-512 via the cpuid mechanism, enabling user-space applications to query and utilize supported subsets without compatibility issues. Windows 10 (2015) and later versions provide native support for executing AVX-512 code, with later updates improving power management for vector workloads. Early implementations, such as Skylake-SP, suffered from downclocking issues where AVX-512 instructions triggered automatic frequency reductions (up to 250-500 MHz offsets) to manage power and thermal limits, potentially degrading performance in mixed workloads; later processors mitigate this through improved turbo behaviors and wider pipelines.

As of 2025, AVX-512 adoption has accelerated in data centers and scientific computing, driven by Zen 5's native implementation providing full support across server lines, though fragmentation persists due to varying subsets across vendors; Intel's newer designs discontinue niche extensions like ER and PF, focusing on broadly applicable ones such as F, VL, and VNNI. This selective evolution addresses historical inconsistencies while promoting wider software portability.
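One way the runtime-dispatch support mentioned above can be used is GCC's target_clones attribute, sketched below under the assumption of a GCC-style toolchain; the function name and loop are invented for the example.

    /* The compiler emits an AVX-512F, an AVX2 and a baseline clone of this
       function and selects one at load time based on CPUID. */
    __attribute__((target_clones("avx512f", "avx2", "default")))
    void saxpy(float a, const float *x, float *y, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];   /* auto-vectorized differently in each clone */
    }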

Specialized Vector Extensions

AVX-VNNI for Neural Networks

AVX-VNNI, or Vector Neural Network Instructions within the Advanced Vector Extensions framework, provides specialized instructions for accelerating low-precision dot products in neural network inference tasks. These instructions target quantized models, where INT8 and INT16 data types replace higher-precision formats to reduce memory footprint and boost computational throughput while maintaining acceptable accuracy. By fusing multiply and accumulate operations, AVX-VNNI optimizes the core kernels prevalent in convolutional neural networks, enabling faster AI inference on CPUs. The EVEX-encoded forms are subsets of the AVX-512 instruction set.

The VNNI subset features key instructions such as VPDPBUSD for unsigned byte (INT8) dot products and VPDPWSSD for signed word (INT16) dot products, supporting 512-bit vectors using EVEX encoding, with the later VEX-encoded AVX-VNNI extension providing 128- and 256-bit versions for broader compatibility. The VPDPBUSD instruction multiplies unsigned bytes from one source with signed bytes from another, sums four such products per 32-bit lane, and accumulates the result into a signed doubleword destination. Similarly, VPDPWSSD performs signed word multiplications and summations in the same fused manner. This design accumulates four multiplies per instruction, streamlining what would otherwise require multiple separate multiply and add operations.

Introduced in late 2017 with the Knights Mill processor as part of its deep learning optimizations, VNNI-style instructions were announced in 2017 to support broader hardware adoption. In Knights Mill, these instructions deliver up to four times the peak performance compared to prior generations, primarily through enhanced integer throughput for deep learning workloads. The extension builds on the fused multiply-add capabilities of the AVX-512 Foundation (F) subset, adapting them for integer computations.
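As a plain-C reference for the semantics described above (not SIMD code), one 32-bit lane of VPDPBUSD behaves like the following sketch:

    #include <stdint.h>

    /* Four unsigned-by-signed byte products summed and accumulated into a
       signed 32-bit lane, matching the fused dot-product step of VPDPBUSD. */
    int32_t vpdpbusd_lane(int32_t acc, const uint8_t a[4], const int8_t b[4])
    {
        for (int i = 0; i < 4; i++)
            acc += (int32_t)a[i] * (int32_t)b[i];
        return acc;
    }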

AVX-IFMA for Integer Operations

AVX-IFMA, or Advanced Vector Extensions Integer Fused Multiply-Add, is a specialized subset of the AVX-512 instruction set designed to accelerate high-throughput integer arithmetic through fused multiply-accumulate operations on fixed-point numbers. This extension enables precise computations without intermediate rounding, making it suitable for applications requiring exact results. Introduced in 2017 with the Xeon Phi processor based on the Knights Mill microarchitecture, AVX-IFMA provides hardware support for efficient processing of large datasets in cryptography and other integer-heavy domains.

The core instructions in AVX-IFMA are VPMADD52LUQ and VPMADD52HUQ, which perform unsigned 52-bit multiply-accumulate operations on 512-bit vectors. VPMADD52LUQ multiplies the lower 52 bits of each 64-bit element from two source vectors and accumulates the lower bits of the 104-bit product into the destination vector, while VPMADD52HUQ handles the higher bits of the product for the same elements. Operating on 512-bit wide ZMM registers, these instructions process eight 64-bit quadwords simultaneously, allowing 8 independent 52-bit multiplications per instruction, with a pair of instructions providing full 104-bit precision accumulation for those 8 multiplications. The 52-bit width serves as a scale factor to control overflow in the 104-bit intermediate products, ensuring they fit within two 64-bit accumulators without loss of precision. Unlike floating-point operations, AVX-IFMA avoids rounding errors entirely, as it performs exact integer arithmetic without exponent handling or denormalized values.

In contrast to the floating-point FMA3 instructions introduced in earlier AVX versions, AVX-IFMA is exclusively for integers and complements FMA3 by targeting fixed-point workloads where precision is paramount. It supports opmask-based masking from the broader AVX-512 framework to enable conditional execution on vector elements. Primary applications include cryptography, such as modular multiplication in RSA and elliptic curve cryptography (ECC), as well as hashing algorithms like SHA-512, where high-speed integer operations enhance throughput for multi-buffer processing. These capabilities have been leveraged in optimized libraries for secure data streaming and financial computations requiring robust integer precision.
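In C, the instruction pair is exposed through the AVX512IFMA52 intrinsics; the sketch below shows one accumulation step under the assumption of -mavx512ifma support, with invented variable names.

    #include <immintrin.h>

    /* Accumulates the low and high parts of 52-bit x 52-bit products into two
       64-bit accumulators, the split used by multi-precision (e.g. RSA) math. */
    void ifma_step(__m512i *lo_acc, __m512i *hi_acc, __m512i a, __m512i b)
    {
        *lo_acc = _mm512_madd52lo_epu64(*lo_acc, a, b); /* acc += low  bits of a*b */
        *hi_acc = _mm512_madd52hi_epu64(*hi_acc, a, b); /* acc += high bits of a*b */
    }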

AVX10

Design Goals and Changes from AVX-512

Intel announced AVX10 in July 2023 as a successor to AVX-512, aiming to unify the fragmented vector instruction set architecture across its processors and enable consistent 512-bit vector support in hybrid architectures featuring both performance (P-) and efficiency (E-) cores. The primary design goals included reducing developer complexity by converging all major AVX-512 subsets into a single, mandatory ISA without optional fragments, thereby addressing the slow adoption of AVX-512 caused by power consumption issues and compatibility challenges in heterogeneous core designs. This unification simplifies feature detection through a single CPUID leaf (Leaf 24H), which provides versioned enumeration for supported vector widths (128, 256, or 512 bits), eliminating the need for over 20 discrete feature flags.

Key changes from AVX-512 involve mandating AVX10/256 support on all processors while initially making AVX10/512 optional on P-cores only, ensuring backward compatibility with all existing SSE, AVX, AVX2, and AVX-512 instructions via VEX and EVEX encodings. AVX10 deprecates AVX-512-specific modes by freezing their CPUID flags and routing all future vector extensions through the AVX10 versioning scheme, such as AVX10.1 (introduced in 2024 with initial support in Granite Rapids processors) and AVX10.2 (specification released in July 2024). A significant revision occurred in March 2025, introducing a breaking change that removed the 256-bit-only mode for AVX10, mandating full 512-bit support across all cores capable of AVX10.2 to further streamline hybrid-core compatibility and improve performance portability. These updates directly tackle AVX-512's adoption barriers, including high power draw leading to downclocking on non-server SKUs and inconsistent support across core types, by providing a converged ISA that prioritizes efficiency and broad applicability. Overall, AVX10 maintains full backward compatibility for legacy applications while evolving the architecture to support modern workloads like AI and HPC without the fragmentation of prior extensions.

New Instructions and Datatypes

AVX10 introduces support for low-precision floating-point datatypes optimized for AI and media processing workloads, including FP8 formats in E4M3 and E5M2 variants. The E4M3 format allocates 1 sign bit, 4 exponent bits, and 3 mantissa bits, while E5M2 uses 1 sign bit, 5 exponent bits, and 2 mantissa bits, adhering to the Open Compute Project's Open Floating Point 8 specification for improved memory efficiency and computational density in neural networks. These datatypes enable reduced-precision operations without significant accuracy loss in inference and training tasks. Additionally, AVX10 expands BFloat16 (BF16) support, a 16-bit format with an 8-bit exponent and 7-bit mantissa, to facilitate seamless integration in AI accelerators by providing direct vectorized arithmetic and conversions. Key conversions include VCVTBF162PS, which transforms packed BF16 elements to single-precision FP32 across 128-, 256-, or 512-bit vectors, supporting writemasks for selective updates and aiding precision scaling in mixed-format computations. This instruction operates via EVEX encoding, allowing merging or zeroing of masked elements, and is essential for accumulating low-precision results into higher-precision accumulators. Among the novel arithmetic instructions, VADDBF16 performs packed addition on BF16 vectors, computing dest = src1 + src2 for each element while preserving the BF16 format, with support for vector lengths up to 512 bits and writemasking. Similarly, VMULBF16 executes packed multiplication, yielding dest = src1 * src2, enabling efficient element-wise operations in matrix multiplications for AI models. For dot-product computations, VDPPHPS computes a vector neural network instruction (VNNI)-style dot product of FP16 pairs into FP32 accumulators, inheriting the structure of prior VNNI designs, via dest += (src1[2i] * src2[2i]) + (src1[2i+1] * src2[2i+1]). In media applications, VMPSADBW supports 512-bit multiple sum of absolute differences on byte elements, useful for motion estimation in video encoding, by accumulating shuffled absolute differences controlled by an immediate operand. Minimum and maximum instructions adhere to IEEE 754-2019 semantics for handling NaNs and infinities. VMINMAXPH operates on packed half-precision FP16 elements, selecting the minimum or maximum per pair while propagating NaNs appropriately. Likewise, VMINMAXBF16 applies to BF16 vectors, ensuring consistent behavior across precisions in AI normalization tasks. Scalar comparison instructions simplify floating-point comparisons without raising exceptions. VCOMXSD compares scalar double-precision values and updates EFLAGS accordingly, while VCOMXSS and VCOMXSH handle single- and half-precision scalars, respectively, providing exception-free status reporting for control flow in vectorized code. Data movement enhancements include VMOVD and VMOVW, which copy 32-bit doubleword or 16-bit word data to XMM registers, zero-extending the upper bits for partial vector loads that maintain compatibility with wider operations. For BF16 dot products in 512-bit vectors, which process 32 elements, software can implement the accumulation as follows:

acc += ∑_{i=0}^{31} (a[i] * b[i])

This can leverage VMULBF16 for the element-wise products followed by a horizontal reduction, or VADDBF16 for accumulating partial sums, optimizing throughput in AI inference layers.
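A scalar C reference model of this accumulation is sketched below; it expresses only the arithmetic (BF16 values widened to FP32 and summed) and is not a mapping to any particular instruction or intrinsic.

#include <stdint.h>
#include <string.h>

/* Scalar reference model: expand a BF16 value to FP32 by placing its 16 bits
 * in the upper half of the float encoding. */
static float bf16_to_f32(uint16_t h)
{
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

/* acc += sum_{i=0}^{31} a[i] * b[i] -- the accumulation a 512-bit BF16
 * dot-product sequence performs across its 32 packed elements. */
float bf16_dot32(const uint16_t a[32], const uint16_t b[32], float acc)
{
    for (int i = 0; i < 32; ++i)
        acc += bf16_to_f32(a[i]) * bf16_to_f32(b[i]);
    return acc;
}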

Hardware Implementation and Compatibility

The initial rollout of AVX10 began with Intel's Granite Rapids processors, launched in September 2024 as part of the sixth-generation Xeon Scalable family, which introduced AVX10.1 support exclusively in 512-bit form for server workloads. Subsequent expansion is expected with the Diamond Rapids Xeon processors in 2026, adding AVX10.2 features, including new instructions for AI and media processing, while maintaining 512-bit vector execution as the maximum width. In line with this evolution, Intel has mandated 512-bit vector support across all performance (P) and efficiency (E) cores in AVX10 implementations, eliminating the earlier option of 256-bit-only modes to ensure uniform ISA convergence. Software enumeration of AVX10 capabilities relies on the CPUID instruction, where leaf 07H with subleaf ECX=01H sets EDX bit 19 to indicate general AVX10 support, while the dedicated converged vector ISA leaf 24H (EAX=24H, ECX=00H) reports the version number in EBX[7:0] (≥ 2 for AVX10.2) and enumerates maximum vector lengths via the CPU_SUPPORTED_VECTOR_LENGTHS field. Support for specialized datatypes like BF16 and FP8 (in E4M3 and E5M2 formats) is similarly detected through the AVX10.2 feature flags in leaf 24H, enabling runtime verification of instructions such as VADDNEPBF16 or the FP8 conversions across 128-, 256-, and 512-bit widths. AVX10 maintains binary compatibility with prior vector extensions by retaining EVEX encoding for all operations, allowing seamless execution of 128-bit (XMM), 256-bit (YMM), and 512-bit (ZMM) instructions on capable hardware without architectural breaks from AVX-512. Runtime checks via leaf 24H let software query supported vector lengths and features dynamically, preventing invalid execution on mismatched cores. As of November 2025, Intel has confirmed AVX10.2 support, alongside the APX and AMX extensions, in Nova Lake processors anticipated for 2026. Toolchain support has also progressed: the Netwide Assembler (NASM) version 3.00, released in October 2025, provides full syntactic and encoding support for AVX10 instructions. Regarding power efficiency, AVX10 implementations aim to improve on AVX-512 by decoupling optional features like masking and broadcasting from mandatory wide-vector execution, reducing thermal overhead and downclocking penalties in mixed P/E-core environments. For AMD platforms, potential integration of AVX10 remains under consideration for the Zen 6 architecture, expected around 2026 or later, though no firm commitments have been announced as of November 2025. This convergence from fragmented AVX-512 subsets positions AVX10 as a unified extension for future x86 designs.
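As a sketch of this enumeration, the following C program uses the GCC/Clang <cpuid.h> helper; the bit positions are those described above, and the program is an illustrative assumption rather than reference detection code.

#include <cpuid.h>
#include <stdio.h>

/* Sketch of AVX10 runtime detection: CPUID.(EAX=07H,ECX=01H):EDX[19] signals
 * AVX10, and the converged vector ISA leaf 24H reports the version in EBX[7:0]. */
int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(0x07, 0x01, &eax, &ebx, &ecx, &edx) ||
        !(edx & (1u << 19))) {
        puts("AVX10 not supported");
        return 0;
    }

    if (__get_cpuid_count(0x24, 0x00, &eax, &ebx, &ecx, &edx)) {
        unsigned int version = ebx & 0xFF;   /* AVX10 version, e.g. >= 2 for AVX10.2 */
        printf("AVX10 version %u\n", version);
    }
    return 0;
}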

APX

Core Features and Register Expansions

Intel Advanced Performance Extensions (APX) primarily innovate by doubling the number of general-purpose registers (GPRs) from 16 to 32, adding extended GPRs R16 through R31, each 64 bits wide and accessible only in 64-bit mode. These additional registers are encoded using a new REX2 prefix and reuse encoding space previously allocated to deprecated features such as MPX, enabling compilers to keep more values in registers and reduce memory accesses. The vector registers are unchanged from the existing 32-register ZMM set, which continues to be accessed via the EVEX prefix, preserving compatibility with prior AVX instructions while supporting scalable vector operations. APX introduces specialized instruction qualifiers to enhance efficiency, including NF (No Flags), which suppresses updates to the EFLAGS status flags for arithmetic operations like ADD and SUB, avoiding unnecessary flag computations in the pipeline. Complementing this is ZU (Zero Upper), which zero-extends the result of certain narrow operations (such as SETcc) into the full 64-bit destination register, eliminating explicit zeroing instructions and reducing code size. For control-flow optimization, APX adds CCMP and CTEST instructions, which perform conditional compares and tests based on a source condition code (SCC), updating flags without branches and facilitating if-conversion to minimize misprediction costs. These features collectively support non-branching conditional execution, improving branch-heavy code paths. The core design goals of APX target a 10% reduction in loads and over 20% fewer stores in compiled code, as measured in simulations of the SPEC CPU 2017 Integer benchmark, achieved through the larger register file, which relieves register pressure, and through instructions like PUSH2/POP2 for dual-register transfers. The register expansion also promotes scalar-vector fusion by enabling three-operand forms for legacy scalar integer instructions via the EVEX prefix, allowing seamless integration of scalar and vector computations without intermediate register spills. APX was first announced by Intel in July 2023, with the complete specification published in July 2025 (Revision 7.0). Compiler support for APX is comprehensive as of November 2025; GCC 15 enables APX features, including CCMP/CTEST, NF, and ZU, through the -mapxf flag. These enhancements position APX to improve overall performance and code density in general-purpose workloads.
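As a compilation-level illustration, the kind of branchy scalar kernel that benefits from if-conversion and flag suppression can be built with GCC 15 using -mapxf; the function below is a made-up example, and the actual code generated (CCMP, CFCMOV, NF forms, or none of them) depends entirely on the compiler.

/* Branchy scalar code of the kind APX targets.  Compile e.g. with
 * gcc -O2 -mapxf clamp.c; the compiler may if-convert the comparisons
 * instead of emitting conditional jumps. */
long clamp_add(long x, long y, long lo, long hi)
{
    long sum = x + y;    /* with NF forms, flag updates can be suppressed */
    if (sum < lo)
        sum = lo;        /* candidates for conditional compares/moves     */
    if (sum > hi)
        sum = hi;
    return sum;
}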

Encoding and Instruction Semantics

The Intel Advanced Performance Extensions (APX) introduce new encoding formats to support the expanded set of 32 general-purpose registers (GPRs) and enhanced scalar instruction capabilities in 64-bit mode, utilizing the REX2 prefix and extensions to the EVEX prefix. The REX2 prefix, a 2-byte encoding starting with 0xD5, provides additional bits (R4, X4, B4) to address the extended GPRs (R16–R31), enabling instructions to reference up to 32 registers without legacy conflicts. This is complemented by EVEX map 4, which repurposes bits of the EVEX prefix as APX-specific payload, including controls such as ND (new destination) and NF (no flags) that modify instruction behavior while maintaining compatibility with existing encodings. APX supports three-operand integer operations through the NDD (new data destination) format, allowing instructions such as ADD, SUB, and OR to specify a destination register distinct from the source operands, which reduces the need for temporary registers and lowers micro-op (uop) counts for common arithmetic patterns. For example, ADD R16, R17, R18 encodes the addition of R17 and R18 into R16 using an EVEX prefix with ND=1. The ZU suffix further optimizes code by explicitly zeroing the upper bits of the 64-bit destination register in operations like SETcc.zu or IMUL, eliminating manual clearing steps and improving code density. The conditional forms use EVEX payload bits to encode a source condition code (SCC) in instructions like CCMP and CTEST. In terms of instruction semantics, APX emphasizes fault suppression and efficiency in conditional execution. The CFCMOV (conditionally faulting move) instructions, such as CFCMOVB rv, rv/mv, perform moves based on flag conditions (e.g., below for CFCMOVB) while suppressing memory faults from the source operand when the condition is false, enabling safer if-converted patterns with fewer uops than traditional CMOV sequences. Similarly, PUSH2 and POP2 push or pop two GPRs in a single instruction with 16-byte stack alignment, optimizing register save/restore sequences and cutting uops in function prologues and epilogues. No legacy-mode conflicts arise, as APX features require APX_F to be enumerated via CPUID and the corresponding extended-GPR state to be enabled in XCR0, restricting them exclusively to 64-bit mode. By November 2025, full APX support has been integrated into assemblers, with NASM version 3.00 providing syntax for these encodings, including three-operand forms and ZU suffixes, facilitating developer adoption without custom tooling. This scalar encoding foundation also eases integration with vector code by reducing register pressure in mixed workloads.
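A scalar C model of the fault-suppression behavior described above is sketched below. It assumes the three-operand (new-destination) reading of CFCMOV, in which a fallback value is kept when the condition is false, and models semantics only, not the encoding or the exact architectural destination behavior.

#include <stdbool.h>
#include <stdint.h>

/* Scalar model of the conditionally faulting move: the memory operand is
 * consulted only when the condition is true, so an unmapped or otherwise
 * faulting source address is never touched on the false path. */
static inline uint64_t cfcmov_load(bool cond, const uint64_t *src, uint64_t fallback)
{
    if (cond)
        return *src;     /* load happens only under the condition          */
    return fallback;     /* otherwise no memory access occurs at all       */
}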

Integration with AVX10

The integration of Advanced Performance Extensions (APX) with AVX10 leverages the enhanced VEX (EVEX) prefix to unify scalar and vector operations, making all 32 general-purpose registers (GPRs) available within AVX10's 512-bit vector framework. This synergy lets developers write hybrid code that benefits from the larger scalar register file inside vector-heavy loops, reducing the spills and reloads that commonly occur in mixed scalar-vector workloads. By promoting legacy instructions to EVEX encoding, APX facilitates if-conversion and conditional execution alongside AVX10's scalable vector instructions, minimizing branch mispredictions without requiring separate code paths. Hardware implementations combining APX and AVX10 are confirmed for Intel's Nova Lake processors, expected in the second half of 2026. Feature detection occurs via CPUID leaf 7 (EAX=7, ECX=1), where EDX bit 21 indicates APX_F support and EDX bit 19 indicates AVX10, with leaf 24H enumerating vector-width compatibility for AVX10. Panther Lake, announced in 2025 and expected in 2026, does not incorporate APX or AVX10, focusing instead on prior-generation vector capabilities. Software ecosystems have advanced rapidly to support this integration, with GCC 15 providing full APX and AVX10.2 enablement, including the -mapxf flag and enhanced auto-vectorization for mixed workloads. LLVM/Clang similarly incorporates APX support via the EVEX and REX2 prefixes, and recompilation alone is sufficient for most applications without source modifications. Operating-system kernels such as Linux detect APX via extended CPUID enumeration and manage the additional 16 extended GPRs (R16–R31) through XSAVE, with patches merged in early 2025 to handle context switching and to deprecate conflicting features like MPX. In mixed scalar-vector workloads, APX-AVX10 integration is projected to yield performance uplifts of roughly 10-20% through reduced loads (by about 10%) and stores (by over 20%), as simulated on SPEC CPU 2017 integer benchmarks, by alleviating register pressure in vector loops. These gains stem from fewer instructions overall (about a 10% reduction) and improved power efficiency, though real-world benchmarks remain sparse given the absence of shipping hardware in 2025. Early projections indicate particular benefits for dynamic languages and applications that blend general-purpose and vector processing.
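A minimal detection sketch for this combined capability, using the bit positions given above, might look like the following; a complete check would also confirm via XGETBV that the operating system has enabled the extended-GPR XSAVE state.

#include <cpuid.h>
#include <stdbool.h>

/* CPUID.(EAX=07H,ECX=01H):EDX[21] = APX_F, EDX[19] = AVX10, per the
 * enumeration described in the text. */
static bool cpu_has_apx_and_avx10(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(0x07, 0x01, &eax, &ebx, &ecx, &edx))
        return false;
    return (edx & (1u << 21)) && (edx & (1u << 19));
}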

Applications and Performance

Major Use Cases

Advanced Vector Extensions (AVX) have become integral to accelerating AI and machine learning workloads, particularly through instructions like VNNI and BF16 support in frameworks such as TensorFlow and PyTorch. These extensions enable efficient low-precision computations for inference, where VNNI instructions perform dot-product accumulations on 8-bit integers, providing significant speedups in quantized models compared to prior generations without such instructions. For instance, PyTorch leverages BF16 for mixed-precision training and inference, reducing memory usage while maintaining accuracy in large language models. TensorFlow similarly benefits from these optimizations, with integrated VNNI support enabling faster matrix multiplications in convolutional neural networks. In high-performance computing (HPC) and scientific simulations, AVX extensions enhance dense linear algebra and particle-based modeling. The AVX-512 FMA instructions double the throughput of floating-point multiply-accumulate operations, significantly boosting performance in benchmarks like LINPACK, where AVX-512-equipped systems can achieve up to 2x the floating-point operations per second of AVX2-equipped processors in dense matrix solving. For molecular dynamics simulations, such as those in the NAMD software, AVX-512's gather and scatter instructions facilitate efficient non-contiguous memory access for atomic coordinate updates, accelerating trajectory computations on supported hardware through vectorized force calculations. Multimedia processing and cryptographic applications also rely on AVX for parallel data handling. In video encoding, AVX2's gather instructions improve performance by loading scattered pixel data into vectors for SIMD operations, as seen in codecs like x264, where they reduce encoding time for high-resolution streams by enabling faster motion estimation without contiguous memory layouts. For cryptography, OpenSSL combines AES-NI with AVX-512 IFMA for integer arithmetic in big-number operations, providing faster bulk encryption in TLS handshakes and secure communications than software-only implementations. As of 2025, AVX10 instructions, implemented in server processors since 2024, introduce FP8 datatypes tailored for generative AI, enabling compact representations in transformer models that reduce memory footprint while sustaining inference throughput in Intel's inference tooling for large-scale text generation. Similarly, the upcoming Advanced Performance Extensions (APX), expected in 2026, will expand the general-purpose register file to support more efficient query processing in database engines, accelerating analytical queries on columnar stores by minimizing register spills during predicate evaluations and aggregations. Notable software adoptions include video-conferencing clients that use AVX2 for real-time virtual background effects and noise suppression in video calls, and Blender's Cycles renderer, which uses AVX-optimized kernels for accelerated ray tracing and denoising, improving render times on multi-core systems for complex scenes involving volumetric simulations.
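As a concrete illustration of the VNNI dot-product accumulation mentioned above, the following C sketch uses the AVX-512 VNNI intrinsic; the function name and data layout are illustrative assumptions.

#include <immintrin.h>
#include <stdint.h>

/* One step of an int8 dot product using AVX-512 VNNI (compile e.g. with
 * -mavx512f -mavx512vnni).  Each call folds 64 u8*s8 products into sixteen
 * 32-bit accumulators. */
__m512i vnni_dot_step(__m512i acc, const uint8_t *a, const int8_t *b)
{
    __m512i va = _mm512_loadu_si512(a);   /* 64 unsigned 8-bit activations */
    __m512i vb = _mm512_loadu_si512(b);   /* 64 signed 8-bit weights       */
    /* VPDPBUSD: per 32-bit lane, acc += a0*b0 + a1*b1 + a2*b2 + a3*b3 */
    return _mm512_dpbusd_epi32(acc, va, vb);
}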

Power Consumption and Downclocking Effects

Advanced Vector Extensions (AVX) instructions, particularly AVX-512, significantly increase power consumption compared to earlier SIMD extensions like SSE, leading to thermal constraints and frequency throttling on processors beginning with the Skylake server architectures. AVX-512 workloads can draw up to roughly 2.5 times the power of SSE baselines due to the wider 512-bit vector operations and higher computational density, which raise current demands and heat generation. This power surge triggers frequency throttling to stay within thermal and electrical limits, with L1 throttling reducing clock speeds to about 85% of the base frequency and L2 throttling dropping them further to about 70%, especially in sustained heavy vector computations. AVX2 instructions exhibit a milder effect, consuming approximately 1.5 times the power of SSE and incurring similar but less severe throttling levels. These throttling mechanisms activate based on instruction width and duration, with AVX-512 engaging after brief periods of upper-register usage (e.g., bits 511:256), causing temporary halts of 10-20 microseconds while voltage and frequency are adjusted. In mixed workloads, such as those common in high-performance computing (HPC), this results in unpredictable performance variability, as non-AVX code on the same core experiences collateral downclocking. Benchmarks of sustained AVX-512 operations, like dense matrix multiplications, show 20-30% overall performance degradation from frequency limits, even after accounting for vectorization gains, highlighting the trade-off between peak throughput and sustained efficiency. To mitigate these effects, operating systems and tools provide frequency-management options, such as Linux's msr-tools for adjusting Model-Specific Registers (MSRs) like IA32_TURBO_RATIO_LIMIT to apply AVX offsets, allowing manual tuning of throttling thresholds per instruction level. Compilers, including Intel's oneAPI and LLVM-based tools, support auto-dispatch mechanisms that select narrower vector paths at runtime (e.g., 256-bit AVX2 over 512-bit AVX-512) on throttling-prone hardware, preserving higher frequencies for scalar or lighter SIMD code without sacrificing compatibility. AVX10, implemented starting in 2024 for server processors and expected to reach client lines in 2026, addresses these challenges through its refined EVEX-based converged encoding, which streamlines 512-bit operation across both performance (P) and efficiency (E) cores, reducing overhead and enabling more power-efficient vector execution than legacy AVX-512. By unifying vector lengths up to 512 bits under a single versioned ISA, AVX10 minimizes transition stalls and dynamic power spikes, potentially lowering consumption in vector-heavy tasks through optimized register masking and length control. Complementing this, the Advanced Performance Extensions (APX), expected in 2026, will reduce scalar overhead in hybrid workloads by expanding the general-purpose registers from 16 to 32, cutting loads by 10% and stores by over 20% in compiled code, which translates to lower dynamic power since register operations are cheaper than memory accesses.
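For reference, the MSR interface mentioned above can also be inspected programmatically on Linux through the same /dev/cpu/*/msr driver that msr-tools uses. The sketch below reads a single MSR; the 0x1AD address is assumed here to correspond to the turbo ratio limit register discussed in the text, and the read requires root privileges and a loaded msr kernel module.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Read one MSR from CPU 0 via the Linux msr driver. */
int main(void)
{
    const uint32_t msr = 0x1AD;   /* assumed turbo ratio limit MSR */
    uint64_t value;
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }
    if (pread(fd, &value, sizeof value, msr) != sizeof value) {
        perror("pread");
        close(fd);
        return 1;
    }
    printf("MSR 0x%X = 0x%016llX\n", (unsigned)msr, (unsigned long long)value);
    close(fd);
    return 0;
}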
