Intel Graphics Technology
| Release date | 2010 |
|---|---|
| Manufactured by | Intel and TSMC |
| Designed by | Intel |
| API support | |
| OpenCL | 1.2+ (Depending on version, see capabilities)[1] |
| OpenGL | 2.1+ (Depending on version, see capabilities)[1][2][3] |
| Vulkan | 1.4+ (Depending on version, see capabilities) |
| History | |
| Predecessor | Intel GMA |
| Successor | Intel Xe |
| Support status | Supported |
Intel Graphics Technology (GT)[a] is a series of integrated graphics processors (IGP) designed by Intel and manufactured by Intel and under contract by TSMC. These GPUs are built into the same chip as the central processing unit (CPU) and are included in most Intel-based laptops and desktops. The series was introduced in 2010 as Intel HD Graphics, later renamed Intel UHD Graphics in 2017. It succeeded the earlier Graphics Media Accelerator (GMA) series.
Intel also offers higher-performance variants under the Iris, Iris Pro, and Iris Plus brands, introduced beginning in 2013. These versions include features such as increased execution units and, in some models, embedded memory (eDRAM).
Intel Graphics Technology is sold alongside Intel Arc, the company’s line of discrete graphics cards aimed at gaming and high-performance applications.
History
Before the introduction of Intel HD Graphics, Intel integrated graphics were built into the motherboard's northbridge, as part of the Intel's Hub Architecture. They were known as Intel Extreme Graphics and Intel GMA. As part of the Platform Controller Hub (PCH) design, the northbridge was eliminated and graphics processing was moved to the same die as the central processing unit (CPU).[citation needed]
The previous Intel integrated graphics solution, Intel GMA, had a reputation of lacking performance and features, and therefore was not considered to be a good choice for more demanding graphics applications, such as 3D gaming. The performance increases brought by Intel's HD Graphics made the products competitive with integrated graphics adapters made by its rivals, Nvidia and ATI/AMD.[4] Intel HD Graphics, featuring minimal power consumption that is important in laptops, was capable enough that PC manufacturers often stopped offering discrete graphics options in both low-end and high-end laptop lines, where reduced dimensions and low power consumption are important.[citation needed]
Generations
Intel HD and Iris Graphics are divided into generations, and within each generation into 'tiers' of increasing performance, denoted by the 'GTx' label. Each generation corresponds to the implementation of a Gen[5] graphics microarchitecture with a corresponding GEN instruction set architecture,[6][7][8] beginning with Gen4.[9]
Gen5 architecture
Westmere
In January 2010, Clarkdale and Arrandale processors with Ironlake graphics were released, branded as Celeron, Pentium, or Core with HD Graphics. There was only one specification:[10] 12 execution units, delivering up to 43.2 GFLOPS at 900 MHz. It can decode H.264 1080p video at up to 40 fps.
Its direct predecessor, the GMA X4500, featured 10 EUs at 800 MHz, but it lacked some capabilities.[11]
| Model number | Execution units | Shading units | Base clock (MHz) | Boost clock (MHz) | GFLOPS (FP32) |
|---|---|---|---|---|---|
| HD Graphics | 12 | 24 | 500 | 900 | 24.0–43.2 |
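The GFLOPS figures in these tables follow directly from the shader count and clock: each shading unit retires one fused multiply–add (two floating-point operations) per cycle, so peak FP32 throughput is shading units × 2 × clock. A small illustrative calculation (the helper name is ours, not Intel's):

```python
def peak_gflops(shading_units: int, clock_mhz: float, ops_per_cycle: int = 2) -> float:
    """Peak throughput in GFLOPS, assuming one FMA (2 FLOPs) per shading unit per cycle."""
    return shading_units * ops_per_cycle * clock_mhz / 1000.0

# Ironlake (Gen5): 12 EUs = 24 shading units at the 900 MHz boost clock
print(peak_gflops(24, 900))   # prints 43.2, matching the table above
# At the 500 MHz base clock the same part yields the table's lower bound
print(peak_gflops(24, 500))   # prints 24.0
```

The same formula reproduces the FP32 columns of the later tables as well (e.g. a Skylake GT2 at 1050 MHz: 192 × 2 × 1.05 = 403.2 GFLOPS).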
Gen6 architecture
Sandy Bridge
In January 2011, the Sandy Bridge processors were released, introducing the "second generation" HD Graphics:
| Model number | Tier | Execution units | Boost clock (MHz) | Max GFLOPS (FP16) | Max GFLOPS (FP32) | Max GFLOPS (FP64) |
|---|---|---|---|---|---|---|
| HD Graphics | GT1 | 6 | 1000 | 192 | 96 | 24 |
| HD Graphics 2000 | GT1 | 6 | 1350 | 259 | 129.6 | 32 |
| HD Graphics 3000 | GT2 | 12 | 1350 | 518 | 259.2 | 65 |
| HD Graphics P3000 | GT2 | 12 | 1350 | 518 | 259.2 | 65 |
Sandy Bridge Celeron and Pentium have Intel HD, while Core i3 and above have either HD 2000 or HD 3000. HD Graphics 2000 and 3000 include hardware video encoding and HD postprocessing effects.[citation needed]
Gen7 architecture
Ivy Bridge
On 24 April 2012, Ivy Bridge was released, introducing the "third generation" of Intel's HD graphics:[12]
| Model number | Tier | Execution units | Shading units | Boost clock (MHz) | Max GFLOPS (FP32) |
|---|---|---|---|---|---|
| HD Graphics [Mobile] | GT1 | 6 | 48 | 1050 | 100.8 |
| HD Graphics 2500 | GT1 | 6 | 48 | 1150 | 110.4 |
| HD Graphics 4000 | GT2 | 16 | 128 | 1300 | 332.8 |
| HD Graphics P4000 | GT2 | 16 | 128 | 1300 | 332.8 |
Ivy Bridge Celeron and Pentium have Intel HD, while Core i3 and above have either HD 2500 or HD 4000. HD Graphics 2500 and 4000 include hardware video encoding and HD postprocessing effects.
For some low-power mobile CPUs there is limited video decoding support, while none of the desktop CPUs have this limitation. HD P4000 is featured on the Ivy Bridge E3 Xeon processors with the 12X5 v2 descriptor, and supports unbuffered ECC RAM.
Gen7.5 architecture
[edit]
Haswell
In June 2013, Haswell CPUs were announced, with four tiers of integrated GPUs:
| Model number | Tier | Execution units | Shading units | eDRAM (MB) | Boost clock (MHz) | Max GFLOPS (FP16) | Max GFLOPS (FP32) | Max GFLOPS (FP64) |
|---|---|---|---|---|---|---|---|---|
| Consumer | | | | | | | | |
| HD Graphics | GT1 | 10 | 80 | N/A | 1150 | 384 | 192 | 48 |
| HD Graphics 4200 | GT2 | 20 | 160 | N/A | 850 | 544 | 272 | 68 |
| HD Graphics 4400 | GT2 | 20 | 160 | N/A | 950–1150 | 608–736 | 304–368 | 76–92 |
| HD Graphics 4600 | GT2 | 20 | 160 | N/A | 900–1350 | 576–864 | 288–432 | 72–108 |
| HD Graphics 5000 | GT3 | 40 | 320 | N/A | 1000–1100 | 1280–1408 | 640–704 | 160–176 |
| Iris Graphics 5100 | GT3 | 40 | 320 | N/A | 1100–1200 | 1408–1536 | 704–768 | 176–192 |
| Iris Pro Graphics 5200 | GT3e | 40 | 320 | 128 | 1300 | 1280–1728 | 640–864 | 160–216 |
| Professional | | | | | | | | |
| HD Graphics P4600 | GT2 | 20 | 160 | N/A | 1200–1250 | 768–800 | 384–400 | 96–100 |
| HD Graphics P4700 | GT2 | 20 | 160 | N/A | 1250–1300 | 800–832 | 400–416 | 100–104 |
The 128 MB of eDRAM in the Iris Pro GT3e is in the same package as the CPU, but on a separate die manufactured in a different process. Intel refers to this as a Level 4 cache, available to both CPU and GPU, naming it Crystalwell. The Linux drm/i915 driver is aware and capable of using this eDRAM since kernel version 3.12.[13][14][15]
Gen8 architecture
Broadwell
In November 2013, it was announced that Broadwell-K desktop processors (aimed at enthusiasts) would also carry Iris Pro Graphics.[16]
The following models of integrated GPU are announced for Broadwell processors:[17][better source needed]
| Model number | Tier | Execution units | Shading units | eDRAM (MB) | Boost clock (MHz) | Max GFLOPS (FP32) |
|---|---|---|---|---|---|---|
| Consumer | | | | | | |
| HD Graphics | GT1 | 12 | 96 | — | 850 | 163.2 |
| HD Graphics 5300 | GT2 | 24 | 192 | — | 900 | 345.6 |
| HD Graphics 5500 | GT2 | 24 | 192 | — | 950 | 364.8 |
| HD Graphics 5600 | GT2 | 24 | 192 | — | 1050 | 403.2 |
| HD Graphics 6000 | GT3 | 48 | 384 | — | 1000 | 768 |
| Iris Graphics 6100 | GT3 | 48 | 384 | — | 1100 | 844.8 |
| Iris Pro Graphics 6200 | GT3e | 48 | 384 | 128 | 1150 | 883.2 |
| Professional | | | | | | |
| HD Graphics P5700 | GT2 | 24 | 192 | — | 1000 | 384 |
| Iris Pro Graphics P6300 | GT3e | 48 | 384 | 128 | 1150 | 883.2 |
Braswell
| Model number | CPU model | Tier | Execution units | Clock speed (MHz) |
|---|---|---|---|---|
| HD Graphics 400 | E8000 | GT1 | 12 | 320 |
| HD Graphics 400 | N30xx | GT1 | 12 | 320–600 |
| HD Graphics 400 | N31xx | GT1 | 12 | 320–640 |
| HD Graphics 400 | J3xxx | GT1 | 12 | 320–700 |
| HD Graphics 405 | N37xx | GT1 | 16 | 400–700 |
| HD Graphics 405 | J37xx | GT1 | 18 | 400–740 |
Gen9 architecture
Skylake
The Skylake line of processors, launched in August 2015, retires VGA support while supporting multi-monitor setups of up to three monitors connected via HDMI 1.4, DisplayPort 1.2, or Embedded DisplayPort (eDP) 1.3 interfaces.[18][19]
The following models of integrated GPU are available or announced for the Skylake processors:[20][21][better source needed]
New features: Vulkan 1.3 (1.4 with Mesa) and DirectX 12 Feature Level 12_1
| Model number | Tier | Execution units | Shading units | eDRAM (MB) | Boost clock (MHz) | Max GFLOPS (FP32) |
|---|---|---|---|---|---|---|
| Consumer | | | | | | |
| HD Graphics 510 | GT1 | 12 | 96 | — | 1050 | 201.6 |
| HD Graphics 515 | GT2 | 24 | 192 | — | 1000 | 384 |
| HD Graphics 520 | GT2 | 24 | 192 | — | 1050 | 403.2 |
| HD Graphics 530 | GT2 | 24 | 192 | — | 1150[18] | 441.6 |
| Iris Graphics 540 | GT3e | 48 | 384 | 64 | 1050 | 806.4 |
| Iris Graphics 550 | GT3e | 48 | 384 | 64 | 1100 | 844.8 |
| Iris Pro Graphics 580 | GT4e | 72 | 576 | 128 | 1000 | 1152 |
| Professional | | | | | | |
| HD Graphics P530 | GT2 | 24 | 192 | — | 1150 | 441.6 |
| Iris Pro Graphics P555 | GT3e | 48 | 384 | 128 | 1000[22] | 768 |
| Iris Pro Graphics P580 | GT4e | 72 | 576 | 128 | 1000 | 1152 |
Apollo Lake
The Apollo Lake line of processors was launched in August 2016.
| Model number | CPU model | Tier | Execution units | Shading units | Clock speed (MHz) |
|---|---|---|---|---|---|
| HD Graphics 500 | E3930 | GT1 | 12 | 96 | 400–550 |
| HD Graphics 500 | E3940 | GT1 | 12 | 96 | 400–600 |
| HD Graphics 500 | N3350 | GT1 | 12 | 96 | 200–650 |
| HD Graphics 500 | N3450 | GT1 | 12 | 96 | 200–700 |
| HD Graphics 500 | J3355 | GT1 | 12 | 96 | 250–700 |
| HD Graphics 500 | J3455 | GT1 | 12 | 96 | 250–750 |
| HD Graphics 505 | E3950 | GT1 | 18 | 144 | 500–650 |
| HD Graphics 505 | N4200 | GT1 | 18 | 144 | 200–750 |
| HD Graphics 505 | J4205 | GT1 | 18 | 144 | 250–800 |
Gen9.5 architecture
Kaby Lake
The Kaby Lake line of processors was introduced in August 2016. New features: speed increases, support for 4K UHD "premium" (DRM-protected) streaming services, and a media engine with full hardware acceleration of 8- and 10-bit HEVC and VP9 decode.[23][24]
| Model number | Tier | Execution units | Shading units | eDRAM (MB) | Base clock (MHz) | Boost clock (MHz) | Max GFLOPS (FP32) | Used in |
|---|---|---|---|---|---|---|---|---|
| Consumer | | | | | | | | |
| HD Graphics 610 | GT1 | 12 | 96 | — | 300–350 | 900–1100 | 172.8–211.2 | Desktop Celeron, Desktop Pentium G4560, i3-7101 |
| HD Graphics 615 | GT2 | 24 | 192 | — | 300 | 900–1050 | 345.6–403.2 | m3-7Y30/32, i5-7Y54/57, i7-7Y75, Pentium 4415Y |
| HD Graphics 620 | GT2 | 24 | 192 | — | 300 | 1000–1050 | 384–403.2 | i3-7100U, i5-7200U, i5-7300U, i7-7500U, i7-7600U |
| HD Graphics 630 | GT2 | 24 | 192 | — | 350 | 1000–1150 | 384–441.6 | Desktop Pentium G46**, i3, i5 and i7, and Laptop H-series i3, i5 and i7 |
| Iris Plus Graphics 640 | GT3e | 48 | 384 | 64 | 300 | 950–1050 | 729.6–806.4 | i5-7260U, i5-7360U, i7-7560U, i7-7660U |
| Iris Plus Graphics 650 | GT3e | 48 | 384 | 64 | 300 | 1050–1150 | 806.4–883.2 | i3-7167U, i5-7267U, i5-7287U, i7-7567U |
| Professional | | | | | | | | |
| HD Graphics P630 | GT2 | 24 | 192 | — | 350 | 1000–1150 | 384–441.6 | Xeon E3-**** v6 |
Kaby Lake Refresh / Amber Lake / Coffee Lake / Coffee Lake Refresh / Whiskey Lake / Comet Lake
The Kaby Lake Refresh line of processors was introduced in October 2017. New features: HDCP 2.2 support.[25]
| Model number | Tier | Execution units | Shading units | eDRAM (MB) | Base clock (MHz) | Boost clock (MHz) | Max GFLOPS (FP32) | Used in |
|---|---|---|---|---|---|---|---|---|
| Consumer | | | | | | | | |
| UHD Graphics 610 | GT1 | 12 | 96 | — | 350 | 1050 | 201.6 | Pentium Gold G54**, Celeron G49**, i5-10200H |
| UHD Graphics 615 | GT2 | 24 | 192 | — | 300 | 900–1050 | 345.6–403.2 | i7-8500Y, i5-8200Y, m3-8100Y |
| UHD Graphics 617 | GT2 | 24 | 192 | — | 300 | 1050 | 403.2 | i7-8510Y, i5-8310Y, i5-8210Y |
| UHD Graphics 620 | GT2 | 24 | 192 | — | 300 | 1000–1150 | 422.4–441.6 | i3-8130U, i5-8250U, i5-8350U, i7-8550U, i7-8650U, i3-8145U, i5-8265U, i5-8365U, i7-8565U, i7-8665U, i3-10110U, i5-10210U, i5-10310U, i7-10510U, i7-10610U, i7-10810U |
| UHD Graphics 630 | GT2 | 23[26] | 184 | — | 350 | 1100–1150 | 404.8–423.2 | i3-8350K, i3-8100 with stepping B0 |
| UHD Graphics 630 | GT2 | 24 | 192 | — | 350 | 1050–1250 | 403.2–480 | i9, i7, i5, i3, Pentium Gold G56**, G55**, i5-10300H, i5-10400H, i5-10500H, i7-10750H, i7-10850H, i7-10870H, i7-10875H, i9-10885H, i9-10980HK |
| Iris Plus Graphics 645 | GT3e | 48 | 384 | 128 | 300 | 1050–1150 | 806.4–883.2 | i7-8557U, i5-8257U |
| Iris Plus Graphics 655 | GT3e | 48 | 384 | 128 | 300 | 1050–1200 | 806.4–921.6 | i7-8559U, i5-8269U, i5-8259U, i3-8109U |
| Professional | | | | | | | | |
| UHD Graphics P630 | GT2 | 24 | 192 | — | 350 | 1100–1200 | 422.4–460.8 | Xeon E 21**G, 21**M, 22**G, 22**M, Xeon W-108**M |
Gemini Lake/Gemini Lake Refresh
New features: HDMI 2.0 support and a VP9 10-bit Profile 2 hardware decoder.[27]
| Model number | Tier | Execution units | Shading units | CPU model | Clock speed (MHz) | GFLOPS (FP32) |
|---|---|---|---|---|---|---|
| UHD Graphics 600 | GT1 | 12 | 96 | N4000 | 200–650 | 38.4–124.8 |
| UHD Graphics 600 | GT1 | 12 | 96 | N4100 | 200–700 | 38.4–134.4 |
| UHD Graphics 600 | GT1 | 12 | 96 | J4005 | 250–700 | 48.0–134.4 |
| UHD Graphics 600 | GT1 | 12 | 96 | J4105 | 250–750 | 48.0–144.0 |
| UHD Graphics 600 | GT1 | 12 | 96 | J4125 | 250–750 | 48.0–144.0 |
| UHD Graphics 605 | GT1.5 | 18 | 144 | N5000 | 200–750 | 57.6–216 |
| UHD Graphics 605 | GT1.5 | 18 | 144 | J5005 | 250–800 | 72.0–230.4 |
Gen11 architecture
Ice Lake
New features: 10 nm Gen 11 GPU microarchitecture, two HEVC 10-bit encode pipelines, three 4K display pipelines (or 2× 5K60, 1× 4K120), variable rate shading (VRS),[28][29][30] and integer scaling.[31]
While the microarchitecture itself continues to support double-precision floating point, as previous versions did, the mobile configurations omit the feature, so on those parts FP64 is supported only through emulation.[32]
| Name | Tier | Execution units | Shading units | Base clock (MHz) | Boost clock (MHz) | GFLOPS (FP16) | GFLOPS (FP32) | GFLOPS (FP64) | Used in |
|---|---|---|---|---|---|---|---|---|---|
| Consumer | | | | | | | | | |
| UHD Graphics | G1 | 32 | 256 | 300 | 900–1050 | 921.6–1075.2 | 460.8–537.6 | 115.2 | Core i3-10**G1, i5-10**G1 |
| Iris Plus Graphics | G4 | 48 | 384 | 300 | 900–1050 | 1382.4–1612.8[33] | 691.2–806.4 | 96–202 | Core i3-10**G4, i5-10**G4 |
| Iris Plus Graphics | G7 | 64 | 512 | 300 | 1050–1100 | 2150.4–2252.8[33] | 1075.2–1126.4 | 128–282 | Core i5-10**G7, i7-10**G7 |
Xe-LP architecture (Gen12)
| Model | Process | Execution units | Shading units | Max boost clock (MHz) | FP16 (GFLOPS) | FP32 (GFLOPS) | FP64 (GFLOPS) | INT8 | Notes |
|---|---|---|---|---|---|---|---|---|---|
| Intel UHD Graphics 730 | Intel 14++ nm | 24 | 192 | 1200–1300 | 922–998 | 461–499 | — | 1843–1997 | Used in Rocket Lake-S |
| Intel UHD Graphics 750 | Intel 14++ nm | 32 | 256 | 1200–1300 | 1228–1332 | 614–666 | — | 2457–2662 | Used in Rocket Lake-S |
| Intel UHD Graphics P750 | Intel 14++ nm | 32 | 256 | 1300 | 1332 | 666 | — | 2662 | Used in Xeon W-1300 series |
| Intel UHD Graphics 710 | Intel 7 (previously 10ESF) | 16 | 128 | 1300–1350 | 666–692 | 333–346 | — | 1331–1382 | Used in Alder Lake-S/HX & Raptor Lake-S/HX/S-R/HX-R |
| Intel UHD Graphics 730 | Intel 7 (previously 10ESF) | 24 | 192 | 1400–1450 | 1076–1114 | 538–557 | — | 2150–2227 | Used in Alder Lake-S/HX & Raptor Lake-S/HX/S-R/HX-R |
| Intel UHD Graphics 770 | Intel 7 (previously 10ESF) | 32 | 256 | 1450–1550 | 1484–1588 | 742–794 | — | 2970–3174 | Used in Alder Lake-S/HX & Raptor Lake-S/HX/S-R/HX-R |
| Intel UHD Graphics for 11th Gen Intel Processors | Intel 10SF | 32 | 256 | 1400–1450 | 1434–1484 | 717–742 | — | 2867–2970 | Used in Tiger Lake-H |
| Intel UHD Graphics for 11th Gen Intel Processors G4 | Intel 10SF | 48 | 384 | 1100–1250 | 1690–1920 | 845–960 | — | 3379–3840 | Used in Tiger Lake-U |
| Iris Xe Graphics G7 | Intel 10SF | 80 | 640 | 1100–1300 | 2816–3328 | 1408–1664 | — | 5632–6656 | Used in Tiger Lake-U |
| Iris Xe Graphics G7 | Intel 10SF | 96 | 768 | 1050–1450 | 3379–4454 | 1690–2227 | — | 6758–8909 | Used in Tiger Lake-U |
| Intel UHD Graphics for 12th/13th Gen Intel Processors | Intel 7 (previously 10ESF) | 48 | 384 | 700–1200 | 1075–1843 | 538–922 | — | 2151–3686 | Used in Alder Lake-H/P/U & Raptor Lake-H/P/U |
| Intel UHD Graphics for 12th/13th Gen Intel Processors, Intel Graphics[34] | Intel 7 (previously 10ESF) | 64 | 512 | 850–1400 | 1741–2867 | 870–1434 | — | 3482–5734 | Used in Alder Lake-H/P/U & Raptor Lake-H/P/U |
| Iris Xe Graphics, Intel Graphics[35] | Intel 7 (previously 10ESF) | 80 | 640 | 900–1400 | 2304–3584 | 1152–1792 | — | 4608–7168 | Used in Alder Lake-H/P/U & Raptor Lake-H/P/U |
| Iris Xe Graphics, Intel Graphics[36] | Intel 7 (previously 10ESF) | 96 | 768 | 900–1450 | 2765–4454 | 1382–2227 | — | 5530–8909 | Used in Alder Lake-H/P/U & Raptor Lake-H/P/U |
These GPUs are based on the Intel Xe-LP microarchitecture, the low-power variant of the Intel Xe GPU architecture,[37] also known as Gen 12.[38][39] New features include Sampler Feedback,[40] Dual Queue Support,[40] DirectX 12 View Instancing Tier 2,[40] and AV1 8-bit and 10-bit fixed-function hardware decoding.[41] Support for FP64 was removed.[42]
Arc Alchemist Tile GPU (Gen12.7)
Intel Meteor Lake and Arrow Lake[43] use the Intel Arc Alchemist Tile GPU microarchitecture.[44][45]
New features: DirectX 12 Ultimate Feature Level 12_2 support, an 8K 10-bit AV1 hardware encoder, and native HDMI 2.1 48 Gbps support.[46]
Meteor Lake
| Model | Execution units | Shading units | Max boost clock (MHz) | GFLOPS (FP32) |
|---|---|---|---|---|
| Arc Graphics 48EU Mobile | 48 | 384 | 1800 | 1382 |
| Arc Graphics 64EU Mobile | 64 | 512 | 1750–2000 | 1792 |
| Arc Graphics 112EU Mobile | 112 | 896 | 2200 | 3942 |
| Arc Graphics 128EU Mobile | 128 | 1024 | 2200–2350 | 4608 |
Arc Battlemage Tile GPU
Intel Lunar Lake[43] uses the Intel Arc Battlemage Tile GPU microarchitecture.[47]
Features
Intel Insider
Beginning with Sandy Bridge, the graphics processors include a form of digital copy protection and digital rights management (DRM) called Intel Insider, which allows decryption of protected media within the processor.[48][49] Previously, a similar technology called Protected Audio Video Path (PAVP) served this role.
HDCP
Intel Graphics Technology supports HDCP, but actual HDCP support depends on the computer's motherboard.[citation needed]
Intel Quick Sync Video
Intel Quick Sync Video is Intel's hardware video encoding and decoding technology, integrated into some Intel CPUs. The name "Quick Sync" refers to the use case of quickly transcoding ("syncing") a video from, for example, a DVD or Blu-ray Disc to a format appropriate for, for example, a smartphone. Quick Sync was introduced with the Gen 6 graphics in Sandy Bridge microprocessors on 9 January 2011.
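In practice, applications reach Quick Sync through APIs such as VA-API on Linux, and tools like FFmpeg expose it via dedicated codec names. As a hedged sketch (this assumes an FFmpeg build with Quick Sync support; the file names are placeholders, and the exact flags depend on the FFmpeg version):

```python
def qsv_transcode_cmd(src: str, dst: str, quality: int = 25) -> list[str]:
    """Build an FFmpeg command line that decodes H.264 and encodes HEVC
    on the Quick Sync engine, keeping the video on the GPU where possible."""
    return [
        "ffmpeg",
        "-hwaccel", "qsv",          # hardware-accelerated decode path
        "-c:v", "h264_qsv",         # Quick Sync H.264 decoder
        "-i", src,
        "-c:v", "hevc_qsv",         # Quick Sync HEVC encoder
        "-global_quality", str(quality),
        dst,
    ]

# Typical use (requires Quick Sync hardware and a QSV-enabled FFmpeg):
# import subprocess
# subprocess.run(qsv_transcode_cmd("input.mp4", "output.mp4"), check=True)
```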
Graphics Virtualization Technology
Graphics Virtualization Technology (GVT) was announced on 1 January 2014 and introduced at the same time as Intel Iris Pro. Intel integrated GPUs support the following sharing methods:[50][51]
- Direct passthrough (GVT-d): the GPU is available for a single virtual machine without sharing with other machines
- Paravirtualized API forwarding (GVT-s): the GPU is shared by multiple virtual machines using a virtual graphics driver; few supported graphics APIs (OpenGL, DirectX), no support for GPGPU
- Full GPU virtualization (GVT-g): the GPU is shared by multiple virtual machines (and by the host machine) on a time-sharing basis using a native graphics driver; similar to AMD's MxGPU and Nvidia's vGPU, which are available only on professional line cards (Radeon Pro and Nvidia Quadro)
- Full GPU virtualization in hardware (SR-IOV): the GPU can be partitioned and used or shared by multiple virtual machines and the host with support built into hardware, unlike GVT-g, which does this in software (in the driver).[52]
Gen9 graphics (powering 6th- through 9th-generation Intel processors) is the last generation to support the software-based vGPU solution GVT-g (Intel Graphics Virtualization Technology -g). SR-IOV (Single Root I/O Virtualization) is supported only on platforms with 11th-generation Intel Core "G" processors (formerly Tiger Lake) or newer. This leaves Rocket Lake (11th-generation desktop processors) without GVT-g or SR-IOV, and thus without full GPU virtualization support.[53] Starting with 12th-generation Intel Core processors, both desktop and laptop Intel CPUs support SR-IOV.
Multiple monitors
Ivy Bridge
HD 2500 and HD 4000 GPUs in Ivy Bridge CPUs are advertised as supporting three active monitors, but this only works if two of the monitors are configured identically, which covers many[54] but not all three-monitor configurations. The reason is that the chipsets include only two phase-locked loops (PLLs) for generating the pixel clocks that time the data being transferred to the displays.[55]
Therefore, three simultaneously active monitors can only be achieved when at least two of them share the same pixel clock, such as:
- Using two or three DisplayPort connections, as they require only a single pixel clock for all connections.[56] Passive adapters from DisplayPort to some other connector do not count as a DisplayPort connection, as they rely on the chipset being able to emit a non-DisplayPort signal through the DisplayPort connector. Active adapters that contain additional logic to convert the DisplayPort signal to some other format count as a DisplayPort connection.
- Using two non-DisplayPort connections of the same connection type (for example, two HDMI connections) and the same clock frequency (like when connected to two identical monitors at the same resolution), so that a single unique pixel clock can be shared between both connections.[54]
Another possible three-monitor solution uses the Embedded DisplayPort on a mobile CPU (which does not use a chipset PLL at all) along with any two chipset outputs.[56]
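The PLL constraint above can be formalized: all DisplayPort outputs share one clock source, while every distinct non-DisplayPort (connector type, pixel clock) pair consumes a PLL of its own. A simplified model of that rule (it ignores the eDP special case on mobile CPUs, and the function name is ours):

```python
def ivy_bridge_three_monitors_ok(outputs) -> bool:
    """Check whether a set of outputs fits Ivy Bridge's two chipset PLLs.

    `outputs` is a list of (connector, pixel_clock_khz) tuples. All
    DisplayPort connections share a single clock; each distinct
    (non-DP connector, pixel clock) pair needs its own PLL.
    """
    clocks_needed = set()
    for connector, clock in outputs:
        if connector == "DisplayPort":
            clocks_needed.add("DisplayPort")  # one shared clock for all DP
        else:
            clocks_needed.add((connector, clock))
    return len(clocks_needed) <= 2

# Two identical HDMI monitors plus one DVI monitor: fits in two PLLs
print(ivy_bridge_three_monitors_ok(
    [("HDMI", 148500), ("HDMI", 148500), ("DVI", 154000)]))   # prints True
# Three HDMI monitors at different pixel clocks: needs three PLLs
print(ivy_bridge_three_monitors_ok(
    [("HDMI", 148500), ("HDMI", 162000), ("HDMI", 154000)]))  # prints False
```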
Haswell
ASRock Z87- and H87-based motherboards support three displays simultaneously.[57] Asus H87-based motherboards are also advertised as supporting three independent monitors at once.[58]
Capabilities (GPU hardware)
| Microarchitecture – Socket | Brand | Graphics | Vulkan | OpenGL | Direct3D | HLSL shader model | OpenCL | ||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Core | Xeon | Pentium | Celeron | Gen | Graphics brand | Linux | Windows | Linux | Windows | Linux | Windows | Linux | Windows | ||
| Westmere – 1156 | i3/5/7-xxx | — | (G/P)6000 and U5000 | P4000 and U3000 | 5.5th[59] | HD | — | 2.1 | — | 10.1[1] | 4.1 | — | |||
| Sandy Bridge – 1155 | i3/5/7-2000 | E3-1200 | (B)900, (G)800 and (G)600 | (B)800, (B)700, G500 and G400 | 6th[60] | HD 3000 and 2000 | 3.3[61] | 3.1[1] | |||||||
| Ivy Bridge – 1155 | i3/5/7-3000 | E3-1200 v2 | (G)2000 and A1018 | G1600, 1000 and 900 | 7th[62][63] | HD 4000 and 2500 | 1.2[64] | — | 4.2[65] | 4.0[1][66] | 11.0 | 5.0 | 1.2 (Beignet) | 1.2[67] | |
| Bay Trail – SoCs | — | — | J2000, N3500 and A1020 | J1000 and N2000 | HD Graphics (Bay Trail)[68] | ||||||||||
| Haswell – 1150 | i3/5/7-4000 | E3-1200 v3 | (G)3000 | G1800 and 2000 | 7.5th[69] | HD 5000, 4600, 4400 and 4200; Iris Pro 5200, Iris 5000 and 5100 | 4.6[70] | 4.3[71] | 12 (fl 11_1)[72] | ||||||
| Broadwell – 1150 | i3/5/7-5000 | E3-1200 v4 | 3800 | 3700 and 3200 | 8th[73] | Iris Pro 6200[74] and P6300, Iris 6100[75] and HD 6000,[76] P5700, 5600,[77] 5500,[78] 5300[79] and HD Graphics (Broadwell)[80] | 1.3[81] | — | 4.6[82] | 4.4[1] | 11[83] | 1.2 (Beignet) / 3.0 (Neo)[84] | 2.0 | ||
| Braswell – SoCs | — | — | N3700 | N3000, N3050, N3150 | HD Graphics (Braswell),[85] based on Broadwell graphics | 1.2 (Beignet) | |||||||||
| — | — | (J/N)3710 | (J/N)3010, 3060, 3160 | (rebranded) HD Graphics 400, 405 | |||||||||||
| Skylake – 1151 | i3/5/7-6000 | E3-1200 v5, E3-1500 v5 | (G)4000 | 3900 and 3800 | 9th | HD 510, 515, 520, 530 and 535; Iris 540 and 550; Iris Pro 580 | 1.4 Mesa 25.0[86] | 1.3[87] | 4.6[88] | 12 (fl 12_1) | 6.0 | 2.0 (Beignet)[89] / 3.0 (Neo)[84] |||
| Apollo Lake – SoCs | — | — | (J/N)4xxx | (J/N)3xxx | HD Graphics 500, 505 | ||||||||||
| Gemini Lake – SoCs | — | — | Silver (J/N)5xxx | (J/N)4xxx | 9.5th[90] | UHD 600, 605 | |||||||||
| Kaby Lake – 1151 | m3/i3/5/7-7000 | E3-1200 v6, E3-1500 v6 | (G)4000 | (G)3900 and 3800 | HD 610, 615, 620, 630, Iris Plus 640, Iris Plus 650 | 2.0 (Beignet)[89] / 3.0 (Neo)[84] | 2.1[91] | ||||||||
| Kaby Lake Refresh – 1151 | i5/7-8000U | — | — | — | UHD 620 | ||||||||||
| Whiskey Lake – 1151 | i3/5/7-8000U | — | — | — | |||||||||||
| Coffee Lake – 1151 | i3/5/7/9-8000, i3/5/7/9-9000 | E-2100, E-2200 | Gold (G)5xxx | (G)49xx | UHD 630, Iris Plus 655 |
| Ice Lake – 1526 | i3/5/7-10xx(N)Gx | — | — | — | 11th | UHD, Iris Plus | 3.0 (Neo)[84] | ||||||||
| Tiger Lake | i3/5/7-11xx(N)Gx | W-11xxxM | Gold (G)7xxx | (G)6xxx | 12th | Iris Xe, UHD | 1.4[92] | 4.6[93] | 3.0 (Neo)[84] | 3.0 (Neo) | |||||
OpenCL 2.1 and 2.2 are possible on OpenCL 2.0-capable hardware (Broadwell and later) through software updates.[94]
Support in Mesa is provided by two Gallium3D-style drivers, with the Iris driver supporting Broadwell hardware and later,[95] while the Crocus driver supports Haswell and earlier.[96] The classic Mesa i965 driver was removed in Mesa 22.0, although it would continue to see further maintenance as part of the Amber branch.[97]
The newer OpenCL driver is Mesa's RustiCL; written in Rust, it is OpenCL 3.0 conformant for Intel Xe Graphics as of Mesa 22.3. Intel Broadwell and later will also be conformant to OpenCL 3.0 with many 2.x features, while the target for Ivy Bridge and Haswell is OpenCL 1.2. The actual development state is tracked at mesamatrix.
The NEO compute runtime driver supports OpenCL 3.0 (with 1.2, 2.0, and 2.1 included) for Broadwell and later, and the Level Zero API 1.3 for Skylake and later.[98]
All GVT virtualization methods are supported since the Broadwell processor family with KVM[99] and Xen.[100]
Capabilities (GPU video acceleration)
Intel developed a dedicated SIP core, branded Intel Quick Sync Video, which implements multiple video decompression and compression algorithms. Some are implemented completely, some only partially.
Hardware-accelerated algorithms
| CPU microarchitecture | Steps | | H.265 (HEVC) | H.264 (MPEG-4 AVC) | H.262 (MPEG-2) | VC-1/WMV9 | JPEG/MJPEG | VP8 | VP9 | AV1 |
|---|---|---|---|---|---|---|---|---|---|---|
| Westmere[101] | Decode | — | ✘ | ✓ | ✓ | ✓ | ✘ | ✘ | ✘ | ✘ |
| Westmere[101] | Encode | — | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
| Sandy Bridge | Decode | Profiles | ✘ | ConstrainedBaseline, Main, High, StereoHigh | Simple, Main | Simple, Main, Advanced | ✘ | ✘ | ✘ | ✘ |
| Levels | ||||||||||
| Max. resolution | 2048x2048 | |||||||||
| Encode | Profiles | ConstrainedBaseline, Main, High | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ||
| Levels | ||||||||||
| Max. resolution | ||||||||||
| Ivy Bridge | Decode | Profiles | ✘ | ConstrainedBaseline, Main, High, StereoHigh | Simple, Main | Simple, Main, Advanced | Baseline | ✘ | ✘ | ✘ |
| Levels | ||||||||||
| Max. resolution | ||||||||||
| Encode | Profiles | ConstrainedBaseline, Main, High | Simple, Main | ✘ | ✘ | ✘ | ✘ | ✘ | ||
| Levels | ||||||||||
| Max. resolution | ||||||||||
| Haswell | Decode | Profiles | Partial 8-bit[102] | Main, High, SHP, MHP | Main | Simple, Main, Advanced | Baseline | ✘ | ✘ | ✘ |
| Levels | 4.1 | Main, High | High, 3 | |||||||
| Max. resolution | 1080/60p | 1080/60p | 16k×16k | |||||||
| Encode | Profiles | ✘ | Main, High | Main | ✘ | Baseline | ✘ | ✘ | ✘ | |
| Levels | 4.1 | High | - | |||||||
| Max. resolution | 1080/60p | 1080/60p | 16k×16k | |||||||
| Broadwell[103][104] | Decode | Profiles | Partial 8-bit & 10-bit[102] | Main | Simple, Main, Advanced | 0 | Partial[102] | ✘ | ||
| Levels | Main, High | High, 3 | Unified | |||||||
| Max. resolution | 1080/60p | 1080p | ||||||||
| Encode | Profiles | ✘ | Main | - | ✘ | ✘ | ✘ | ✘ | ||
| Levels | Main, High | |||||||||
| Max. resolution | 1080/60p | |||||||||
| Skylake[105] | Decode | Profiles | Main | Main, High, SHP, MHP | Main | Simple, Main, Advanced | Baseline | 0 | 0 | ✘ |
| Levels | 5.2 | 5.2 | Main, High | High, 3 | Unified | Unified | Unified | |||
| Max. resolution | 2160/60p | 2160/60p | 1080/60p | 3840×3840 | 16k×16k | 1080p | 4k/24p@15Mbit/s | |||
| Encode | Profiles | Main | Main, High | Main | ✘ | Baseline | Unified | ✘ | ✘ | |
| Levels | 5.2 | 5.2 | High | - | Unified | |||||
| Max. resolution | 2160/60p | 2160/60p | 1080/60p | 16k×16k | - | |||||
| Kaby Lake[106] Coffee Lake[107] Coffee Lake Refresh[107] Whiskey Lake[108] Ice Lake[109] Comet Lake[110] |
Decode | Profiles | Main, Main 10 | Main, High, MVC, Stereo | Main | Simple, Main, Advanced | Baseline | 0 | 0, 1, 2 | ✘ |
| Levels | 5.2 | 5.2 | Main, High | Simple, High, 3 | Unified | Unified | Unified | |||
| Max. resolution | 2160/60p | 1080/60p | 3840×3840 | 16k×16k | 1080p | |||||
| Encode | Profiles | Main | Main, High | Main | ✘ | Baseline | Unified | 8-bit 4:2:0 BT.2020 support may be obtained via pre/post-processing | ✘ | |
| Levels | 5.2 | 5.2 | High | - | Unified | |||||
| Max. resolution | 2160/60p | 2160/60p | 1080/60p | 16k×16k | - | |||||
| Tiger Lake[111] Rocket Lake |
Decode | Profiles | up to Main 4:4:4 12 | Main, High | Main | Simple, Main, Advanced | Baseline | ✘ | 0, 1, 2, partially 3 | 0 |
| Levels | 6.2 | 5.2 | Main, High | Simple, High, 3 | Unified | Unified | 3 | |||
| Max. resolution | 4320/60p | 2160/60p | 1080/60p | 3840×3840 | 16k×16k | 4320/60p | 4K×2K 16K×16K (still picture) | |||
| Encode | Profiles | up to Main 4:4:4 10 | Main, High | Main | ✘ | Baseline | ✘ | 0, 1, 2, 3 | ✘ | |
| Levels | 5.1 | 5.1 | High | - | - | |||||
| Max. resolution | 4320p | 2160/60p | 1080/60p | 16k×16k | 4320p | |||||
| Alder Lake[112] Raptor Lake[113] |
Decode | Profiles | up to Main 4:4:4 12 | Main, High | Main | Simple, Main, Advanced | Baseline | ✘ | 0, 1, 2, 3 | 0 |
| Levels | 6.1 | 5.2 | Main, High | Simple, High, 3 | Unified | 6.1 | 6.1 | |||
| Max. resolution | 4320/60p | 2160/60p | 1080/60p | 3840×3840 | 16k×16k | 4320/60p | 4320/60p 16K×16K (still picture) | |||
| Encode | Profiles | up to Main 4:4:4 10 | Main, High | Main | ✘ | Baseline | ✘ | 0, 1, 2, 3 | ✘ | |
| Levels | 5.1 | 5.1 | High | - | - | |||||
| Max. resolution | 4320p | 2160/60p | 1080/60p | 16k×16k | 4320p | |||||
| Meteor Lake[114] | Decode | Profiles | up to Main 4:4:4 12 | Main, High, Constrained Baseline | Main | Baseline | 0, 1, 2, 3 | Main 4:2:0 8/10 | ||
| Levels | 6.1 | 5.2 | Main, High | Unified | Unified | 6.1 | ||||
| Max. resolution | 4320/60p | 2160p | 1080p | 16k×16k | 4320p/60p 16K×4K | 4320/60p 16K×16K (still picture) | ||||
| Encode | Profiles | up to Main 4:4:4 10 | Main, High, Constrained Baseline | Baseline | 0, 1, 2, 3 | Main 4:2:0 8/10 | ||||
| Levels | 6.1 | 5.2 | - | - | 6 | |||||
| Max. resolution | 4320p/60p | 2160/60p | 16k×16k | 4320p/60p 16K×12K | 4320p/30p | |||||
Intel Pentium and Celeron family
| Intel Pentium & Celeron family | VED (Video Encode / Decode) | | H.265/HEVC | H.264/MPEG-4 AVC | H.262 (MPEG-2) | VC-1/WMV9 | JPEG/MJPEG | VP8 | VP9 |
|---|---|---|---|---|---|---|---|---|---|
| Braswell[116][b][c][d] | Decode | Profile | Main | CBP, Main, High | Main, High | Advanced | 850 MP/s 4:2:0, 640 MP/s 4:2:2, 420 MP/s 4:4:4 | ||
| Level | 5 | 5.2 | High | 4 | |||||
| Max. resolution | 4k×2k/30p | 4k×2k/60p | 1080/60p | 1080/60p | 4k×2k/60p | 1080/30p | |||
| Encode | Profile | ✘ | CBP, Main, High | Main, High | ✘ | 850 MP/s 4:2:0, 640 MP/s 4:2:2, 420 MP/s 4:4:4 | Up to 720p30 | ||
| Level | 5.1 | High | |||||||
| Max. resolution | 4k×2k/30p | 1080/30p | 4k×2k/30p | ||||||
| Apollo Lake[117] | Decode | Profile | Main, Main 10 | CBP, Main, High | Main, High | Advanced | 1067 MP/s 4:2:0, 800 MP/s 4:2:2, 533 MP/s 4:4:4 | 0 | |
| Level | 5.1 | 5.2 | High | 4 | |||||
| Max. resolution | 1080p240, 4k×2k/60p | 1080/60p | 1080/60p | ||||||
| Encode | Profile | Main | CBP, Main, High | ✘ | ✘ | 1067 MP/s 4:2:0, 800 MP/s 4:2:2, 533 MP/s 4:4:4 | |||
| Level | 5 | 5.2 | |||||||
| Max. resolution | 4k×2k/30p | 1080p240, 4k×2k/60p | 4k×2k/30p | 480p30 (SW only) | |||||
| Gemini Lake[118] | Decode | Profile | Main, Main 10 | CBP, Main, High | Main, High | Advanced | 1067 MP/s 4:2:0, 800 MP/s 4:2:2, 533 MP/s 4:4:4 | 0, 2 | |
| Level | 5.1 | 5.2 | High | 4 | |||||
| Max. resolution | 1080p240, 4k×2k/60p | 1080/60p | 1080/60p | ||||||
| Encode | Profile | Main | CBP, Main, High | Main, High | ✘ | 1067 MP/s 4:2:0, 800 MP/s 4:2:2, 533 MP/s 4:4:4 | 0 | ||
| Level | 4 | 5.2 | High | ||||||
| Max. resolution | 4k×2k/30p | 1080p240, 4k×2k/60p | 1080/60p | 4k×2k/30p | |||||
Intel Atom family
| Intel Atom family | VED (Video Encode / Decode) | | H.265/HEVC | H.264/MPEG-4 AVC | MPEG-4 Visual | H.263 | H.262 (MPEG-2) | VC-1/WMV9 | JPEG/MJPEG | VP8 | VP9 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Bay Trail-T | Decode[119] | Profile | ✘ | Main, High | Main | 0 | ✘ | ||||
| Level | 5.1 | High | |||||||||
| Max. resolution | 4k×2k/30p | 1080/60p | 4k×2k/30p | 4k×2k/30p | |||||||
| Encode[119] | Profile | Main, High | Main | - | - | ||||||
| Level | 5.1 | High | - | - | |||||||
| Max. resolution | 4k×2k/30p | 1080/60p | 1080/30p | - | 1080/30p | ||||||
| Cherry Trail-T[120] | Decode | Profile | Main | CBP, Main, High | Simple | Main | Advanced | 1067 Mbit/s – 4:2:0
800 Mbit/s – 4:2:2 |
|||
| Level | 5 | 5.2 | High | 4 | |||||||
| Max. resolution | 4k×2k/30p | 4k×2k/60p, 1080@240p | 480/30p | 480/30p | 1080/60p | 1080/60p | 4k×2k/30p | 1080/30p | |||
| Encode | Profile | ✘ | Constrained Baseline, Main, High (MVC) | 1067 Mbit/s – 4:2:0
800 Mbit/s – 4:2:2 |
✘ | ||||||
| Level | 5.1 (4.2) | ||||||||||
| Max. resolution | 4k×2k/30p, 1080@120p | 480/30p | 4k×2k/30p | ||||||||
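Capability matrices like the ones above are often consumed programmatically, for example when a media pipeline decides whether a codec can be hardware-decoded on a given platform. The sketch below is an illustration only, not an Intel API; it transcribes two cells from the Braswell decode row into a hypothetical lookup table:

```python
# Sketch: representing a slice of the video-acceleration matrix above as a
# lookup table. Only the Braswell H.265/HEVC and H.264/AVC decode entries
# are transcribed; other platforms/codecs would follow the same shape.

DECODE_CAPS = {
    ("Braswell", "H.265/HEVC"): {"profile": "Main", "level": "5", "max_res": "4k×2k/30p"},
    ("Braswell", "H.264/AVC"): {"profile": "CBP, Main, High", "level": "5.2", "max_res": "4k×2k/60p"},
}

def can_decode(platform: str, codec: str) -> bool:
    """Return True if the platform has a hardware decode entry for the codec."""
    return (platform, codec) in DECODE_CAPS

print(can_decode("Braswell", "H.265/HEVC"))  # True
print(can_decode("Braswell", "AV1"))         # False
```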
Documentation
Intel releases programming manuals for most Intel HD Graphics devices via its Open Source Technology Center.[121] This allows open-source enthusiasts and developers to contribute to driver development and to port drivers to various operating systems without the need for reverse engineering.
See also
Notes
[edit]- ^ The abbreviation "GT" appears in certain monitoring tools, such as Intel Power Gadget, in reference to the graphics core on Intel processors.
- ^ VP9 media codec GPU accelerator to be supported post TTM, for non-Windows operating systems only.
- ^ Resolution details for media codec on open source Linux OS depends on platform features and drivers used. Decode/Encode features may not align to Table 8-4 that is specific to Win8.1 and Win7 operating systems.
- ^ All capabilities dependent on OS. Here HW support is mentioned. For more info, see Table 8-4 on page 80 of PDF.
References
- ^ a b c d e f "Supported APIs and Features for Intel Graphics Drivers". Intel. Retrieved 2016-05-19.
- ^ Michael Larabel (18 October 2013). "OpenGL 3.3 Support Lands In Mesa! Possible Mesa 11.0". Phoronix.
- ^ "The Khronos Group". The Khronos Group. July 18, 2020.
- ^ "AMD Radeon HD 7310". Notebookcheck.net. 2013-01-17. Retrieved 2014-04-20.
- ^ Junkins, Stephen (14 August 2015). The Compute Architecture of Intel Processor Graphics Gen9 (PDF) (White Paper). Intel. p. 2. Retrieved 9 September 2020.
At Intel, architects colloquially refer to Intel processor graphics architecture as simply 'Gen', shorthand for Generation.
- ^ Intel OpenSource HD Graphics Programmer's Reference Manual (PRM) Volume 4 Part 3: Execution Unit ISA (Ivy Bridge) – For the 2012 Intel Core Processor Family (PDF) (Manual). Intel. May 2012. p. 29. Retrieved 9 September 2020.
The GEN instruction set is a general-purpose data-parallel instruction set optimized for graphics and media computations.
- ^ Ioffe, Robert (22 January 2016). Introduction to GEN Assembly (Article). Intel. Retrieved 9 September 2020.
- ^ Larabel, Michael (6 September 2019). "Intel Graphics Compiler Changes For Gen12 – Biggest Changes To The ISA Since i965". Phoronix. Retrieved 9 September 2020.
- ^ Intel 965 Express Chipset Family and Intel G35 Express Chipset Graphics Controller PRM – Programmer's Reference Manual (PRM) Volume 1: Graphics Core (PDF) (Manual). Revision 1.0a. Intel. January 2008. p. 24. Retrieved 9 September 2020.
The GEN4 ISA describes the instructions supported by a GEN4 EU.
- ^ J. F. Amprimoz (2009-02-22). "The Delayed Mobile Nehalems: Clarksfield, Arrandale, and the Calpella Platform". Bright Hub. Retrieved 2014-01-15.
- ^ Shimpi, Anand Lal. "The Clarkdale Review: Intel's Core i5 661, i3 540 & i3 530". AnandTech. Archived from the original on March 30, 2010.
- ^ Pop, Sebastian (24 April 2012). "Intel's Official Ivy Bridge CPU Announcement Finally Live". Softpedia.
- ^ Larabel, Michael (September 2, 2013). "Linux 3.12 Enables Haswell's Iris eLLC Cache Support". Phoronix. Retrieved October 25, 2013.
- ^ Widawsky, Ben (July 16, 2013). "drm/i915: Use eLLC/LLC by default when available". git.kernel.org. Retrieved October 25, 2013.
- ^ Wilson, Chris (August 22, 2013). "drm/i915: Use Write-Through cacheing for the display plane on Iris". git.kernel.org. Retrieved October 25, 2013.
- ^ Shilov, Anton (November 20, 2013). "First Details Regarding Intel 'Broadwell-K' Microprocessors Emerge". Xbit. Archived from the original on January 12, 2014. Retrieved September 25, 2022.
- ^ "Intel will announce Broadwell U 14nm cpu at CES 2014". chinese.vr-zone.com. Archived from the original on September 29, 2014. Retrieved June 12, 2014.
- ^ a b Cutress, Ian (August 5, 2015). "Skylake's iGPU: Intel Gen9 – The Intel 6th Gen Skylake Review: Core i7-6700K and i5-6600K Tested". AnandTech. Archived from the original on August 7, 2015. Retrieved August 6, 2015.
- ^ Larabel, Michael (September 10, 2014). "Intel Publishes Initial Skylake Linux Graphics Support". Phoronix. Retrieved September 16, 2014.
- ^ "Khronos Products: Conformant Products". Khronos. July 11, 2015. Retrieved August 8, 2015.
- ^ Cutress, Ian (September 1, 2015). "Intel's Generation 9 Graphics – The Intel Skylake Mobile and Desktop Launch, with Architecture Analysis". AnandTech. Archived from the original on September 4, 2015. Retrieved September 2, 2015.
- ^ Cutress, Ian (May 31, 2016). "Intel Announces Xeon E3-1500 v5: Iris Pro and eDRAM for Streaming Video". AnandTech. Archived from the original on June 1, 2016. Retrieved May 31, 2016.
- ^ Shenoy, Navin (August 30, 2016). "New 7th Gen Intel Core Processor: Built for the Immersive Internet". Intel Newsroom. Retrieved August 4, 2018.
- ^ Alcorn, Paul (August 30, 2016). "Intel Kaby Lake: 14nm+, Higher Clocks, New Media Engine". Tom's Hardware. Retrieved August 4, 2018.
- ^ Cutress, Ian (August 21, 2017). "Intel Launches 8th Generation Core CPUs, Starting with Kaby Lake Refresh for 15W Mobile". AnandTech. Archived from the original on August 21, 2017. Retrieved September 25, 2022.
- ^ "Intel Product Specification Comparison". Intel. 7 October 2017. Archived from the original on 7 October 2017. Retrieved 27 May 2018.
- ^ Shilov, Anton. "Intel Launches New Pentium Silver and Celeron Atom Processors: Gemini Lake is Here". AnandTech. Archived from the original on December 13, 2017.
- ^ Cutress, Ian (July 31, 2019). "Examining Intel's Ice Lake Processors: Taking a Bite of the Sunny Cove Microarchitecture". AnandTech. Archived from the original on August 1, 2019. Retrieved August 1, 2019.
- ^ "Intel Processor Graphics Gen11 Architecture" (PDF). Intel. Retrieved September 25, 2022.
- ^ "Developer and Optimization Guide for Intel Processor Graphics Gen11..." Intel.
- ^ Intel Graphics [@IntelGraphics] (August 31, 2019). "Our community suggested it and we are making it a reality. Retro scaling is now available in the new Intel Graphics Command Center (in beta for Gen11 Graphics). Let us know what you think!" (Tweet). Retrieved July 1, 2021 – via Twitter.
- ^ "Intel Iris Plus Graphics and UHD Graphics Open Source Programmer's Reference Manual" (PDF). Intel. 2020.
- ^ a b c "Intel Gen11 Architecture, Page 10" (PDF). Intel. Retrieved November 2, 2020.
- ^ "Intel® Core™ 3 processor 100U (10M Cache, up to 4.70 GHZ) Product Specifications".
- ^ "Intel® Core™ 5 processor 120U (12M Cache, up to 5.00 GHZ) Product Specifications".
- ^ "Intel® Core™ 7 processor 150U (12M Cache, up to 5.40 GHZ) Product Specifications".
- ^ Smith, Ryan (August 13, 2020). "The Intel Xe-LP GPU Architecture Deep Dive: Building Up The Next Generation". AnandTech. Archived from the original on August 14, 2020. Retrieved April 10, 2021.
- ^ Cutress, Ian (12 December 2018). "Intel's Architecture Day 2018: The Future of Core, Intel GPUs, 10nm, and Hybrid x86". AnandTech. p. 5. Archived from the original on April 10, 2019.
Intel will use the Xe branding for its range of graphics that were unofficially called 'Gen12' in previous discussions
- ^ Hill, Brandon (September 9, 2019). "Intel Says Tiger Lake Gen12 Xe Graphics Is Its Biggest Architectural Revamp In A Decade". Hot Hardware. Retrieved October 5, 2022.
- ^ a b c "Intel Processor Graphics Xe-LP API Developer and Optimization Guide". Intel. June 22, 2021. Retrieved October 5, 2022.
- ^ Kucukgoz, Mehmet (October 9, 2020). "AV1 Hardware Accelerated Video on Windows 10". Microsoft. Retrieved October 5, 2022.
- ^ "Intel® Processor Graphics Xᵉ-LP API Developer and Optimization Guide".
- ^ a b "Intel Technology Roadmaps and Milestones". Intel.
- ^ "Intel Meteor Lake Client Processors to use Arc Graphics Chiplets". Archived from the original on February 17, 2022.
- ^ "[Intel-gfx] [PATCH 0/2] i915: Introduce Meteorlake". Lists.freedesktop.org. 7 July 2022. Retrieved 2022-09-01.
- ^ Nautiyal, Ankit (November 7, 2022). "[Intel-gfx] [RFC 00/15] Add support for HDMI2.1 FRL". Free Desktop. Retrieved November 15, 2022.
- ^ Bashir, Samir (20 September 2023). "Intel Innovation 2023: Intel confirms three new CPU generations for client PCs: Arrow Lake, Lunar Lake and Panther Lake". Igor's Lab. Retrieved 31 January 2024.
- ^ Knupffer, Nick. "Intel Insider – What Is It? (IS it DRM? And yes it delivers top quality movies to your PC)". Archived from the original on 2013-06-22. Retrieved 2011-02-02.
- ^ Agam Shah (2011-01-06). "Intel: Sandy Bridge's Insider is not DRM". Computerworld. Archived from the original on 2011-12-04. Retrieved 2014-03-22.
- ^ Sunil Jain (May 2014). "Intel Graphics Virtualization Update". Intel. Archived from the original on 2014-05-08. Retrieved 2014-05-11.
- ^ "Bringing New Use Cases and Workloads to the Cloud with Intel Graphics Virtualization Technology (Intel GVT-g)" (PDF). Intel Open Source Technology Center. 2016. Retrieved 14 August 2020.
- ^ "Graphics Virtualization Technologies Supported on Each Intel®". Archived from the original on 2022-08-14. Retrieved 2022-08-14.
- ^ "Do 11th Generation Intel® Processors Support GVT-g Technology?".
- ^ a b Michael Larabel (2011-10-06). "Details On Intel Ivy Bridge Triple Monitor Support". Phoronix.
A limitation of this triple monitor support for Ivy Bridge is that two of the pipes need to share a PLL. Ivy Bridge has three planes, three pipes, three transcoders, and three FDI (Flexible Display Interface) interfaces for this triple monitor support, but there's only two pipe PLLs. This means that two of the three outputs need to have the same connection type and same timings. However, most people in a triple monitor environment will have at least two — if not all three — of the monitors be identical and configured the same, so this shouldn't be a terribly huge issue.
- ^ LG Nilsson (2012-03-12). "Most desktop Ivy Bridge systems won't support three displays". VRZone. Archived from the original on 2012-04-01.
Despite the fact that Intel has been banging its drums about support for up to three displays on the upcoming 7-series motherboards in combination with a shiny new Ivy Bridge based CPU, this isn't likely to be the case. The simple reason behind this is that very few, if any motherboards will sport a pair of DisplayPort connectors.
- ^ a b David Galus (February 2013). "Migration to New Display Technologies on Intel Embedded Platforms" (PDF). Intel. Archived from the original (PDF) on 2013-02-01.
The Intel 7 Series Chipset based platform allows for the support of up to three concurrent displays with independent or replicated content. However, this comes with the requirement that either one of the displays is eDP running off the CPU or two DP interfaces are being used off the PCH. When configuring the 2 DP interfaces from the PCH, one may be an eDP if using Port D. This limitation exists because the 7 Series Intel PCH contains only two display PLLs (the CPU has one display PLL also) which will control the clocking for the respective displays. All display types other than DP have an external variable clock frequency associated with the display resolution that is being used. The DP interface has an embedded clocking scheme that is semi- variable, either at 162 or 270 MHz depending on the bandwidth required. Therefore, Intel only allows sharing of a display PLL with DP related interfaces.
Alt URL
- ^ "Z87E-ITX". ASRock.
This motherboard supports Triple Monitor. You may choose up to three display interfaces to connect monitors and use them simultaneously.
- ^ "H87I-PLUS". Asus.
Connect up to three independent monitors at once using video outputs such as DisplayPort, Mini DisplayPort, HDMI, DVI, or VGA. Choose your outputs and set displays to either mirror mode or collage mode.
- ^ "Techpowerup GPU database". Techpowerup. Retrieved 2018-04-22.
- ^ "2nd Generation Intel Core : Datasheet, Volume 1" (PDF). Intel.com. Retrieved 27 May 2018.
- ^ Michael Larabel (2014-09-20). "OpenGL 3.3 / GLSL 3.30 Lands For Intel Sandy Bridge On Mesa". Phoronix.
- ^ "Desktop 3rd Gen Intel Core Processor Family: Datasheet". Intel. Retrieved 2015-06-18.
- ^ "Intel Pentium Processor N3500-series, J2850, J2900, and Intel Celeron Processor N2900-series, N2800-series, J1800-series, J1900, J1750 – Datasheet" (PDF). Intel. 2014-11-01. p. 19. Retrieved 2016-02-08.
- ^ "hasvk driver downgrades vulkan". gitlab.freedesktop.org. Retrieved 2025-08-12.
- ^ Francisco Jerez (14 April 2017). "mark GL_ARB_vertex_attrib_64bit and OpenGL 4.2 as supported by i965/gen7+". freedesktop.org.
- ^ "Release Notes Driver version: 15.33.22.64.3621" (PDF). 2014-06-02. Retrieved 2014-07-21.
- ^ "Archived copy" (PDF). Archived from the original (PDF) on 2015-04-02. Retrieved 2015-03-07.
- ^ "Intel HD Graphics (Bay Trail)". notebookcheck.net. Retrieved 2016-01-26.
- ^ "Desktop 4th Generation Intel Core: Datasheet" (PDF). Intel. Retrieved 27 May 2018.
- ^ Michael Larabel (15 June 2021). "Mesa's New "Crocus" OpenGL Driver Is Performing Well For Old Intel Hardware". Phoronix. Retrieved 2023-07-03.
Crocus does allow for OpenGL 4.6 on Haswell compared to OpenGL 4.5 being exposed on the i965 driver. Additionally, Crocus allows for OpenGL ES 3.2 rather than OpenGL ES 3.1 on Haswell. Aside from that the drivers are in similar shape for the most part.
- ^ "Release Notes Driver version: 15.36.3.64.3907" (PDF). 2014-09-07. Retrieved 2014-09-05.
- ^ "Intel Graphics Driver PV 15.40.36.4703 Release Notes" (PDF). Intel. June 23, 2017. Archived from the original (PDF) on October 3, 2017. Retrieved October 2, 2017.
- ^ "5th Generation Intel Core Processor Family, Intel Core M Processor Family, Mobile Intel Pentium Processor Family, and Mobile Intel Celeron Processor Family Datasheet – Volume 1 of 2" (PDF). Intel. 2015-06-01. p. 22. Retrieved 2016-02-11.
- ^ "Intel Iris Pro Graphics 6200". notebookcheck.net. Retrieved 2016-02-09.
- ^ "Intel Iris Graphics 6100". notebookcheck.net. Retrieved 2016-02-09.
- ^ "Intel HD Graphics 6000". notebookcheck.net. Retrieved 2016-02-09.
- ^ "Intel HD Graphics 5600". Notebookcheck. Retrieved 2016-02-09.
- ^ "Intel HD Graphics 5500". notebookcheck.net. Retrieved 2016-02-09.
- ^ "Intel HD Graphics 5300". notebookcheck.net. Retrieved 2016-02-09.
- ^ "Intel HD Graphics (Broadwell)". notebookcheck.net. Retrieved 2016-02-09.
- ^ Väinö Mäkelä (4 December 2022). "hasvk driver".
- ^ Michael Larabel (21 August 2019). "Intel's OpenGL Linux Driver Now Has OpenGL 4.6 Support For Mesa 19.2". Phoronix.
- ^ Michael Larabel (9 March 2019). "Intel's New Driver Is Now Working With Gallium's Direct3D 9 State Tracker". Phoronix.
- ^ a b c d e Intel. "Intel compute-runtime, Supported Platforms". GitHub. Retrieved 16 April 2021.
- ^ "Intel HD Graphics (Braswell)". notebookcheck.net. Retrieved 2016-01-26.
- ^ "Khronos Vulkan Conformant Products".
- ^ "Khronos Vulkan Conformant Products".
- ^ "Driver Version: 31.0.101.2114". Intel. Retrieved October 20, 2022.
- ^ a b Michael Larabel (20 January 2017). "Beignet 1.3 Released With OpenCL 2.0 Support". Phoronix.
- ^ Michael Larabel (27 October 2015). "Intel Is Already Publishing Open-Source 'Kabylake' Graphics Driver Patches". Phoronix.
- ^ "Khronos Vulkan Conformant Products".
- ^ "Khronos Vulkan Conformant Products".
- ^ "Driver Version: 31.0.101.3729". Intel. Retrieved October 20, 2022.
- ^ "OpenCL – The open standard for parallel programming of heterogeneous systems". 21 July 2013.
- ^ "iris: Add a new experimental Gallium driver for Intel Gen8+ GPUs (!283) · Merge Requests · Mesa / mesa". GitLab. 20 February 2019. Retrieved 2023-07-03.
- ^ "initial crocus driver submission (!11146) · Merge Requests · Mesa / mesa". GitLab. 2 June 2021. Retrieved 2023-07-03.
- ^ Michael Larabel (3 December 2021). "Mesa's Classic Drivers Have Been Retired – Affecting ATI R100/R200 & More". Phoronix. Retrieved 2023-07-03.
- ^ "Releases · intel/Compute-runtime". GitHub.
- ^ Wang, Hongbo (18 October 2018). "2018-Q3 release of KVMGT (Intel GVT-g for KVM)" (Press release). Intel Open Source Technology Center. Archived from the original on 16 January 2021. Retrieved 14 August 2020.
- ^ Wang, Hongbo (18 October 2018). "2018-Q3 release of XenGT (Intel GVT-g for Xen)" (Press release). Intel Open Source Technology Center. Archived from the original on 16 January 2021. Retrieved 14 August 2020.
- ^ "Intel Core i5-600, i3-500 Desktop Processor Series, Intel Pentium Desktop Processor 6000 Series" (PDF) (PDF). Intel. January 2011. Retrieved 11 May 2017.
- ^ a b c Robert_U (19 January 2015). "Intel Iris and HD Graphics Driver update posted for Haswell and Broadwell version 15.36.14.4080". Intel Communities. Intel. Retrieved 16 April 2016.
- ^ "5th Generation Intel Core Processor Family Datasheet Vol. 1" (PDF) (PDF). Intel. 1 June 2015. Retrieved 16 April 2016.
- ^ "Desktop 5th Gen Intel Core Processor Family Datasheet, Vol. 1" (PDF) (PDF). Intel. 27 May 2015. Retrieved 16 April 2016.
- ^ "6th Generation Intel Processor Datasheet" (PDF). Intel. October 2015. Retrieved 12 February 2016.
- ^ "Datasheet, Vol. 1: 7th Gen Intel Core Processor U/Y-Platforms" (PDF). Intel. August 2016. Retrieved 2020-01-24.
- ^ a b "8th and 9th Generation Intel Core Processor Families Datasheet, Volume 1 of 2". Intel. Retrieved 2020-01-24.
- ^ "8th Gen Intel Core Processor Family Datasheet, Vol. 1". Intel. Retrieved 2020-07-19.
- ^ "10th Gen Intel Core Processor Families Datasheet, Vol. 1". Intel (in Spanish). Retrieved 2020-07-19.
- ^ "10th Gen Intel Core Processor Families Datasheet, Vol. 1". Intel. Retrieved 2020-07-19.
- ^ "11th Generation Intel Core Processor Datasheet, Volume 1 of 2: Supporting 11th Generation Intel Core Processor Families, Intel Pentium Processors, Intel Celeron Processors for UP3, UP4, and H35 Platforms, formerly known as Tiger Lake". January 2021. Retrieved 2021-03-17.
- ^ "12th Generation Intel Core Processors". March 2022. Retrieved 2022-03-27.
- ^ "Hardware Accelerated Video Decode – 002 – ID:743844 | 13th Generation Intel® Core™ Processors". edc.intel.com. Retrieved 2022-12-24.
- ^ "Hardware Accelerated Video Decode – 003 – ID:792044 | Intel® Core™ Ultra Processor". edc.intel.com. Retrieved 2025-02-24.
- ^ "Hardware Accelerated Video Decode – 004 – ID:832586 | Intel® Core™ Ultra 200S and 200HX Series Processors". edc.intel.com. Retrieved 2025-02-24.
- ^ "N-series Intel Pentium Processors and Intel Celeron Processors Datasheet – Volume 1 of 3" (PDF). Intel. 2015-04-01. pp. 77–79. Retrieved 2016-02-08.[permanent dead link]
- ^ "Intel Pentium and Celeron Processor N- and J- Series Datasheet, Volume 1". Intel. 2021-09-28.
- ^ "2017Q2 Intel Graphics Stack Recipe". 01.org. 2017-07-06. Archived from the original on 2018-02-25. Retrieved 2017-08-14.
- ^ a b "Intel Atom Processor Z3600 and Z3700 Series Datasheet" (PDF). Intel. December 2014. Retrieved 12 February 2016.
- ^ "Intel Atom Z8000 Processor Series Datasheet (Volume 1 of 2)" (PDF). Intel. 2016-03-01. p. 130. Retrieved 2016-04-24.[permanent dead link]
- ^ "Linux Graphics, Documentation". Intel Open Source Technology Center. 01.org. 2014-01-12. Retrieved 2014-01-12.
External links
Intel Graphics Technology
View on Grokipedia
Introduction
Overview
Intel Graphics Technology encompasses the family of graphics processing unit (GPU) architectures developed by Intel, primarily integrated into its central processing units (CPUs) but extending to discrete GPUs, beginning with the i740 discrete graphics card released in 1998 and evolving to modern Xe-based integrated and discrete solutions.[4][3] These technologies enable visual rendering, compute tasks, and media processing within Intel platforms, supporting applications from basic display output to advanced gaming and AI acceleration.[6]

At the core of Intel's graphics architectures are execution units (EUs), programmable shader units that execute vertex, geometry, pixel, and compute shaders in parallel to handle complex rendering computations.[9] Supporting these are texture sampler units, which fetch and filter texture data during rendering to apply surface detail to 3D models; they are organized into subslices that group the GPU's processing resources for efficient pipeline operation.[10] Together, these components form the foundation of the graphics rendering pipeline, where shaders perform transformation and lighting calculations while texture units enhance visual fidelity.[11]

Graphics commands originate from applications using APIs such as DirectX, OpenGL, or Vulkan, which submit draw calls and resource data to Intel's graphics driver; the driver translates these into hardware-specific instructions that the GPU's command streamer dispatches to the appropriate pipelines for execution on the EUs and supporting units.[12][13] This workflow ensures compatibility across Intel hardware while optimizing for performance in real-time rendering scenarios.[14]

The branding for Intel's graphics has evolved to reflect performance tiers and capabilities: Intel Extreme Graphics for early integrated solutions in 2001, Intel HD Graphics in 2010 for mainstream use, Intel Iris Graphics in 2013 for premium integrated performance, and the Intel Arc brand in 2022 for high-end discrete Xe GPUs.[3][4][15] This progression underscores Intel's shift toward unified architectures like Xe, enhancing both integrated and discrete offerings.

Importance in Computing
Intel Graphics Technology powers the vast majority of personal computers, with its integrated graphics present in approximately 70% of x86-based PCs as of Q3 2025,[16] enabling efficient handling of everyday tasks such as web browsing, office productivity, light gaming, and content creation like photo editing and video streaming.[17] This dominance stems from Intel's extensive integration into consumer and enterprise hardware, where its graphics solutions serve as the default for systems without discrete GPUs, supporting seamless 2D and 3D rendering in applications ranging from graphical user interfaces to basic CAD modeling.[18]

In laptops, desktops, and even servers, Intel's integrated graphics play a critical role in multimedia processing, including high-definition video playback via hardware acceleration like Quick Sync Video, which offloads decoding tasks from the CPU to reduce power consumption and improve efficiency. For compute-intensive workloads, these graphics units facilitate machine learning inference on edge devices, allowing local execution of AI models for tasks such as image recognition and natural language processing without relying on cloud resources.[19] This versatility extends to embedded systems in industrial applications, where compact form factors benefit from the graphics' low-latency support for real-time visualization and data processing.
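Quick Sync Video is typically reached through media frameworks rather than directly; FFmpeg, for instance, exposes QSV decoders and encoders. The sketch below only assembles such a command line for illustration: the file names are placeholders, and actually running it assumes an FFmpeg build with QSV support on Intel hardware.

```python
# Sketch: building an FFmpeg command line that offloads H.264 decode and
# HEVC encode to Intel Quick Sync Video via FFmpeg's QSV codecs.
# "input.mp4"/"output.mp4" are placeholders; executing the command
# requires an FFmpeg build with QSV enabled.

def qsv_transcode_cmd(src: str, dst: str) -> list[str]:
    return [
        "ffmpeg",
        "-hwaccel", "qsv",      # use the QSV hardware decode path
        "-c:v", "h264_qsv",     # hardware H.264 decoder
        "-i", src,
        "-c:v", "hevc_qsv",     # hardware HEVC encoder
        dst,
    ]

print(" ".join(qsv_transcode_cmd("input.mp4", "output.mp4")))
```

The command list could then be handed to `subprocess.run` on a system where FFmpeg and the Intel media driver are installed.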
The economic advantages of Intel's SoC integration, which embeds graphics directly into the processor, significantly lower manufacturing costs by eliminating the need for separate GPU components, thereby enabling the production of affordable, thin-and-light devices that dominate the ultrabook and budget laptop markets.[20] This approach fosters competition in entry-level segments against AMD's Radeon integrated solutions and NVIDIA's low-end discrete cards, driving innovation in power-efficient designs while maintaining accessibility for consumers and small businesses.[21]

By 2025, Intel Graphics has gained heightened relevance in AI PCs through the Core Ultra series, where synergy between the integrated GPU, Neural Processing Unit (NPU), and CPU optimizes hybrid AI workloads, enhancing on-device performance for generative AI tools and extending battery life in mobile scenarios.[22]

Historical Development
Early Discrete Solutions
Intel's initial venture into discrete graphics was built on Real3D technology from Lockheed Martin, a partner whose graphics assets Intel acquired in 1999.[4] The i740, codenamed Auburn, was launched in February 1998 as Intel's debut discrete GPU, built on a 150 nm process with a core clock of 66 MHz and support for the Accelerated Graphics Port (AGP) interface.[23] It featured hardware acceleration for DirectX 6.0 and OpenGL 1.2, with configurations supporting 4 MB to 32 MB of SDRAM, though typical cards shipped with 8 MB or 16 MB.[3] Designed to compete in the consumer 3D graphics market, the i740 emphasized cost-effectiveness and integration with Intel's chipsets, but its performance lagged behind rivals like NVIDIA's RIVA 128 and 3dfx's Voodoo2 due to limitations in texture handling and fill rate.[24]

Market reception for the i740 was mixed, with Intel shipping approximately 4 million units in 1998, capturing about 4% of the graphics accelerator market at the time.[24] Despite partnerships with over 45 vendors showcasing 59 i740-based boards at Computex 1998, the chip struggled with driver issues and insufficient raw performance, leading to underwhelming adoption in a saturated market dominated by established players.[24] By late 1999, Intel had effectively discontinued standalone i740 production, recognizing the high development costs and competitive pressure from NVIDIA and ATI as barriers to sustained success in discrete graphics.[4]

As discrete efforts faltered, Intel began incorporating graphics capabilities into its chipsets, marking an early shift toward integration while still supporting discrete add-ons. The i810 chipset, released in April 1999, integrated a variant of the i740 graphics core directly onto the motherboard (the chipset was codenamed Whitney), providing basic 2D/3D acceleration without requiring a separate card, though it allowed AGP upgrades for enhanced performance.
This was followed by the i815 chipset in June 2000, which featured the i752 graphics core (the chipset was codenamed Solano) and explicitly supported AGP 4x slots for discrete GPUs, letting users pair the integrated basics with add-in cards for better visuals.[25] These chipsets achieved commercial success by reducing system costs and simplifying builds, but the discrete add-on option highlighted Intel's transitional approach amid ongoing market challenges.

The pivot away from discrete graphics was driven by escalating R&D expenses, estimated in the hundreds of millions of dollars for the i740 alone, and the entrenched dominance of specialized vendors like NVIDIA and ATI, who controlled over 80% of the market by 2000.[26] Intel's leadership concluded that focusing on integrated solutions within its CPU ecosystem offered better margins and synergy, leading to a full retreat from standalone GPUs by 2000 and an emphasis on embedded graphics in subsequent platforms.[27]

Emergence of Integrated Graphics
The emergence of integrated graphics represented a pivotal shift in Intel's approach to graphics processing, prioritizing power efficiency, cost savings, and seamless integration within mobile platforms over the performance of standalone discrete solutions. This transition gained momentum in the mid-2000s as laptops became the dominant form of personal computing, demanding graphics capabilities that did not compromise battery life or increase system complexity. Intel's early efforts focused on embedding graphics acceleration into chipsets paired with mobile CPUs, enabling basic 2D/3D rendering and video playback without the need for separate graphics cards.

In 2004, Intel debuted the Generation 3 Graphics Media Accelerator (GMA) 900 as part of the Mobile Intel 915 Express chipset family, marking the company's first dedicated integrated graphics solution for mobile processors like the Pentium M.[28] The GMA 900, built on a 130 nm process, supported DirectX 9.0 and provided hardware acceleration for MPEG-2 video decode, targeting everyday tasks such as office productivity and light media consumption in ultraportable devices.[29] This on-chip integration in the chipset, rather than in an external component, reduced overall system power draw by up to 50% compared to prior discrete setups and facilitated thinner laptop designs.

Building on this foundation, Intel introduced the Generation 4 GMA X3000 in 2006, integrated into chipsets supporting the Core 2 Duo processor family, such as the Intel G965 Express.[30] The GMA X3000 featured up to 8 pixel shader units and supported Shader Model 3.0 under DirectX 9.0c, delivering improved 3D performance for applications like casual gaming and enhanced video processing via Intel Clear Video Technology. These advancements allowed dynamic clock speeds up to 667 MHz, balancing performance with thermal constraints in mobile environments.
Early driver support for these integrated solutions encountered hurdles, particularly with Windows Vista's release in 2007, when older GMA implementations like the 900 series lacked full compatibility with the Windows Display Driver Model (WDDM), limiting features such as the Aero glass interface.[31] Intel addressed these through iterative driver updates by mid-2007, enabling broader Vista Premium certification for Gen4 and later architectures.

The benefits of this integrated paradigm proved transformative for laptops: lower power usage (often under 5 W for graphics alone), reduced manufacturing costs from eliminating discrete components, and a minimized physical footprint helped Intel's solutions power over 57% of the notebook graphics market by late 2008.[32] This adoption underscored integrated graphics as a cornerstone for mainstream computing, paving the way for further refinements in subsequent generations such as Gen5.

Evolution to Xe Architectures
In the mid-2010s, Intel refined its Gen architectures with enhancements focused on security and multimedia capabilities. The Gen9.5 architecture, launched in 2017 with Kaby Lake processors, introduced support for High-bandwidth Digital Content Protection 2.2 (HDCP 2.2), allowing secure transmission of high-definition content over HDMI and DisplayPort interfaces. This paved the way for Gen11 in 2019, integrated into Ice Lake processors, which improved geometry processing and compute throughput, setting the stage for dedicated ray tracing units in subsequent designs.[33]

The pivotal shift occurred with the Xe architecture's launch in 2020, establishing a unified design philosophy applicable to both integrated and discrete GPUs. Xe emphasized scalability, supporting configurations from a single Xe-core in low-power integrated solutions to as many as 128 Xe-cores in high-end discrete variants, while incorporating the DP4a instruction for accelerated AI workloads through efficient dot-product operations on low-precision data types. This unified approach departed from the siloed Gen generations, aiming for consistent performance across client, server, and professional applications.[34][35]

In 2025, Intel announced the end of active driver support for graphics in 11th through 14th Gen Intel processors, effective September 19, 2025, transitioning these to a legacy branch with critical security updates only.
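The DP4a operation mentioned above multiplies four packed 8-bit values against four others and accumulates the sum into a 32-bit integer. A minimal scalar model of its semantics (not Intel's implementation, which performs this per SIMD lane in a single instruction):

```python
# Scalar model of a DP4a-style operation: a 4-element dot product of
# signed 8-bit values accumulated into a 32-bit accumulator. Real
# hardware does this in one instruction; this is only an illustration.

def dp4a(acc: int, a: bytes, b: bytes) -> int:
    assert len(a) == 4 and len(b) == 4
    sa = [x - 256 if x > 127 else x for x in a]  # reinterpret bytes as int8
    sb = [x - 256 if x > 127 else x for x in b]
    return acc + sum(x * y for x, y in zip(sa, sb))

# Accumulate the dot product of two int8 vectors into a running total:
print(dp4a(10, bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8])))  # 10 + 70 = 80
```

Packing four 8-bit multiplies into one accumulate step is what makes low-precision inference workloads markedly cheaper than FP32 arithmetic on such hardware.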
This pivot underscores the focus on Xe2 (Battlemage) and the emerging Xe3 architecture, which prioritize advanced AI acceleration and full hardware ray tracing; Xe3 promises over 50% performance gains in integrated GPUs through enhanced ray tracing units and larger cache hierarchies.[8][36] In October 2025, Intel detailed Xe3 at its Technology Tour, featuring up to 12 Xe-cores in Panther Lake with 33% more L1/SLM cache and over 50% performance gains versus Xe2 in integrated GPUs.[36]

Central to Xe's design philosophy is efficient rendering and compute, particularly beneficial in disaggregated designs such as Meteor Lake, where features like larger caches reduce memory bandwidth demands. Complementing this, Intel introduced XMX (Xe Matrix Extensions) engines dedicated to matrix mathematics, using systolic arrays for high-throughput operations in AI tasks such as deep learning inference and training, with support for instructions like DPAS (Dot Product Accumulate Systolic) to reach peak throughput in INT8 and BF16 formats.[37][38]

Integrated Graphics Generations
Gen4 Architecture
The Intel Gen4 architecture marked a significant advancement in integrated graphics during the mid-2000s, debuting with the Graphics Media Accelerator (GMA) X3000 in 2006 as part of Intel's 965 Express chipset family and extending through implementations like the GMA X4500 in 2008. This generation introduced up to 10 execution units (EUs), with configurations varying by model, such as 8 EUs in the X3000 and X3100 and 10 in the X4500, each EU featuring a 128-bit wide floating-point unit capable of processing multiple operations per cycle. The architecture targeted DirectX 9.0c compliance with Shader Model 3.0 (later parts such as the X4500 added DirectX 10 support), including hardware support for transform and lighting (T&L) and pixel shading, enabling basic 3D rendering and video playback in resource-constrained systems.[39][40][41] A prominent implementation was the GMA X3100, integrated into the Santa Rosa platform via the Mobile Intel 965 Express Chipset in 2007, targeting mobile processors like the Core 2 Duo series. This setup utilized dynamic video memory technology (DVMT), allocating up to 384 MB of shared system memory for graphics operations, which helped balance performance and power efficiency in laptops without dedicated VRAM. The architecture's design emphasized integration with the graphics memory controller hub (GMCH), reducing latency while supporting resolutions up to 2048x1536 and multi-display configurations.[41][42][39] In terms of performance, Gen4 graphics delivered approximately 10-20 GFLOPS depending on clock speeds (typically 400-533 MHz) and EU count, proving adequate for everyday tasks and the visual effects of Windows Vista, such as the Aero Glass interface, which required hardware acceleration for transparency and animations. Benchmarks from the era showed it handling 720p video decode and light gaming at low settings, with power consumption around 13-13.5 W.
However, although the execution units were nominally unified, early drivers often fell back to software emulation for vertex transformations in complex scenes while pixel shading ran on the programmable hardware, constraining flexibility for more advanced rendering techniques and impacting efficiency in DirectX 9 workloads.[41][40][39]
Gen5 Architecture
The Gen5 architecture, codenamed Ironlake, represented a significant advancement in Intel's integrated graphics lineup, debuting in January 2010 alongside the Westmere microarchitecture-based Clarkdale desktop and Arrandale mobile processors.[43] This marked the first implementation of a multi-chip module (MCM) design where the 32 nm CPU die and 45 nm graphics die were connected via an on-package interconnect, enabling tighter integration and shared access to system resources compared to previous generations. Gen5 retained the unified shader model, emphasizing enhanced media processing and display capabilities while maintaining compatibility with existing software ecosystems.[44] At its core, the Gen5 GPU featured 6 to 12 Execution Units (EUs), with the standard configuration utilizing 12 EUs comprising 96 shading units, 16 texture mapping units (TMUs), and 2 render output units (ROPs).[45] It supported DirectX 10.1, OpenGL 2.1, and Shader Model 4.1, providing improved geometry processing and pixel shading efficiency over prior integrated solutions.[43] A key hardware innovation was the introduction of the first coherent render cache in Intel's integrated graphics, which facilitated efficient pixel data handling and reduced bandwidth demands by allowing the GPU to cache rendered pixels coherently within the package.[46] Peak theoretical FP32 performance reached 80 to 133 GFLOPS, depending on the clock speed ranging from 500 MHz in mobile variants to up to 733 MHz in select desktop models, enabling basic 1080p video decode and light 3D workloads.[43] The architecture was deployed across consumer platforms, integrated directly into Intel Core i3 and i5 processors for mainstream desktops and laptops, as well as Pentium and Celeron variants for entry-level systems.[45] Server deployments included select Westmere-based Xeon processors, where the graphics supported remote management and basic visualization tasks.
This on-package integration improved power efficiency and latency for shared workloads, reducing data movement overhead between the CPU and graphics dies.[46] Among its innovations, Gen5 introduced a flexible display engine capable of driving multiple outputs, including LVDS for laptop panels, HDMI 1.3 for external monitors up to 1080p@60Hz, and DisplayPort 1.1a, with support for simultaneous multi-monitor configurations up to three displays.[46] This engine incorporated Intel Clear Video HD Technology for hardware-accelerated video decode, enhancing H.264 playback efficiency while maintaining low power consumption suitable for ultrathin laptops.[44] Overall, Gen5 laid foundational improvements in cache coherence and display versatility that influenced subsequent integrated graphics evolutions.
Gen6 Architecture
The Gen6 architecture, introduced in 2011 as part of Intel's Sandy Bridge processor family, marked a significant evolution in integrated graphics by integrating the GPU directly onto the 32 nm die alongside the CPU cores.[47] This design supported both desktop and mobile platforms, with configurations ranging from 6 to 12 execution units (EUs) depending on the processor model, such as the Intel HD Graphics 2000 (6 EUs) in entry-level parts and HD Graphics 3000 (12 EUs) in higher-end variants.[48] The architecture emphasized power efficiency and scalability, enabling seamless operation in diverse computing environments from laptops to desktops.[49] A core advancement in Gen6 was the adoption of a fully unified shader model, where execution units handled vertex, pixel, and geometry workloads interchangeably, building on prior generations while supporting DirectX 10.1.[47] This unification allowed for flexible SIMD processing across widths of 1 to 32 threads, improving resource utilization for 3D rendering and media tasks.[49] Geometry shaders were explicitly enabled, enhancing support for complex scene geometry without relying on software emulation.[48] Drawing briefly from the conceptual heritage of the canceled Larrabee project, Gen6 incorporated elements of many-core efficiency but prioritized fixed-function hardware for better integration.[49] Performance in Gen6 reached roughly 130 GFLOPS in peak theoretical floating-point operations for high-end configurations like the HD Graphics 3000 at boosted clocks up to 1.35 GHz, establishing competitive baseline capabilities for integrated solutions at the time.[50] Hardware tessellation was not yet present; it arrived with the DirectX 11-capable Gen7 pipeline. Additionally, Gen6 introduced Intel Quick Sync Video, providing hardware-accelerated encoding and decoding for formats
like H.264, which significantly reduced CPU load for media processing tasks.[51] These features collectively positioned Gen6 as a foundational step toward more capable integrated graphics in mainstream computing.[52]
Gen7 Architecture
The Intel Gen7 graphics architecture, introduced in 2012 as part of the Ivy Bridge processor family, marked a significant refinement in integrated graphics for desktop and mobile platforms. Integrated into third-generation Intel Core processors such as the Core i7 series, it supported up to 16 Execution Units (EUs) in its top configuration (GT2 variant, as in the HD Graphics 4000), enabling more efficient parallel processing compared to prior generations. This architecture achieved compatibility with DirectX 11.1, allowing for advanced shader effects and improved 3D rendering capabilities in applications and games of the era.[53][54][55] Performance in the Gen7 architecture ranged from approximately 250 to 400 GFLOPS in single-precision floating-point operations, depending on clock speeds and configuration, with the GT2 variant delivering around 268.8 GFLOPS at boosted frequencies up to 1.15 GHz on desktop Ivy Bridge i7 processors. Each EU featured dual texture samplers, enhancing texture fetch efficiency and reducing bottlenecks in rendering pipelines for detailed scenes. Building briefly on the shader model continuity from the Gen6 architecture, Gen7 maintained a unified shader design while optimizing EU throughput for better overall compute density.[53][56][11] Key innovations included improved power gating mechanisms, which allowed finer-grained control over EU activation, reducing idle power consumption with activation latencies in the tens of microseconds to support battery life in mobile Ivy Bridge systems. The enhanced geometry engine provided better handling of complex 3D models through support for hierarchical Z-culling and improved tessellation, facilitating more intricate geometry processing without excessive overhead. 
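The energy trade-off behind such fine-grained power gating can be sketched with a toy break-even model: gating only pays off when the energy saved over an idle window exceeds the fixed cost of the gate entry/exit transition. All numbers below are illustrative assumptions, not Ivy Bridge specifications.

```python
def power_gate_saves_energy(idle_time_us: float,
                            active_idle_power_w: float = 1.0,
                            gated_power_w: float = 0.05,
                            transition_energy_uj: float = 30.0) -> bool:
    """Toy model of a power-gating break-even decision.

    Units stay consistent because 1 W sustained for 1 us equals 1 uJ.
    Gating wins only when the saved idle energy exceeds the fixed
    entry/exit transition cost.
    """
    energy_stay_on = active_idle_power_w * idle_time_us
    energy_gated = gated_power_w * idle_time_us + transition_energy_uj
    return energy_gated < energy_stay_on

print(power_gate_saves_energy(10.0))   # short idle window: not worth gating
print(power_gate_saves_energy(100.0))  # longer window: gating saves energy
```

This is why the tens-of-microseconds activation latency matters: windows much shorter than the transition cost are better left ungated.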
Additionally, Gen7 enabled support for up to three simultaneous displays on Ivy Bridge i7 platforms, expanding multi-monitor setups for productivity and light gaming via integrated outputs like DisplayPort and HDMI.[57][58][59]
Gen7.5 Architecture
The Gen7.5 graphics architecture, introduced in 2013 as part of Intel's Haswell microarchitecture, built upon the foundational execution unit (EU) design of its predecessor while prioritizing power efficiency for mobile platforms. Integrated directly into the Haswell system-on-chip (SoC), it featured configurations with 10 to 20 EUs in standard Intel HD Graphics variants (such as HD 4200, 4400, and 4600), enabling scalable performance tailored to processor tiers. The premium Iris Pro Graphics 5200 variant extended this to 40 EUs, augmented by an optional 128 MB eDRAM acting as a last-level cache to boost bandwidth and reduce latency in memory-bound tasks.[60][61] Key advancements included support for DirectX 11.1, enhancing 3D rendering capabilities with improved tessellation and compute shaders, and OpenCL 1.2, which delivered better parallel compute performance through refined work-group management and vector processing optimizations. Media handling also advanced through a faster Quick Sync engine, although fixed-function H.265/HEVC decode did not arrive until later generations. In terms of raw compute power, the 40-EU Iris Pro 5200 reached upwards of 800 GFLOPS in peak floating-point performance, with the eDRAM particularly benefiting graphics-intensive and video workloads by mitigating the limitations of shared system memory.[62] Designed for ultrathin laptops and ultrabooks, Gen7.5 powered low-power Haswell U and Y series processors, delivering enhanced battery life through dynamic voltage and frequency scaling while supporting 4K display output via DisplayPort 1.2 and HDMI 1.4 interfaces. This architecture's emphasis on efficiency made it suitable for emerging mobile computing scenarios, where integrated graphics needed to balance performance with thermal constraints without discrete GPU alternatives.
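Why a large last-level cache helps bandwidth-bound workloads can be illustrated with a simple roofline-style estimate: delivered performance is capped by either the compute peak or by how fast the memory system can feed data. The bandwidth and arithmetic-intensity figures below are illustrative assumptions, not measured values.

```python
def attainable_gflops(peak_gflops: float, bandwidth_gb_s: float,
                      flops_per_byte: float) -> float:
    """Roofline model: performance is the lesser of the compute peak and
    what the memory system can sustain at the given arithmetic intensity."""
    return min(peak_gflops, bandwidth_gb_s * flops_per_byte)

# Hypothetical workload needing 4 FLOPs per byte of traffic on an ~800 GFLOPS GPU:
print(attainable_gflops(800, 25.6, 4))  # shared DDR3 only -> 102.4, memory-bound
print(attainable_gflops(800, 50.0, 4))  # eDRAM-class bandwidth -> 200.0
```

Doubling effective bandwidth roughly doubles throughput for a workload stuck on the memory-bound side of the roofline, which matches the kind of gain described for eDRAM-equipped parts.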
The eDRAM integration in Iris Pro models provided a critical edge, often doubling effective performance in bandwidth-sensitive applications compared to non-eDRAM variants.
Gen8 Architecture
The Gen8 architecture, introduced in 2014 alongside Intel's Broadwell microarchitecture, represented a 14 nm process shrink from the prior generation, enabling refinements in power efficiency and integration for mobile and low-power devices.[63] It supported up to 48 execution units (EUs) in GT3 configurations such as Iris Graphics 6100 and 24 EUs in GT2 configurations, delivering theoretical peak FP32 performance up to 768 GFLOPS in high-end variants at 1 GHz boost clocks.[64] This architecture provided preview-level support for DirectX 12 at Feature Level 11_1, alongside full OpenGL 4.3 and OpenCL 2.0 compatibility, allowing early adoption of advanced rendering and compute APIs.[65] Key enhancements in Gen8 focused on graphics pipeline efficiency, including improved sampler caches that increased sampling throughput by 25% per EU compared to Gen7.5, reducing latency in texture access for complex scenes. Larger L1 caches further bolstered this by minimizing data fetch overheads, contributing to overall performance gains of around 20% in graphics workloads over Haswell-based designs.[65] For compute tasks, Gen8 introduced better multi-threading capabilities in its execution units, featuring dual 4-wide vector SIMDs with simultaneous multi-threading (SMT) support to handle parallel workloads more effectively, such as GPGPU applications.[66] Gen8 found primary deployment in low-power platforms, including the Broadwell Core M processors for ultrathin laptops and tablets, as well as the Braswell Atom series for entry-level 2-in-1 devices and embedded systems, where its balanced efficiency enabled sustained performance under thermal constraints.[63] These implementations built on Gen7.5's media processing strengths by leveraging the smaller process node for reduced power draw without sacrificing core functionality.[67]
Gen9 Architecture
The Gen9 architecture, introduced in 2015 alongside Intel's Skylake processors, marked a significant advancement in integrated graphics by providing full support for DirectX 12 at feature level 12_1 and Vulkan 1.0, enabling more efficient rendering and compute workloads compared to previous generations.[68][69][70] This architecture built upon the foundations of Gen8 while introducing native hardware acceleration for modern APIs, allowing developers to leverage low-overhead graphics pipelines for improved performance in games and applications. Configurations typically featured 12 to 24 Execution Units (EUs) in standard Skylake implementations, such as Intel HD Graphics 530, scaling up to 48 or 72 EUs in Iris and Iris Pro variants for enhanced graphical capabilities.[71] Performance in Gen9 reached up to approximately 800 GFLOPS in mid-range configurations like Iris Graphics, with higher-end Iris Pro models approaching 1150 GFLOPS at boost clocks around 1 GHz, facilitated by a slice-based organization that groups EUs into subslices of eight for better throughput.[72][71] A key enhancement was the inclusion of asynchronous compute support, which permitted concurrent execution of compute shaders alongside graphics rendering, optimizing resource utilization in DirectX 12 scenarios and boosting overall efficiency for multi-threaded tasks.[68] The architecture was deployed across Skylake S-series (desktop) and H-series (high-performance mobile) platforms, as well as in low-power Apollo Lake Celeron processors launched in 2016, extending Gen9's reach to entry-level embedded and ultrabook systems.
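These peak figures follow mechanically from the EU count and clock, since each Gen9-class EU can retire 16 FP32 operations per clock (two 4-wide FPUs issuing fused multiply-adds). A minimal sketch, with clock values chosen to match typical GT2/GT3/GT4 parts:

```python
def peak_fp32_gflops(eus: int, clock_ghz: float,
                     flops_per_eu_per_clock: int = 16) -> float:
    """Theoretical peak FP32 throughput: EUs x per-EU FLOPs/clock x clock (GHz)."""
    return eus * flops_per_eu_per_clock * clock_ghz

print(peak_fp32_gflops(24, 1.15))  # HD Graphics 530-class GT2: 441.6 GFLOPS
print(peak_fp32_gflops(48, 1.05))  # Iris 540-class GT3: 806.4 GFLOPS
print(peak_fp32_gflops(72, 1.0))   # Iris Pro 580-class GT4: 1152.0 GFLOPS
```

The same formula, with different per-EU rates and clocks, reproduces the GFLOPS figures quoted for the other generations in this article.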
Gen9 also introduced robust support for 4K resolution playback, including HDCP 2.2 for handling protected content, ensuring compatibility with high-definition streaming and media without compromising security or quality.[73] This feature was particularly vital for consumer applications like video playback and light gaming at ultra-high definitions, where the architecture's improved media engine handled 4K decoding efficiently while maintaining power efficiency on the 14 nm process. Overall, Gen9 represented Intel's push toward mainstream adoption of advanced graphics standards in integrated solutions.
Gen9.5 Architecture
The Gen9.5 architecture was released in 2016 alongside Intel's Kaby Lake processor family, serving as a refined iteration of the Gen9 graphics core introduced in Skylake processors. This generation maintained the core structural elements of Gen9, such as slice-based execution units (EUs), but incorporated process optimizations on the 14nm+ node to enable higher clock speeds and improved efficiency. Configurations scaled up to 24 EUs in GT2 variants, supporting DirectX 12 Feature Level 12_1 for enhanced shader model capabilities and tiled resources. Performance in Gen9.5 implementations ranged from approximately 500 GFLOPS in higher-end GT2 setups to around 1000 GFLOPS in select Iris configurations, driven by boosted GPU frequencies up to 1.15 GHz or higher in optimized scenarios.[74] These improvements contributed to better power efficiency compared to Gen9, with reduced leakage and dynamic power consumption on the refined 14nm process, enabling sustained performance in battery-constrained mobile platforms without significant TDP increases. For example, integrated graphics in Kaby Lake delivered up to 10-15% better frame rates in DirectX 12 workloads under similar power envelopes. Gen9.5 saw broad adoption across multiple Intel platforms from 2016 to 2019, including the Kaby Lake Refresh (2017) for mainstream laptops, Coffee Lake desktop processors with up to 6 CPU cores and integrated UHD Graphics 630, Whiskey Lake mobile chips for ultrabooks, Comet Lake high-end desktops, and Gemini Lake low-power Atom SoCs for embedded and entry-level devices. This versatility highlighted its role in diverse computing segments, from consumer PCs to compact systems. 
A key innovation in the Gen9.5 media engine was the addition of native hardware acceleration for VP9 decode, supporting 8-bit and 10-bit profiles up to 4K resolution, which improved efficiency for web video streaming and reduced CPU overhead in applications like browsers and media players.[75] This enhancement built on prior HEVC support, enabling premium 4K UHD content playback with protected DRM, a feature particularly beneficial for mobile and desktop platforms handling high-bandwidth video.[74]
Gen11 Architecture
The Gen11 architecture marked Intel's integrated graphics evolution prior to the Xe unification, debuting in 2019 with the 10th-generation Core processors codenamed Ice Lake. Fabricated on Intel's 10 nm process node using third-generation FinFET technology, Gen11 represented the first implementation of graphics on this advanced node, enabling denser transistor integration and improved power efficiency compared to prior 14 nm designs. This shift allowed for up to 64 execution units (EUs) in high-end configurations like Iris Plus Graphics, a significant increase from the 24 EUs in Gen9.5 GT2 variants, while maintaining compatibility with mobile platforms through configurable TDPs from 9 W to 28 W.[1][76][77] At its core, Gen11 enhanced execution unit throughput via architectural refinements, including dual SIMD8 pipelines per EU for better handling of mixed-precision workloads such as INT8 and FP16 operations, alongside improved floating-point unit (FPU) efficiency. These changes supported DirectX 12 (feature level 12_1), OpenGL 4.6, and Vulkan 1.2 APIs, facilitating advanced rendering techniques and compute tasks. Performance reached up to approximately 1 TFLOPS of FP32 compute in 64-EU variants clocked at 1.1 GHz, providing roughly 2x the graphics throughput of Gen9.5 in select workloads like 3DMark. Additionally, the inclusion of the Intel Gaussian & Neural Accelerator (GNA) enabled low-power neural network inference for always-on features, such as noise suppression in audio processing, offloading the CPU without impacting battery life.[78][79][80][81] Gen11 powered platforms like the Core i7-1065G7 and i5-1035G1 in ultrabooks, delivering media capabilities including 4K60 HDR video decode and encode via enhanced Quick Sync hardware, supporting up to four simultaneous 4K displays or a single 8K output. This made it suitable for content creation and streaming, with BT.2020 color space for vibrant HDR playback.
While building on Gen9.5's efficiency gains, Gen11's denser EU array and process improvements established a foundation for subsequent Xe architectures, emphasizing balanced mobile performance over discrete-level capabilities.[77][82][83][84]
Gen12 Xe-LP Architecture
The Gen12 Xe-LP architecture, introduced in 2020 as part of Intel's unified Xe graphics family, represents the low-power integrated graphics variant designed for mobile and client platforms. It debuted with the 11th-generation Core processors codenamed Tiger Lake, featuring up to 96 execution units (EUs) in its highest configuration to deliver enhanced graphics performance within power-constrained environments.[85][86] This architecture supports DirectX 12 at feature level 12_1, including variable rate shading for improved rendering efficiency and visual fidelity; hardware ray tracing and mesh shaders were reserved for the discrete Xe-HPG variant. Performance-wise, the Xe-LP in Tiger Lake configurations achieves approximately 2 TFLOPS of FP32 compute and up to 4 TFLOPS in FP16, depending on clock speeds reaching 1.35 GHz, making it suitable for light gaming and productivity tasks. The architecture incorporates DP4a instructions, which accelerate AI workloads by enabling efficient dot-product accumulation on low-precision data, serving as a foundational element for later technologies like Intel Xe Super Sampling (XeSS).[85][87] A key innovation is its use of tile-based rendering modes, which process graphics in smaller tiles to reduce memory bandwidth usage and power consumption compared to pure immediate-mode rendering, building on the position-based tile rendering first explored in Gen11.[88][14] Xe-LP was further evolved in subsequent platforms, including a Gen12.2 variant in 12th-generation Alder Lake processors and a tailored implementation in the Core Ultra 100 series (Meteor Lake), maintaining compatibility with low-power integrated graphics needs while supporting up to 96 EUs across these systems.
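The DP4a operation itself is simple to state: four signed 8-bit products summed into a 32-bit accumulator in a single instruction. A pure-Python emulation of one DP4a step, for illustration only:

```python
def dp4a(a: list[int], b: list[int], acc: int) -> int:
    """Emulate DP4a: dot product of four signed 8-bit values per operand,
    accumulated into a signed 32-bit integer in one operation."""
    assert len(a) == len(b) == 4
    assert all(-128 <= x <= 127 for x in a + b), "operands must fit in int8"
    total = acc + sum(x * y for x, y in zip(a, b))
    # Wrap to signed 32-bit range, as a hardware accumulator would.
    return (total + 2**31) % 2**32 - 2**31

print(dp4a([1, -2, 3, 4], [5, 6, -7, 8], 10))  # 5 - 12 - 21 + 32 + 10 = 14
```

Packing four int8 multiply-adds into one instruction is what gives quantized inference its throughput advantage over FP32 on these GPUs.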
These deployments emphasize the architecture's scalability for ultrabooks and embedded applications, with innovations like DP4a enabling broader AI integration without discrete GPUs.[38]
Xe2 Architecture
The Xe2 architecture, introduced by Intel in 2024 as part of the Lunar Lake platform (Core Ultra 200V Series), represents a refined iteration of the Xe graphics family optimized for low-power mobile devices. Released in September 2024, it powers integrated graphics in these processors, supporting up to 8 Xe2 cores in the Xe2-LPG variant designed specifically for thin-and-light laptops. This architecture builds on the foundational tile-based design of the prior Xe-LP, enhancing scalability for AI-driven workloads while maintaining a focus on power efficiency.[89][90] Lunar Lake integrates the Xe2 graphics directly with an on-package NPU, enabling synergistic AI processing for PCs, where the GPU contributes to overall system performance exceeding 100 TOPS in combined CPU, GPU, and NPU capabilities. The architecture delivers theoretical peak performance in the range of 3-5 TFLOPS, depending on configuration and clock speeds up to 2.05 GHz, while achieving approximately 50% better performance per watt compared to the Xe-LP architecture in Meteor Lake processors. This efficiency gain stems from architectural optimizations like enhanced 2nd-generation Xe cores, larger 8MB L2 cache per slice, and improved power gating, making it ideal for battery-constrained AI PCs.[91][90] Key advancements in Xe2 include full hardware-accelerated ray tracing, with an enhanced ray tracing unit in each of the up to 8 Xe2 cores, enabling up to 1.5x graphics performance over previous generations in ray-traced workloads. It also supports AV1 video encoding and decoding, facilitating high-efficiency streaming and content creation on mobile platforms. The improved XeSS 2 upscaling technology leverages AI-based frame generation and super-resolution, further boosting gaming and visual applications by integrating seamlessly with the NPU for low-latency inference.
These features position Xe2 as a cornerstone for 2024-2025 mobile AI ecosystems, emphasizing balanced compute and graphics in power-sensitive environments.[92][93]
Xe3 Architecture
The Xe3 architecture represents Intel's next-generation integrated graphics solution, announced in 2025 and debuting with the Panther Lake system-on-chip (SoC) as part of the Core Ultra 300 series (with availability expected in early 2026).[36] This architecture supports up to 12 Xe cores, enabling high-core-count configurations tailored for mobile platforms.[94] Compared to the preceding Xe2 architecture, Xe3 delivers over 50% higher graphics performance at equivalent power levels, with improvements exceeding 40% in power efficiency for the same performance targets.[94] Performance advancements in Xe3 emphasize balanced compute and graphics capabilities, positioning it as a foundational element for Intel's broader XPU plans. Configurations with 12 Xe cores achieve up to 120 TOPS of AI performance, primarily through optimized INT8 operations, supporting demanding workloads such as AI training and inference.[95] The architecture's Xe3P variant extends these benefits to future discrete graphics, maintaining compatibility with integrated mobile designs while targeting broader XPU ecosystems.[96] Targeted at mobile Panther Lake platforms, Xe3 enhances support for high-resolution video processing and AI-accelerated tasks, including efficient handling of advanced media pipelines and machine learning models.[97] Key innovations include the upgraded XMX3 matrix engines, which introduce FP8 dequantization and native INT8 support for improved AI throughput, building on the ray tracing and vector processing enhancements of the Battlemage (Xe2) generation.[95] These features, combined with expanded L2 cache up to 16 MB in high-end SKUs and refined power scaling, enable Xe3 to address both gaming and productivity demands in thin-and-light devices.[98]
Discrete Graphics Developments
Xe-HP and Xe-HPC Architectures
The Xe-HP and Xe-HPC architectures represent Intel's high-performance implementations of the Xe GPU family, optimized for data center, professional workstation, and high-performance computing (HPC) environments. Detailed at Intel's Architecture Day events, these variants emphasize scalability, dense compute capabilities, and support for AI and scientific workloads through a tile-based design that allows modular assembly of multiple GPU tiles via high-speed interconnects like Xe Link. Xe-HP was positioned as a flexible compute solution for server applications, while Xe-HPC powers discrete accelerators focused on extreme-scale HPC. Both build on the unified Xe architecture initially introduced in low-power integrated graphics, enabling shared software ecosystems via oneAPI. Xe-HPC, exemplified by the Ponte Vecchio GPU (now part of the Intel Data Center GPU Max Series), features up to 128 Xe-cores across two stacks, with each Xe-core containing eight vector engines and eight matrix engines for parallel processing. This configuration supports up to 128 ray tracing units and integrates HBM2e memory stacks providing high bandwidth for memory-intensive tasks, with capacities reaching 128 GB per GPU. The architecture delivers over 100 TFLOPS of FP16 performance, scaling further in lower-precision formats like BF16 for AI training and inference, making it suitable for supercomputing platforms such as the Aurora exascale system.[99] Deployed in the Data Center GPU Max Series (e.g., Max 1100 and 1550 models), Xe-HPC accelerators target HPC and AI workloads, offering up to 52 TFLOPS FP64 and 839 TFLOPS BF16 with XMX engines.[100] In contrast, Xe-HP (codenamed Arctic Sound) was demonstrated in configurations of up to four tiles delivering around 40 TFLOPS of FP32 performance for visualization, rendering, and compute tasks, but it was ultimately not commercialized as a standalone product line; its media-focused derivative shipped in 2022 as the Intel Data Center GPU Flex Series (Arctic Sound-M). Key to both architectures is the XMX (Xe Matrix Extensions) engine, which accelerates tensor operations for deep learning, providing dense matrix multiply capabilities up to 4096-bit widths per engine to boost AI throughput without specialized hardware silos.[101]
Xe-HPG Architectures
The Xe-HPG (High Performance Graphics) architecture represents Intel's dedicated microarchitecture for discrete graphics processing units (GPUs) optimized for gaming and media workloads, emphasizing ray tracing, AI acceleration, and high-bandwidth compute. Introduced as part of the broader Xe family, Xe-HPG builds on tile-based rendering principles but incorporates enhanced execution units, including up to 512 vector engines per GPU for improved parallelism in graphics pipelines. This architecture powers Intel's Arc discrete GPU lineup, focusing on consumer applications with support for modern APIs and features like hardware-accelerated mesh shading and variable-rate shading.[14][102] The first implementation of Xe-HPG arrived with the Alchemist (DG2) generation in 2022, branded as the Intel Arc A-series for desktops. These GPUs feature up to 32 Xe-cores, delivering peak floating-point performance in the 10-20 TFLOPS range for FP32 operations, depending on the model, such as the flagship Arc A770. Alchemist includes dedicated hardware for ray tracing with support for DirectX Raytracing (DXR) 1.1 and Vulkan RT, enabling real-time path tracing in games, alongside full DirectX 12 Ultimate compliance for features like sampler feedback and resource heap tier 3. Additionally, it integrates AV1 hardware decode and encode engines for efficient video processing, and introduces Intel Xe Super Sampling (XeSS), an AI-driven upscaling technology that leverages matrix multiply units for temporal super-resolution similar to NVIDIA DLSS. Alchemist GPUs are fabricated on TSMC's 6nm process and target mid-range gaming performance, with resizable BAR (ReBAR) support to optimize CPU-GPU data transfer.[14][103][104][105] Succeeding Alchemist, the Battlemage (BMG) generation launched in late 2024 as the Intel Arc B-series, refining Xe-HPG with the Xe2 variant for greater efficiency and scalability in discrete desktop cards.
Models like the Arc B580 incorporate 20 Xe-cores, achieving up to 70% higher performance per Xe-core compared to Alchemist equivalents, alongside a 50% improvement in performance per watt through architectural tweaks such as enhanced instruction scheduling and power gating. This uplift, combined with ongoing driver optimizations, addresses early Alchemist limitations in game compatibility and frame consistency, enabling stronger 1440p gaming with ray tracing. Battlemage retains core Xe-HPG features like AV1 support, XeSS (now version 2 with improved AI models), and ReBAR, while expanding ray tracing throughput via more efficient intersection engines. These GPUs continue to target desktop platforms, with no integrated variants in this high-performance tier, though related Xe2 tiles appear in client processors like Lunar Lake for hybrid rendering.[106][103][107] Looking ahead, Intel announced in October 2025 the Xe3P microarchitecture for the next generation of Arc discrete GPUs, branded as the C-series. This architecture, part of the broader Xe3 family, focuses on enhanced performance-per-watt efficiency and AI capabilities, including support for advanced inference workloads. Initial implementations include the Crescent Island data center GPU with 160 GB of LPDDR5X memory, optimized for AI inference. Consumer discrete variants are expected to follow, building on Xe-HPG foundations with improvements in ray tracing, upscaling technologies like XeSS 3, and broader API compliance, targeting further gains in gaming and content creation as of late 2025.[108]
Core Technologies
Graphics Processing Features
Intel Graphics Technology has evolved to support a range of modern graphics APIs, enabling compatibility with contemporary rendering pipelines and compute workloads. Early generations, such as Gen7 introduced with Ivy Bridge processors in 2012, provided foundational support for DirectX 11, which includes tessellation capabilities for enhanced geometry processing in 3D applications. By Gen9 in 2015, support extended to OpenGL 4.4 at launch, with driver updates later raising this to OpenGL 4.6, facilitating advanced shader models and multi-threaded rendering.[12] Subsequent architectures progressed further, with Gen11 in 2019 achieving OpenGL 4.6 conformance, allowing for core profile features like compute shaders and geometry instancing without extensions. The adoption of Vulkan has marked a significant shift toward low-overhead, cross-platform graphics, with Intel's Xe architectures (starting from Gen12 in 2020) delivering full Vulkan 1.3 compliance. This includes extensions for dynamic rendering and enhanced synchronization, optimizing performance in high-demand scenarios like real-time ray tracing previews.[109] Variable rate shading (VRS), introduced in Gen11 hardware alongside DirectX 12 support, allows developers to apply different shading rates across the screen, reducing computational load in less critical areas while preserving detail in focal regions, thereby improving frame rates by up to 30% in compatible titles.[110] Core rendering technologies in Intel Graphics emphasize efficiency and visual fidelity. Tessellation, hardware-accelerated since Gen7, subdivides polygons into finer meshes for smoother surfaces and complex deformations, supporting DirectX 11's hull and domain shaders without software emulation.
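The arithmetic behind VRS savings is straightforward: regions shaded at a coarse rate issue one pixel-shader invocation per block instead of one per pixel. A sketch, where the screen fraction and shading rate are illustrative assumptions:

```python
def vrs_invocation_savings(width: int, height: int,
                           coarse_fraction: float,
                           rate: tuple[int, int] = (2, 2)) -> float:
    """Fraction of pixel-shader invocations saved when `coarse_fraction` of
    the frame is shaded once per rate[0] x rate[1] pixel block."""
    total = width * height
    coarse = total * coarse_fraction          # pixels shaded at the coarse rate
    fine = total - coarse                     # pixels shaded per-pixel
    invocations = fine + coarse / (rate[0] * rate[1])
    return 1.0 - invocations / total

# Shading 60% of a 1080p frame at 2x2 rate skips 45% of invocations:
print(round(vrs_invocation_savings(1920, 1080, 0.6), 4))  # 0.45
```

Actual frame-rate gains are smaller than the invocation savings, since only part of frame time is spent in pixel shading, which is consistent with the "up to 30%" figure for shader-heavy titles.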
Anisotropic filtering, available across generations, enhances texture clarity at grazing angles by sampling more texels, with support up to 16x to minimize aliasing in distant or oblique views while incurring minimal performance overhead on modern hardware.[111] Unique to Intel's ecosystem, Deep Link technology facilitates resource sharing between the integrated GPU (iGPU) of 11th-generation and newer Core processors and discrete Arc GPUs, dynamically allocating power and memory for tasks like video encoding or AI inference to boost overall system efficiency by up to 2x in multi-GPU configurations.[112] For low-latency rendering, Intel Xe Low Latency (XeLL), integrated into Xe architectures from 2022, optimizes frame presentation by reducing input-to-photon delays through efficient queue management and synchronization, enabling smoother experiences in competitive gaming with latencies under 10 ms in optimized applications.[113]

In 2025, Intel advanced AI-driven rendering with Xe Super Sampling (XeSS) 2.0, an upscaling solution that uses deep learning to reconstruct higher-resolution images from lower-resolution input, delivering up to 4x frame-rate improvements in supported games while maintaining near-native quality. The feature is natively available on Arc A-series and B-series discrete GPUs, as well as on the integrated graphics of Core Ultra Series 2 processors, with SDK support extending to over 200 titles by mid-year.[114] Quick Sync integration complements these graphics features by offloading media tasks from the 3D pipeline.[12]

Video Acceleration Technologies
Intel Quick Sync Video (QSV) is Intel's dedicated hardware acceleration technology for video encoding and decoding, integrated into the graphics processing units of Intel processors. It uses a specialized Multi-Format Codec (MFX) engine to offload video processing tasks from the CPU, enabling faster performance and lower power consumption in multimedia applications. Introduced in January 2011 with the Sandy Bridge microarchitecture (Graphics Generation 6), QSV initially focused on H.264/AVC support for both encode and decode operations, marking a significant advancement in hardware-accelerated video processing.[115]

Over successive generations, Quick Sync has expanded its codec support and capabilities, evolving from basic formats to advanced, high-efficiency standards. The first generation, on Sandy Bridge's Gen6 graphics, supported MPEG-2 and H.264 with initial B-frame encoding limitations, while subsequent iterations like Gen7 (Ivy Bridge) and Gen7.5 (Haswell) enhanced H.264 performance and continued MPEG-2 decode support. Gen8 (Broadwell) added VP8 decode. Gen9 (Skylake) introduced HEVC/H.265 decode and encode as well as VP9 decode; Gen9.5 (Kaby Lake) extended these to 10-bit profiles, enabling broader compatibility for modern streaming and 4K content. Later, Gen12 (Tiger Lake, Xe-LP) incorporated AV1 decode for up to 8K resolution at 30 fps in 10-bit 4:2:0 format. Full AV1 encode and decode capabilities arrived with the Xe2 architecture (starting in 2024 in discrete Arc GPUs and integrated variants), supporting higher bit depths and frame rates.
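In practice, applications reach the MFX engine through frameworks such as FFmpeg, whose QSV encoders (`h264_qsv`, `hevc_qsv`, and, on newer hardware, `av1_qsv`) target this hardware. The sketch below only assembles an FFmpeg command line; the file names and bitrate are illustrative, and the exact flags a given build accepts depend on its configuration.

```python
# Sketch: building an FFmpeg command that uses Quick Sync (QSV) hardware
# encoding instead of a software encoder. File names/bitrate are illustrative.
import shlex

def qsv_transcode_cmd(src: str, dst: str, codec: str = "hevc_qsv",
                      bitrate: str = "8M") -> list:
    """Return an ffmpeg argument list that decodes via the QSV path and
    encodes with the requested QSV encoder (h264_qsv, hevc_qsv, av1_qsv)."""
    return [
        "ffmpeg",
        "-hwaccel", "qsv",    # use Quick Sync hardware decode where available
        "-i", src,
        "-c:v", codec,        # hand the encode to the MFX engine
        "-b:v", bitrate,
        "-c:a", "copy",       # audio passed through; only video is transcoded
        dst,
    ]

cmd = qsv_transcode_cmd("input_4k.mp4", "output_hevc.mp4")
print(shlex.join(cmd))
```

Because both decode and encode stay on the MFX engine, a pipeline like this avoids round-tripping frames through CPU-side software codecs, which is the source of the CPU-utilization savings described below.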
In the Xe3 architecture (announced in 2025, with deployment expected in 2026 processors), Quick Sync achieves up to 8K at 60 fps for AV1 and other codecs, with improved multi-format handling and advanced B-frame support across all major standards.[116][117] Quick Sync's MFX engine processes video pipelines independently of the main graphics rendering units, allowing simultaneous 3D graphics and video operations without interference; this separation ensures efficient handling of tasks like deinterlacing and scaling within the video domain. In practical applications such as HandBrake for video transcoding, Quick Sync significantly reduces CPU utilization, often by over 90% compared to software-only encoding, freeing resources for other workloads while maintaining high-quality output. For instance, encoding 4K HEVC video in HandBrake using QSV on Gen12 hardware can be 5–10 times faster than CPU-based methods, depending on the preset. The technology is widely adopted in media players, editors like Adobe Premiere Pro, and streaming servers, with APIs such as oneVPL available for developer integration.[118][119]

Virtualization and Security Features
Intel Graphics Virtualization Technology (GVT-g) was introduced in 2013 alongside the Haswell architecture, enabling mediated device passthrough for virtual machines by emulating virtual GPU instances with full PCIe and graphics functionality. This approach allows up to three virtual GPUs per physical GPU, providing near-native performance for graphics workloads in virtualized environments such as Xen and KVM hypervisors.[120] GVT-g supports key features like 2D/3D rendering and media decoding, validated on both Haswell and subsequent Broadwell processors, facilitating resource sharing among multiple VMs without hardware-level multiplexing.[121] Complementing GVT-g, Intel GVT-d provides direct device assignment, passing the entire GPU through to a single VM for exclusive access.[122] This method, also known as GPU passthrough, gives the guest OS full control of the graphics device, making it suitable for scenarios requiring undivided performance, such as dedicated user interfaces or high-fidelity rendering in virtualized setups.[123] Separately, Single Root I/O Virtualization (SR-IOV) support, available from newer architectures such as Tiger Lake onward, enables creation of multiple virtual functions from one physical GPU, enhancing scalability in data center and edge computing environments.[123]

On the security front, Intel integrated graphics from the Gen9 architecture (Skylake) onward support High-bandwidth Digital Content Protection (HDCP) 2.2, enabling secure transmission of premium 4K content over HDMI, DisplayPort, and DVI interfaces to prevent unauthorized copying.[124] This ensures compliance with content-protection standards for high-definition video playback, including HDR and Ultra HD Blu-ray, by encrypting streams between the GPU and display devices.[125] Earlier, from 2010 to 2015, Intel Insider provided hardware-based protection for premium HD content on Sandy Bridge and subsequent processors, certifying systems for secure playback of encrypted media from providers like CinemaNow, though the program was discontinued as HDCP 2.2 became the industry standard.[126] In 2025, Intel advanced trusted execution capabilities by integrating Software Guard Extensions (SGX) principles with GPU acceleration through Intel TDX Connect, establishing secure channels for confidential computing workloads involving accelerators.[127] This enables protected data processing in GPU environments, safeguarding sensitive AI and graphics tasks from host OS interference or external threats in virtualized setups.[128]

Capabilities and Support
Rendering and Compute Capabilities
Intel's Execution Units (EUs) in the Xe architecture form the core of its rendering and compute pipeline, with each EU featuring an 8-wide Single Instruction Multiple Data (SIMD) Arithmetic Logic Unit (ALU) configuration that supports SIMD8, SIMD16, and SIMD32 operations for flexible vector processing.[34] This design enables efficient handling of both floating-point and integer workloads: the 8 ALUs per EU allow parallel execution of up to 8 scalar operations per cycle in SIMD8 mode, scaling to wider vectors through dual-issue mechanisms.[34] Each EU is also simultaneously multithreaded, with up to 7 hardware threads whose context switching maintains high occupancy during compute-intensive tasks.[129]

Peak floating-point operations per second (FLOPS) for FP32 in Xe-based graphics, such as Gen12 implementations, are calculated as: number of EUs × clock speed (in GHz) × 16, where the factor of 16 reflects a fused multiply-add (FMA, counted as two floating-point operations) on each of the 8 SIMD lanes per cycle.[87] For instance, a Gen12 configuration with 96 EUs clocked at 1.35 GHz yields approximately 2.07 TFLOPS of FP32 throughput, a measure of the architecture's raw compute capacity for rendering and general-purpose GPU (GPGPU) tasks.[87] This metric highlights the scalability of Xe designs, where higher EU counts and clock speeds in discrete variants like Arc amplify performance for demanding applications.
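The peak-FP32 formula can be checked with a short calculation; the 96-EU, 1.35 GHz configuration is the Gen12 example given above.

```python
# Peak FP32 throughput for an Xe/Gen12 GPU, per the formula
# FLOPS = EUs x clock (GHz) x 16, where 16 = 8 SIMD lanes x 2 ops per FMA.

def peak_fp32_tflops(eus: int, clock_ghz: float) -> float:
    flops_per_cycle_per_eu = 8 * 2  # 8-wide SIMD, fused multiply-add = 2 FLOPs
    return eus * clock_ghz * flops_per_cycle_per_eu / 1000  # GFLOPS -> TFLOPS

print(peak_fp32_tflops(96, 1.35))  # Gen12 example from the text: ~2.07 TFLOPS
```

The same function scales directly to discrete parts: plugging in a higher EU count and clock shows why Arc-class configurations reach multi-TFLOPS territory.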
Compute capabilities in Xe graphics are bolstered by support for OpenCL 3.0, which enables portable parallel programming across Intel's GPU stack, beginning on Gen8 hardware and fully realized in Xe implementations.[130] Additionally, SYCL integration via Intel's oneAPI framework allows single-source C++ development targeting heterogeneous systems, optimizing workloads like AI inference and scientific simulations on Xe hardware.[131] While hardware threads are limited to 7 per EU, effective parallelism is achieved through sub-group sizes of up to 32, enabling work-groups that scale to hundreds of work-items per EU for balanced resource utilization.[132] Even so, integrated graphics may struggle with 4K video editing due to slower decoding and processing, resulting in preview lag on multi-track footage and longer export times compared to dedicated GPUs.

Ray tracing gains dedicated hardware in Xe variants such as Xe-LPG+ and Xe-HPG, whose Ray Tracing Units (RTUs) accelerate bounding volume hierarchy (BVH) traversal and intersection testing.[133] Each RTU attaches to an Xe-core and features dual traversal pipelines, offloading BVH navigation from shaders to fixed-function logic for real-time path tracing and global illumination effects.[134] This hardware reduces the computational overhead of ray-geometry intersections, enabling playable frame rates in ray-traced scenes without excessive software emulation.
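The per-EU thread and sub-group limits quoted earlier combine into a rough upper bound on in-flight work-items; this is a back-of-the-envelope sketch, not a statement of the scheduler's actual behavior.

```python
# Rough upper bound on concurrent work-items per EU: each EU runs up to
# 7 hardware threads, and each thread can carry a sub-group of up to 32 items.
HW_THREADS_PER_EU = 7
MAX_SUBGROUP_SIZE = 32

max_items_per_eu = HW_THREADS_PER_EU * MAX_SUBGROUP_SIZE
print(max_items_per_eu)  # 224 -> "hundreds" of work-items per EU
```

Real occupancy is usually lower, since register pressure and shared-memory usage can reduce how many threads an EU keeps resident.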
In 1080p gaming benchmarks, discrete Intel Arc graphics consistently deliver over 60 FPS in AAA titles at medium settings, such as a 105 FPS average in select modern games for the Arc A580.[135] Integrated Xe graphics, by comparison, achieve around 60 FPS in titles like DOOM Eternal at 1080p medium on configurations like the Core Ultra 9 285K's Arc iGPU, demonstrating viable performance for casual gaming.[136] For emerging Xe3-based integrated solutions in Panther Lake processors, early indications suggest up to a 50% uplift over prior Xe2 integrated graphics, targeting 60+ FPS at medium 1080p settings across a broader range of games.[36]

Multi-Display and Output Support
Intel Graphics Technology incorporates display engines supporting multiple independent display pipes, with up to three simultaneous outputs available since the Gen7 architecture introduced in Ivy Bridge processors.[137] Each pipe can handle resolutions up to 4K (4096x2160) at 60 Hz via compatible interfaces. However, 10-bit color depth at this resolution and refresh rate may be restricted in Intel graphics drivers by bandwidth constraints: HDMI 2.0, for example, cannot carry full 4K60 10-bit RGB 4:4:4 and requires chroma subsampling (e.g., 4:2:0), and some laptops with incomplete HDMI 2.1 implementations (which would otherwise allow Display Stream Compression) further limit higher color depths and resolutions.[138] This provides robust support for high-definition multi-display setups in both desktop and mobile configurations.[139]

Supported output interfaces have evolved across generations, with embedded DisplayPort (eDP) 1.4 serving as the standard for integrated laptop displays since early implementations.[139] In Xe-based architectures and later, such as those in Meteor Lake processors, HDMI 2.1 support enables uncompressed 4K at 120 Hz, or 8K at 60 Hz with Display Stream Compression (DSC), while DisplayPort 2.1 offers even higher bandwidth for advanced configurations.[140] The Xe3 architecture, featured in 2025's Panther Lake processors, maintains compatibility with these advanced display interfaces for high-resolution outputs.[36]

Multi-monitor capabilities are enhanced through DisplayPort Multi-Stream Transport (MST), introduced in Haswell-era graphics (Gen7.5) and refined in subsequent generations, which allows daisy-chaining of up to four displays from a single DisplayPort connection when using compatible monitors.[141] This setup reduces cable clutter and supports extended desktops or mirrored outputs, with Intel UHD Graphics 730 and 770 explicitly rated for four simultaneous displays in recent processors.[142] Additionally, Intel's Surround Gaming mode, available through the Intel Graphics Command Center, enables spanning games across multiple screens for immersive experiences, leveraging the unified rendering pipeline without dedicated hardware limitations.[143] Xe architectures such as Meteor Lake's support 8K resolution at 60 Hz with HDR decoding, allowing up to three 8K displays or mixed configurations like dual 4K at 144 Hz, all while maintaining power efficiency for mobile use.[144]

Integration Across Processor Families
Intel's integrated graphics technology is tailored to the performance tiers and use cases of its processor families, with the Core series receiving the most advanced implementations. Processors in the Intel Core Ultra 200V series (codenamed Lunar Lake) feature Arc-branded graphics based on the Xe2-LPG architecture, with up to 8 Xe2 cores (equivalent to 128 execution units, or EUs), enabling enhanced capabilities for AI-accelerated tasks and light gaming in thin-and-light laptops.[145] These configurations balance power efficiency and performance, with graphics clock speeds reaching up to 2.05 GHz in top SKUs like the Core Ultra 9 288V.

In contrast, the Pentium and Celeron processor lines incorporate more basic UHD Graphics implementations, from the Generation 9 (Gen9) architecture onward limited to a maximum of 24 EUs to suit entry-level computing needs such as office productivity and media playback.[146] These graphics lack support for newer features like Quick Sync Video Generation 12, which debuted in 11th-generation Core processors, resulting in reduced video encoding/decoding efficiency compared to higher-end families.[116] For example, models like the Pentium Gold G5400 use cut-down configurations at lower clock speeds, emphasizing cost-effectiveness over graphical intensity.

The Atom and E-core-focused processors, such as those in the Gemini Lake and Jasper Lake families, integrate UHD Graphics optimized for ultra-low-power scenarios like embedded systems and basic tablets. Gemini Lake variants, including the Celeron N4000, feature UHD Graphics with 12 EUs clocked up to 700 MHz, prioritizing energy efficiency for always-on devices with minimal thermal demands.[147] Jasper Lake processors move to Gen11-derived UHD Graphics in models like the Pentium Silver N6000, while maintaining a low-power envelope under 10 W TDP to support IoT and fanless designs without compromising basic display and decode functions.
As of September 2025, Intel has transitioned graphics driver support for 11th- through 14th-generation Core processors, along with associated Atom, Pentium, and Celeron graphics, to a legacy model, providing only critical security updates rather than new features or optimizations.[8] This shift directs development resources toward the Core Ultra 200 series and the upcoming Core Ultra 300 series (codenamed Panther Lake), which will introduce Xe3-based graphics for further advances in AI and efficiency.[148]

References
- https://en.wikichip.org/wiki/intel/microarchitectures/gen7.5
- https://en.wikichip.org/wiki/intel/microarchitectures/gen9.5
