Shared graphics memory
from Wikipedia

In computer architecture, shared graphics memory refers to a design where the graphics chip does not have its own dedicated memory, and instead shares the main system RAM with the CPU and other components.

This design is used with many integrated graphics solutions to reduce the cost and complexity of the motherboard design, as no additional memory chips are required on the board. There is usually some mechanism (via the BIOS or a jumper setting) to select the amount of system memory to use for graphics, which means that the graphics system can be tailored to only use as much RAM as is actually required, leaving the rest free for applications. A side effect of this is that when some RAM is allocated for graphics, it becomes effectively unavailable for anything else, so an example computer with 512 MiB RAM set up with 64 MiB graphics RAM will appear to the operating system and user to only have 448 MiB RAM installed.
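
As an illustration of that visible-memory reduction, the following minimal sketch (assuming a Linux system; the sysconf calls are POSIX, with _SC_PHYS_PAGES a common extension) prints the physical RAM the operating system can see, which on such a machine already excludes the graphics carve-out:

// Minimal sketch (Linux): report the RAM visible to the OS. On a machine
// whose firmware carves out memory for integrated graphics, the figure
// printed here is already reduced by that carve-out, e.g. 512 MiB
// installed minus 64 MiB reserved leaves roughly 448 MiB visible.
#include <unistd.h>
#include <cstdio>

int main() {
    long pages     = sysconf(_SC_PHYS_PAGES); // physical pages the OS manages
    long page_size = sysconf(_SC_PAGE_SIZE);  // bytes per page
    double mib = static_cast<double>(pages) * page_size / (1024.0 * 1024.0);
    std::printf("OS-visible physical RAM: %.0f MiB\n", mib);
    return 0;
}

On the 512 MiB example above, this would report roughly 448 MiB rather than the installed 512 MiB.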

The disadvantage of this design is lower performance: system RAM usually runs slower than dedicated graphics RAM, and there is more contention because the memory bus must be shared with the rest of the system. It may also cause performance problems for the rest of the system if it is not designed to account for some RAM being 'taken away' by graphics.

A similar approach with comparable results is the memory sharing used in some SGI computers, most notably the O2/O2+. The memory in these machines is a single fast pool (2.1 GB per second in 1996) shared between the system and graphics. Sharing is performed on demand, including pointer-redirection communication between the main system and the graphics subsystem. This design is called Unified Memory Architecture (UMA).

History


Most early personal computers used a shared memory design with graphics hardware sharing memory with the CPU. Such designs saved money as a single bank of DRAM could be used for both display and program. Examples of this include the Apple II computer, the Commodore 64, the Radio Shack Color Computer, the Atari ST, and the Apple Macintosh.[citation needed]

A notable exception was the IBM PC. Graphics display was facilitated by the use of an expansion card with its own memory plugged into an ISA slot.

The first IBM PC to use a shared memory architecture was the IBM PCjr, released in 1984. Video memory was shared with the first 128 KiB of RAM; the exact amount used for video could be reconfigured by software to meet the needs of the current program.

An early hybrid system was the Commodore Amiga, which could run as a shared-memory system but would preferentially load executable code into non-shared "fast RAM" when it was available.

Later, DirectX 6.1 introduced software support for shared graphics memory. Hardware technologies that further support shared graphics memory include Intel's DVMT and NVIDIA's TurboCache.

from Grokipedia
Shared graphics memory, also known as shared GPU memory or unified memory in some contexts, is a configuration in which an integrated graphics processing unit (iGPU) utilizes a dynamically allocated portion of the system's main memory (RAM) to store and process graphical data, rather than relying on separate dedicated video RAM (VRAM). This approach is common in processors from manufacturers such as Intel and AMD, where the iGPU is built into the CPU and shares the same physical memory pool to enable cost-effective and power-efficient graphics rendering for everyday computing tasks such as video playback, web browsing, and light gaming.

In systems with Intel integrated graphics, such as Intel HD or UHD Graphics, memory allocation occurs automatically via Dynamic Video Memory Technology (DVMT), which adjusts the shared amount based on real-time demand from the graphics workload and overall system needs, with a maximum cap often set in the BIOS (e.g., 128 MB up to full system RAM availability). For AMD Ryzen processors with integrated graphics, the default allocation typically reserves up to 50% of total system RAM as shared graphics memory, ensuring it remains accessible to the CPU when not in use by the iGPU. This shared model contrasts with discrete GPUs, which have their own dedicated VRAM for higher performance in demanding applications, but it promotes versatility in laptops and budget desktops by reducing hardware costs and thermal output.

Key advantages of shared graphics memory include seamless integration with the CPU for unified memory access, which minimizes data transfer overhead, and scalability with system RAM upgrades, allowing the iGPU to leverage more memory as installed capacity increases. However, it can lead to contention between the CPU and iGPU during intensive multitasking, potentially impacting overall system performance if RAM is limited. Users can often adjust allocation limits through BIOS settings or driver controls to balance graphics and system demands, though actual usage remains dynamic and workload-dependent. This technology has evolved with modern architectures to support features like variable graphics memory in AMD Ryzen AI systems, enhancing efficiency for AI and multimedia workloads.

Fundamentals

Definition

Shared graphics memory, also known as shared GPU memory or integrated graphics memory, refers to a hardware configuration in which the graphics processing unit (GPU) borrows a portion of the system's main memory (RAM) to perform graphical computations and store frame buffers, thereby eliminating the need for separate dedicated video RAM (VRAM). This design is integral to integrated graphics processors (iGPUs), where the GPU lacks its own onboard memory and instead accesses system RAM directly through hardware mechanisms.

A primary characteristic of shared graphics memory is its dynamic allocation from the available system RAM, which occurs automatically based on the demands of graphical workloads, with limits often set via BIOS configuration. In typical setups, up to 50% of the total system RAM can be allocated to the GPU, or specific fixed amounts such as 1 GB or 2 GB can be set via BIOS settings in some systems, depending on the hardware and installed memory. Modern enhancements, such as AMD's Variable Graphics Memory (VGM) introduced in 2025, allow even higher allocations, up to 75% of system RAM in supported Ryzen AI systems. For instance, in a computer with 8 GB of total RAM, an integrated GPU might dynamically allocate up to 4 GB for tasks like texture storage and rendering, adjusting in real time without a fixed reservation.

This hardware-level sharing distinguishes shared graphics memory from virtual memory paging, a software technique that swaps data to secondary storage such as disk drives to extend RAM capacity, and from software-emulated graphics, which rely on CPU processing without dedicated hardware acceleration. Instead, shared graphics memory enables direct, efficient access to physical system RAM by the embedded GPU, primarily in cost-effective integrated solutions.

Unified Memory Architecture

Unified Memory Architecture (UMA) is a shared memory model in which the CPU and the GPU access a single physical address space within the system's RAM, enabling both processors to read and write the same data locations without the need for explicit data copying between separate memory pools. This architecture contrasts with discrete GPU designs that maintain isolated video RAM, as it integrates graphics processing directly with system memory management to support efficient resource utilization in integrated systems.

Key components of UMA include a common memory pool overseen by the system's memory controller, which allocates portions of RAM dynamically for both general and graphics tasks. Cache coherence mechanisms ensure that modifications made by one processor, such as the CPU updating a texture, are immediately visible to the other, such as the GPU rendering from that texture, through hardware-level protocols. Implementations of UMA appear in Intel's integrated graphics solutions, where the iGPU leverages the processor's on-die memory controller for seamless access, and in ARM-based mobile system-on-chips (SoCs), such as those in Apple's M-series processors, which unify memory for the CPU, GPU, and neural engines to optimize bandwidth in power-constrained environments.

In UMA, address mapping aligns GPU memory requests with the system's physical RAM addresses, allowing the GPU to treat system memory as its own without translation overhead, often via direct virtual-to-physical mappings handled by the integrated memory controller (IMC). Bandwidth is shared between the CPU and GPU through the northbridge in older designs or the integrated IMC in modern ones, which arbitrates access to ensure equitable distribution while minimizing contention in graphics-intensive workloads. UMA facilitates data transfer in graphics pipelines, where textures, buffers, or vertex data can be directly accessed and modified by both processors, thereby reducing latency and overhead compared to copy-based transfers in non-unified systems.
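
As a rough illustration, the sketch below uses the standard Vulkan API to look for a memory type that is both device-local and host-visible, a combination that on UMA hardware typically marks the single unified pool (discrete GPUs may also expose a small combined staging window, so this is a heuristic rather than a definitive test):

// Sketch: enumerate Vulkan memory types and flag those that are both
// DEVICE_LOCAL (fast GPU path) and HOST_VISIBLE (CPU-mappable). On UMA
// systems the main heap usually carries both flags.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_0;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceMemoryProperties mem{};
        vkGetPhysicalDeviceMemoryProperties(gpu, &mem);
        for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
            VkMemoryPropertyFlags f = mem.memoryTypes[i].propertyFlags;
            if ((f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) &&
                (f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT))
                std::printf("memory type %u: device-local and host-visible "
                            "(unified pool candidate)\n", i);
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}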

Technical Operation

Memory Allocation

In shared graphics memory systems, allocation is primarily dynamic, enabling the GPU to use a portion of system RAM on demand, managed by the operating system, the GPU driver, or technologies like Intel's Dynamic Video Memory Technology (DVMT). DVMT adjusts the allocation based on real-time graphics workload demands, with a configurable pre-allocated minimum set via BIOS or UEFI (e.g., 128 MB or 256 MB) to ensure baseline availability, returning unused portions to the system for general use.

The allocation process relies on the memory management unit (MMU) to translate GPU virtual addresses into physical system RAM pages, allowing seamless mapping of graphics requests to physical memory regions. This mechanism supports the storage of frame buffers, used for rendering output, and textures, which are dynamically assigned within the shared pool to accommodate varying application needs.

Configuration options for shared memory are accessible in the BIOS, where users can adjust limits such as the maximum pre-allocated or dynamically usable amount. Traditional caps are often at 50% of total system RAM, but as of 2025, recent Intel systems support up to 87% of installed RAM via driver updates on supported processors like the Core Ultra Series 2. Similarly, modern AMD APUs feature variable graphics memory allocation, allowing higher shares for AI and multimedia tasks. For gaming applications, AMD systems let users configure a dedicated UMA frame buffer size, such as 2 GB via the BIOS or AMD Software performance profiles, which reserves a fixed portion of system RAM for the iGPU, complementing dynamic allocation by guaranteeing memory availability and potentially allowing higher graphical settings in some games.

A key challenge arises from memory fragmentation, particularly during sudden spikes in GPU usage that require rapid reallocations, potentially leading to inefficient page utilization and temporary performance hiccups in the shared pool.
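
On Windows, the operating system's current view of this dynamic budget can be inspected through the public DXGI interface; the sketch below queries the non-local (shared system RAM) segment of the first adapter. This is a minimal illustration: error handling is sparse, and the choice of adapter 0 is an assumption.

// Windows sketch: ask DXGI how much system RAM the OS currently budgets
// for the GPU's *shared* (non-local) memory segment. On an iGPU this
// non-local segment is the dynamically shared pool described above.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1; // first adapter

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO info{};
    // NON_LOCAL = system RAM shared with the GPU (LOCAL would be VRAM).
    if (SUCCEEDED(adapter3->QueryVideoMemoryInfo(
            0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &info))) {
        std::printf("shared budget : %llu MiB\n", info.Budget >> 20);
        std::printf("shared in use : %llu MiB\n", info.CurrentUsage >> 20);
    }
    return 0;
}

The Budget field reported here moves with system pressure, reflecting the dynamic, workload-dependent nature of the shared pool.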

Data Transfer and Access

In shared graphics memory systems, the GPU gains access to system RAM through direct memory access (DMA) mechanisms facilitated by the integrated memory controller, which is shared between the CPU and GPU on the same die or package. This direct pathway eliminates the need for data copying over a separate interconnect like PCIe, allowing the GPU to read and write to system memory as if it were local. For instance, in AMD's Accelerated Processing Units (APUs), the GPU connects directly to the memory controller, enabling seamless DMA operations without intermediate buffering. Similarly, Intel's integrated graphics implementations leverage the CPU's memory controller for GPU-initiated accesses, ensuring low-overhead data movement within the unified address space.

To maintain data consistency across CPU and GPU caches in these shared environments, cache coherence protocols are employed, often extending traditional CPU protocols like MESI (Modified, Exclusive, Shared, Invalid) to include GPU caches. These extensions track cache line states across both processors, invalidating or flushing lines as needed to prevent stale data during concurrent access. For example, selective caching approaches in GPU architectures allow coherence only for lines requiring CPU-GPU sharing, decoupling the GPU from full protocol overhead while preserving correctness. In AMD APUs, a hierarchical MESI-based directory integrates CPU and GPU caches, propagating snoop requests to ensure visibility of modifications. Such protocols are critical in integrated setups where both processors frequently access the same memory regions.

Data transfer in shared graphics memory relies on zero-copy operations, where the GPU directly reads from or writes to shared buffers in system RAM without explicit CPU-mediated copies. This approach leverages the unified address space to map buffers accessible by both processors, reducing overhead in workflows like compute shaders or rendering pipelines that interchange data between CPU and GPU tasks. However, if the GPU attempts to access unmapped or paged-out regions of system RAM, a page fault occurs, prompting the operating system to intervene by migrating the necessary pages into GPU-accessible memory. AMD's Heterogeneous-compute Interface for Portability (HIP) triggers host notification on faults, enabling dynamic page migration to the device while minimizing thrashing through prefetch hints. These mechanisms ensure correctness but introduce variable latency depending on migration costs.

Bandwidth considerations arise from the shared nature of the memory bus, where the CPU and GPU compete for DDR4 or DDR5 channels, leading to contention that can degrade performance under simultaneous loads. In integrated graphics, the GPU's high-bandwidth demands, such as texture fetches or framebuffer updates, can saturate the bus, reducing available throughput for CPU operations and vice versa. For DDR5-equipped systems, dual-channel configurations provide up to 89.6 GB/s of aggregate bandwidth, but contention may limit effective performance during mixed workloads. Compared to dedicated VRAM, system RAM access incurs higher latency due to shared controller arbitration and longer physical paths within the SoC. This latency gap amplifies the impact of contention, making bus scheduling optimizations essential for balanced operation.

Synchronization of concurrent CPU-GPU access to shared memory is managed through primitives like fences and semaphores in graphics APIs, ensuring ordered execution without race conditions.
Fences provide GPU-to-CPU signaling, where the GPU signals completion of a command queue operation, allowing the CPU to safely proceed with dependent tasks. In Vulkan, fences are submitted with queue operations and waited on by the host, which blocks until the GPU signals them, coordinating modifications to shared buffers. Semaphores, conversely, enable inter-queue or intra-GPU synchronization for finer-grained control over memory visibility. DirectX employs similar fences via ID3D12Fence, which the CPU waits on after queue submission to synchronize with GPU progress, supporting multi-engine scenarios in shared memory contexts. These tools prevent overwrites in shared buffers, with APIs providing timeline variants for efficient multi-threaded usage.
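
A minimal Vulkan fragment along these lines is sketched below; it assumes a VkDevice `device`, VkQueue `queue`, and recorded VkCommandBuffer `cmd` already exist (their creation is omitted for brevity) and shows the fence pattern described above:

// Hypothetical fragment: `device`, `queue`, and the recorded command
// buffer `cmd` are assumed to exist already (setup omitted). The fence
// lets the CPU wait until the GPU has finished touching a shared buffer.
VkFenceCreateInfo fci{VK_STRUCTURE_TYPE_FENCE_CREATE_INFO};
VkFence fence = VK_NULL_HANDLE;
vkCreateFence(device, &fci, nullptr, &fence);

VkSubmitInfo submit{VK_STRUCTURE_TYPE_SUBMIT_INFO};
submit.commandBufferCount = 1;
submit.pCommandBuffers    = &cmd;
vkQueueSubmit(queue, 1, &submit, fence);  // fence signals when the GPU is done

// Block the host until the GPU signals the fence; after this returns,
// the CPU may read or overwrite the shared buffer without racing the GPU.
vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
vkDestroyFence(device, fence, nullptr);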

Implementations

Integrated Graphics Processors

Integrated graphics processors incorporate the GPU directly onto the CPU die, enabling efficient sharing of system memory for graphics operations without dedicated VRAM. In Intel designs, this is exemplified by Intel UHD Graphics, introduced in 2017 as a rebranding of the earlier Intel HD Graphics line that debuted in 2010 with Westmere processors. These integrated GPUs access shared system memory through the CPU's integrated memory controller (IMC), allowing dynamic allocation from main RAM. For instance, in newer Core Ultra Series 2 processors supporting DDR5 memory, recent drivers (as of 2025) enable overrides of up to 87% of total system memory for enhanced VRAM allocation in demanding tasks.

AMD's approach to integrated graphics sharing began with the Fusion architecture in 2011, which unified the CPU and GPU on a single die for improved coherence, and evolved with Radeon Vega graphics integrated into Ryzen APUs starting in 2018 with the Raven Ridge series. In these configurations, the GPU shares access to system RAM alongside the CPU's L3 cache via the Infinity Fabric interconnect, facilitating low-overhead data movement and caching for graphics workloads. Ryzen APUs typically reserve a BIOS-configurable frame buffer from system memory, such as 1-2 GB in setups with 8 GB or more RAM, while allowing dynamic expansion as needed.

A primary advantage of on-die integration in both Intel and AMD processors is reduced latency for memory access, as the GPU bypasses external interconnects like PCIe to tap directly into the CPU's memory channels. Intel UHD Graphics further supports Quick Sync Video, a dedicated hardware accelerator for video encoding and decoding that leverages the shared memory model for efficient media processing in applications like streaming and transcoding. These integrated solutions are commonly deployed in non-gaming desktops and laptops for everyday tasks such as web browsing, office productivity, and light gaming. However, performance remains bounded by the shared memory subsystem's bandwidth; for example, a dual-channel DDR4-3200 configuration provides approximately 51.2 GB/s of total throughput, split between CPU and GPU demands.
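
The bandwidth figures quoted here follow from a simple formula, peak bandwidth = transfer rate × bytes per transfer × channels. The short program below is a back-of-envelope check, not vendor data; it reproduces the 51.2 GB/s DDR4 figure above and the 89.6 GB/s dual-channel DDR5 figure cited earlier:

// Back-of-envelope check: peak DRAM bandwidth
// = transfer rate (MT/s) x bus width (bytes per transfer) x channels.
#include <cstdio>

constexpr double gb_per_s(double mega_transfers, int bytes_per_transfer,
                          int channels) {
    return mega_transfers * 1e6 * bytes_per_transfer * channels / 1e9;
}

int main() {
    // Dual-channel DDR4-3200, 64-bit (8-byte) channels:
    std::printf("DDR4-3200 x2 : %.1f GB/s\n", gb_per_s(3200, 8, 2)); // 51.2
    // Dual-channel DDR5-5600, as an example of the newer standard:
    std::printf("DDR5-5600 x2 : %.1f GB/s\n", gb_per_s(5600, 8, 2)); // 89.6
    return 0;
}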

System-on-Chip Designs

System-on-chip (SoC) designs integrate the CPU, GPU, memory controller, and other components onto a single die, making shared graphics memory essential for efficient resource utilization in space- and power-constrained environments like mobile and embedded systems. In these architectures, the GPU accesses the same system RAM as the CPU, typically low-power DDR (LPDDR) memory, eliminating the need for dedicated video RAM and reducing overall chip complexity. This approach is particularly suited to ARM-based SoCs, where on-chip interconnects facilitate coherent data sharing between processing units.

ARM-based SoCs exemplify shared graphics memory through highly integrated designs that prioritize battery life and thermal efficiency. For instance, Qualcomm's Snapdragon series has employed shared LPDDR RAM with its Adreno GPUs since the Snapdragon S1 in 2007, featuring a 32-bit single-channel LPDDR interface at 200 MHz for a bandwidth of 1.6 GB/s. Similarly, Apple's A-series chips introduced a unified memory architecture with the A4 SoC in the iPhone 4 (2010), pairing a PowerVR SGX535 GPU with shared system memory to enable seamless CPU-GPU data access. Apple's design leverages this unified pool for its Metal API, allowing direct GPU manipulation of CPU-allocated buffers without explicit data copies.

Key characteristics of these SoCs include on-chip interconnects like ARM's CoreLink CCI-400, which provides cache coherency for up to three ACE-Lite masters such as Mali-T600 series GPUs, ensuring consistent memory views across CPU and GPU. Memory sharing occurs via low-power LPDDR4 or LPDDR5 interfaces, optimized for power efficiency; for example, a typical 64-bit LPDDR4 configuration at 3200 MT/s delivers around 25.6 GB/s of shared bandwidth, sufficient for mobile graphics workloads while minimizing energy draw. This integration extends to modems and other peripherals on the same die, further streamlining power delivery and interconnect latency in devices like smartphones and IoT modules.

Other prominent examples include MediaTek's Helio series, such as the Helio G81 with its Arm Mali-G52 GPU sharing system LPDDR RAM for gaming and multimedia tasks in budget smartphones. Samsung's Exynos SoCs, like the Exynos 9820 featuring a Mali-G76 MP12 GPU, similarly rely on unified system memory to support immersive 3D graphics in tablets and wearables, where physical constraints preclude discrete VRAM. These implementations are widespread in smartphones, tablets, and IoT devices, as shared memory reduces bill-of-materials costs and board space compared to dedicated graphics solutions.

A unique aspect of shared graphics memory in SoCs is firmware-level allocation, managed through the GPU's memory management unit (MMU) to map CPU page tables directly into the GPU address space. This enables real-time graphics in embedded applications, such as IoT interfaces, by allowing firmware to configure large-page mappings for textures and buffers without runtime overhead, supporting up to 48-bit physical addressing for efficient memory protection and data sharing.

Advantages and Limitations

Benefits

Shared graphics memory eliminates the need for dedicated video RAM (VRAM) chips, significantly reducing manufacturing costs by leveraging the system's main RAM instead. This cost-effective approach is particularly advantageous for budget systems, enabling entry-level PCs and devices under $500 to include graphics capabilities without the expense of a separate graphics processor.

The design also provides substantial power and space efficiency, as integrated graphics processors using shared memory typically consume far less electricity, often in the range of 5-15 W, compared to discrete GPUs that draw 75 W or more. This lower power draw supports compact form factors in thin laptops and mobile devices, while extending battery life through reduced energy demands during everyday use. In system-on-chip designs, this integration further amplifies power savings by minimizing inter-component data transfers.

Shared graphics memory simplifies overall system design, as memory allocation occurs automatically from the available system RAM, offering inherent flexibility without manual configuration. This enables reliable performance for basic computing tasks, including office productivity, video playback, and light gaming, such as achieving 1080p resolution at 30 frames per second in many titles on modern integrated setups. In AMD systems with integrated graphics, such as Ryzen APUs, users can manually configure a dedicated allocation of system RAM (e.g., 2 GB via the BIOS UMA Frame Buffer Size or Variable Graphics Memory settings in AMD Software) for the iGPU on systems with sufficient total RAM (at least 8 GB). This reservation allows games to detect additional VRAM, potentially enabling higher graphical settings, and can improve performance in VRAM-intensive titles through higher frame rates and reduced stuttering, while enhancing stability by preventing the operating system from reclaiming the allocated memory.

Furthermore, the integrated GPU produces less heat due to its lower power consumption, allowing for simpler cooling solutions and quieter operation in space-constrained devices. Shared memory also facilitates seamless hybrid switching, as seen in technologies like NVIDIA Optimus, where the system dynamically toggles between integrated graphics for low-power tasks and discrete options for demanding workloads, optimizing both performance and battery life.

Drawbacks

Shared graphics memory, while enabling graphics processing without dedicated hardware, introduces several performance limitations, primarily due to its reliance on system RAM, which has significantly lower bandwidth than dedicated VRAM. Typical dual-channel DDR4-3200 configurations provide around 51 GB/s of bandwidth, far below the 200-500 GB/s or more offered by modern GDDR6 VRAM in discrete GPUs, resulting in bottlenecks for graphics-intensive tasks. This disparity leads to slower access speeds and reduced frame rates, such as averaging under 30 FPS at 1080p medium settings in AAA titles even on high-end integrated GPUs like AMD's Vega 11.

Resource contention arises as the GPU draws from the same system RAM pool used by the CPU and other applications, potentially reducing available memory for multitasking and causing overall system slowdowns. For instance, allocating 1-2 GB for graphics can leave less RAM for CPU tasks, exacerbating latency on the shared bus, which can be up to twice as high as in dedicated setups due to arbitration overhead. This contention not only hampers GPU efficiency but also increases the risk of stuttering or reduced responsiveness in mixed workloads. In GPU-accelerated tasks in particular, light use (a few GB) of shared graphics memory can cause 10-30% delays, while heavy use (10 GB or more) may result in 5-10x slowdowns, reducing iterations per second (it/s) to a tenth or less in AI generation tasks, often making them impractical.

Scalability issues further limit shared graphics memory, as it is constrained by total system RAM capacity and bandwidth, making it unsuitable for high-resolution demands like 4K rendering or VR applications that require rapid, high-volume data access. Integrated GPUs often fail to exceed 20 FPS even at 1080p in demanding scenarios, rendering 4K or VR infeasible without severe performance degradation. Other constraints include challenges in overclocking, where strict voltage limits and shared-memory dependencies restrict potential gains, typically to the range of 10-30% depending on the platform and cooling. Moreover, if the GPU exhausts the shared memory pool, it can trigger system-wide instability, such as crashes or allocation failures, particularly in memory-intensive applications.
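
The bandwidth gap quoted above can be checked with simple arithmetic; the snippet below contrasts dual-channel DDR4-3200 with a hypothetical 256-bit GDDR6 card running 14 Gbps pins (an illustrative configuration, not a specific product):

// Illustrative comparison of shared system RAM vs. dedicated VRAM.
// GDDR6 bandwidth = per-pin rate (Gbps) x bus width (bits) / 8.
#include <cstdio>

int main() {
    double ddr4_shared = 3200e6 * 8 * 2 / 1e9;  // dual-channel DDR4-3200
    double gddr6_256   = 14.0 * 256 / 8;        // 14 Gbps pins, 256-bit bus
    std::printf("shared DDR4  : %.1f GB/s\n", ddr4_shared); // ~51.2
    std::printf("GDDR6 256-bit: %.1f GB/s\n", gddr6_256);   // 448.0
    return 0;
}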

Historical Development

Early Adoption

The origins of shared graphics memory trace back to the 1970s in early microcomputers, where severe cost limitations in hobbyist systems compelled designers to allocate portions of the limited system RAM for basic display functions rather than dedicating separate video memory.

During the 1980s, as personal computing expanded, shared graphics memory gained traction in integrated designs aimed at affordability and compactness. The original Apple Macintosh, released in 1984, featured fully integrated graphics that shared its entire 128 KB of system RAM with the frame buffer for a 512x342 bitmapped monochrome display, with approximately 22 KB reserved for video, enabling a graphical user interface without additional hardware. This approach contrasted with the IBM PC (1981) and its clones, which relied on add-in adapters like the Monochrome Display Adapter (MDA) and Color Graphics Adapter (CGA) with dedicated 4 KB and 16 KB of video RAM respectively, but it highlighted the potential for cost-effective integration in consumer-oriented machines.

In the 1990s, shared graphics memory became more standardized in mainstream PCs through chipset innovations targeting budget markets. Intel's 810 chipset, launched in 1999, integrated 3D graphics capabilities that dynamically allocated up to 32 MB of system SDRAM for video operations, eliminating the need for a separate graphics card and supporting low-end home computing with features like AGP 2x acceleration. This represented a key milestone in the shift from discrete VGA cards, which added $50-100 to system costs, to shared-memory designs in entry-level PCs, broadening accessibility for non-gaming users.

Modern Evolution

In the 2000s, shared graphics memory saw significant integration into mainstream processors, exemplified by Intel's Graphics Media Accelerator (GMA) series, introduced starting with the GMA 950 in 2005 as part of the Mobile 915 Express Chipset Family. This series utilized Dynamic Video Memory Technology (DVMT), an intelligent scheme that dynamically allocated system memory for graphics tasks, supporting up to 256 MB of shared memory to balance performance and efficiency in laptops and desktops without dedicated VRAM. Concurrently, AMD advanced shared memory through its ATI integrated graphics processors (IGPs), such as the Radeon Xpress series launched in 2005, which integrated graphics directly onto the chipset and relied on shared system RAM, laying foundational work for future accelerated processing units (APUs) by reducing latency in memory access for graphics applications.

The 2010s marked key milestones in unifying CPU and GPU memory architectures, enhancing performance on both desktop and mobile platforms. AMD's Fusion architecture, debuting in 2011 with the Llano series, featured a single die integrating CPU cores and Radeon graphics that shared DDR3 system memory, enabling up to 1 GB or more of allocatable graphics memory depending on total RAM, which improved bandwidth efficiency for tasks like video decoding and light gaming. Similarly, Intel's HD Graphics 3000, released in 2011 alongside second-generation Core processors (Sandy Bridge), supported DirectX 10.1 and utilized fully shared system memory, dynamically allocating up to 1.7 GB from DDR3, which boosted integrated graphics capabilities for everyday computing and basic 3D rendering. In mobile devices, the rise of Unified Memory Architecture (UMA) became prominent with ARM-based processors paired with integrated GPUs, starting around 2010, where shared DRAM access in SoCs such as those from Qualcomm and Apple optimized power efficiency for smartphones and tablets, supporting graphical user interfaces and early multimedia acceleration.

Entering the 2020s, shared graphics memory benefited from faster memory standards and specialized integrations, further elevating its role in mainstream computing. The introduction of DDR5 in 2021 provided enhanced bandwidth, up to 120 GB/s in dual-channel configurations, allowing integrated GPUs to allocate up to 8 GB of shared memory in systems with 16 GB or more RAM, significantly improving frame rates in graphics-intensive applications compared to DDR4. Apple's M-series chips, launched in 2020 with the M1 SoC, pioneered high-capacity unified memory architecture, where the CPU, GPU, and Neural Engine share a single high-bandwidth pool starting at 8 GB and scaling to 128 GB in later Pro and Max variants by 2024, enabling professional-grade graphics rendering and compute workloads with minimal data-copying overhead. For AI acceleration, Intel's Arc graphics, introduced in 2022 for discrete cards but integrated into Core Ultra processors by 2023, leveraged shared system memory alongside dedicated AI hardware like the XMX engine, supporting features such as Xe Super Sampling (XeSS) for upscaling and accelerating inference tasks in integrated setups.

In 2025, advancements continued with AMD introducing Variable Graphics Memory (VGM), a feature that dynamically reallocates system RAM to create dedicated graphics memory for integrated GPUs, optimizing for AI model sizes and quantization. Similarly, Intel added the Shared GPU Memory Override feature to Arc drivers for Core Ultra systems, allowing allocation of up to 87% of system RAM to the iGPU for enhanced VRAM in AI and graphics tasks.

Overall trends in the 2020s reflect shared graphics memory's growing dominance, driven by energy-efficient designs in portable devices.
By 2025, this architecture underpins the majority of laptops, prioritizing seamless integration over discrete alternatives for general use. Hybrid systems, combining shared integrated graphics for light tasks with switchable discrete GPUs for demanding workloads, have gained versatility in gaming laptops, allowing dynamic mode switching to optimize battery life and performance via technologies like NVIDIA Optimus or AMD SmartShift.
