Scratchpad memory

from Wikipedia

Scratchpad memory (SPM), also known as scratchpad, scratchpad RAM or local store in computer terminology, is an internal memory, usually high-speed, used for temporary storage of calculations, data, and other work in progress. In reference to a microprocessor (or CPU), scratchpad refers to a special high-speed memory used to hold small items of data for rapid retrieval. The name is analogous to the scratchpad of everyday life: a pad of paper for preliminary notes, sketches, or drafts. When the scratchpad is a hidden portion of the main memory, it is sometimes referred to as bump storage.

In some systems[a] it can be considered similar to the L1 cache in that it is the next closest memory to the ALU after the processor registers, with explicit instructions to move data to and from main memory, often using DMA-based data transfer.[1] In contrast to a system that uses caches, a system with scratchpads is a system with non-uniform memory access (NUMA) latencies, because the memory access latencies to the different scratchpads and the main memory vary. Another difference from a system that employs caches is that a scratchpad commonly does not contain a copy of data that is also stored in the main memory.

Scratchpads are employed to simplify caching logic and to guarantee that a unit can work without main-memory contention in a system employing multiple processors, especially in multiprocessor systems-on-chip for embedded systems. They are mostly suited to storing temporary results (such as would be found in the CPU stack) that typically would not need to be committed to main memory; when fed by DMA, however, they can also be used in place of a cache to mirror the state of slower main memory. The same issues of locality of reference apply to efficiency of use, although some systems allow strided DMA to access rectangular data sets. Another difference is that scratchpads are explicitly manipulated by applications. They may be useful for realtime applications, where predictable timing is hindered by cache behavior.

Scratchpads are not used in mainstream desktop processors, where generality is required so that legacy software can run from generation to generation even as the available on-chip memory size changes. They are better suited to embedded systems, special-purpose processors and game consoles, where chips are often manufactured as MPSoCs and where software is often tuned to one hardware configuration.

Examples of use

  • The Fairchild F8 of 1975 contained 64 bytes of scratchpad.
  • The TI-99/4A has 256 bytes of scratchpad memory on the 16-bit bus; it holds the processor registers of the TMS9900.[2]
  • Cyrix 6x86 is the only x86-compatible desktop processor to incorporate a dedicated scratchpad.
  • SuperH, used in Sega's consoles, could lock cachelines to an address outside of main memory for use as a scratchpad.
  • The R3000 in Sony's PS1 had a scratchpad instead of an L1 cache. It was possible to place the CPU stack here, an example of the temporary-workspace usage.
  • Adapteva's Epiphany parallel coprocessor features local-stores for each core, connected by a network on a chip, with DMA possible between them and off-chip links (possibly to DRAM). The architecture is similar to Sony's Cell, except all cores can directly address each other's scratchpads, generating network messages from standard load/store instructions.
  • Sony's PS2 Emotion Engine includes a 16 KB scratchpad, to and from which DMA transfers could be issued to its GS, and main memory.
  • Cell's SPEs are restricted purely to working in their "local-store", relying on DMA for transfers from/to main memory and between local stores, much like a scratchpad. In this regard, additional benefit is derived from the lack of hardware to check and update coherence between multiple caches: the design takes advantage of the assumption that each processor's workspace is separate and private. It is expected that this benefit will become more noticeable as the number of processors scales into the "many-core" future. Yet because some hardware logic has been eliminated, the data and instructions of applications on the SPEs must be managed through software if the whole task cannot fit in the local store.[3][4][5]
  • Many other processors allow L1 cache lines to be locked.
  • Most digital signal processors use a scratchpad. Many past 3D accelerators and game consoles (including the PS2) have used DSPs for vertex transformations. This differs from the stream-based approach of modern GPUs which have more in common with a CPU cache's functions.
  • NVIDIA's 8800 GPU running under CUDA provides 16 KB of scratchpad (NVIDIA calls it shared memory) per thread block when being used for GPGPU tasks. Scratchpad memory was also used in the later Fermi GPUs (GeForce 400 series).[6]
  • Ageia's PhysX chip includes a scratchpad RAM in a manner similar to the Cell; the theory of this specific physics processing unit is that a cache hierarchy is of less use than software managed physics and collision calculations. These memories are also banked and a switch manages transfers between them.
  • Intel's Knights Landing processor has 16 GB of MCDRAM that can be configured either as a cache or as scratchpad memory, or divided into some cache and some scratchpad memory.
  • The Movidius Myriad 2, a vision processing unit, is organized as a multicore architecture with a large multiported shared scratchpad.
  • Graphcore has designed an AI accelerator based on scratchpad memories.[7]

Alternatives


Cache control vs scratchpads


Some architectures, such as PowerPC, attempt to avoid the need for cacheline locking or scratchpads through the use of cache control instructions. By marking an area of memory with "Data Cache Block: Zero" (allocating a line but setting its contents to zero instead of loading from main memory) and discarding it after use ("Data Cache Block: Invalidate", signaling that main memory need not receive any updated data), the cache is made to behave as a scratchpad. Generality is maintained in that these are hints and the underlying hardware will function correctly regardless of actual cache size.
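As a rough illustration of this pattern, the sketch below uses GCC inline assembly for the dcbz ("Data Cache Block set to Zero") and dcbi ("Data Cache Block Invalidate") instructions. The buffer, helper names, and assumed 32-byte line size are illustrative only, and on many implementations dcbi is supervisor-only (user code would substitute dcbf or similar).

```c
/* Hedged sketch: using PowerPC cache-control instructions to treat one
   cache line as scratch space. Assumes a 32-byte line; real code should
   query the actual line size. */
#include <stdint.h>

#define CACHE_LINE 32

static inline void dcbz_line(void *p) {
    __asm__ volatile ("dcbz 0,%0" : : "r"(p) : "memory");
}
static inline void dcbi_line(void *p) {   /* often supervisor-only */
    __asm__ volatile ("dcbi 0,%0" : : "r"(p) : "memory");
}

void use_line_as_scratch(void)
{
    static uint8_t buf[CACHE_LINE] __attribute__((aligned(CACHE_LINE)));

    dcbz_line(buf);                /* allocate + zero the line: no DRAM read */
    for (int i = 0; i < CACHE_LINE; i++)
        buf[i] = (uint8_t)(i * 3); /* temporary work lives in the line */
    dcbi_line(buf);                /* discard: main memory never sees it */
}
```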

Shared L2 vs Cell local stores


Regarding interprocessor communication in a multicore setup, there are similarities between the Cell's inter-local-store DMA and a shared L2 cache setup as in the Intel Core 2 Duo or the Xbox 360's custom PowerPC: the L2 cache allows processors to share results without those results having to be committed to main memory. This can be an advantage where the working set for an algorithm encompasses the entirety of the L2 cache. However, when a program is written to take advantage of inter-local-store DMA, the Cell has the benefit that each local store serves as both the private workspace of its own processor and the point of sharing between processors; that is, viewed from one processor, the other local stores are on a similar footing to the shared L2 cache in a conventional chip. The tradeoff is memory wasted in buffering and programming complexity for synchronization, though this would be similar to precached pages in a conventional chip. Domains where using this capability is effective include:

  • Pipeline processing (where one achieves the same effect as increasing the L1 cache's size by splitting one job into smaller chunks)
  • Extending the working set, e.g., a sweet spot for a merge sort where the data fits within 8×256 KB
  • Shared code uploading, like loading a piece of code to one SPU and then copying it from there to the others to avoid hitting the main memory again

It would be possible for a conventional processor to gain similar advantages with cache-control instructions, for example by allowing prefetching to the L1 that bypasses the L2, or by an eviction hint that signals a transfer from L1 to L2 without committing to main memory; at present, however, no systems offer this capability in a usable form, and such instructions would in effect mirror the explicit transfer of data among the cache areas used by each core.

from Grokipedia
Scratchpad memory (SPM), also known as scratchpad RAM or local store, is a high-speed on-chip static RAM (SRAM) that is explicitly managed by software, allowing programmers or compilers to directly allocate and access data without hardware intervention. Unlike caches, which rely on automatic hardware mechanisms for data placement and coherence, SPM occupies a distinct address space with fixed access latencies, enabling predictable performance in time-critical applications. Gaining prominence as a viable alternative to caches in embedded systems during the early 2000s, SPM addresses the limitations of cache overheads by eliminating complex tag comparison and miss detection circuitry. It provides significant efficiency gains, including average energy reductions of 40% and area savings of 34% compared to equivalent cache configurations, making it ideal for power-constrained devices such as mobile phones, embedded processors, and communication systems. In modern architectures, SPM remains prominent in multicore processors and specialized accelerators, particularly for deep neural networks, where explicit data management facilitates reuse buffers and minimizes off-chip memory accesses, achieving efficiency improvements of up to three orders of magnitude over traditional CPU-based processing. Its software-controlled nature supports compiler optimizations and dynamic allocation techniques, enhancing applicability in real-time and domain-specific computing environments, including recent advancements in neural processing units and GPU architectures as of 2025.

Fundamentals

Definition and Characteristics

Scratchpad memory is a high-speed, software-managed on-chip static RAM (SRAM) that serves as temporary storage for data and instructions directly accessible by the processor core. Unlike caches, it lacks automatic hardware mechanisms for data placement and eviction, requiring explicit programmer or compiler control to load and unload content. This design positions scratchpad memory within the memory hierarchy between processor registers and main memory, facilitating low-latency access to critical program elements in embedded and resource-constrained systems.

Key characteristics of scratchpad memory include its fixed capacity, typically ranging from 1 KB to 512 KB, which supports direct addressing without the need for tag arrays or associativity logic found in caches. Access times are highly predictable, as there are no miss penalties or coherence overheads; every valid address within the scratchpad yields a deterministic hit latency, often comparable to or better than L1 cache access due to simplified circuitry. This predictability stems from the absence of hardware-managed replacement policies, making it particularly suitable for real-time applications where timing guarantees are essential.

In distinction from general-purpose memory structures, scratchpad memory focuses on minimizing latency for frequently accessed data in power- and area-limited environments, such as embedded processors, by integrating seamlessly as a software-controlled buffer. Its basic operational principle involves explicit data movement via software instructions or direct memory access (DMA), ensuring that only selected program segments reside on-chip at any time and enabling fully deterministic execution without the variability introduced by cache misses.
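To make the explicit-movement principle concrete, the following C sketch stages data through a scratchpad region; SPM_BASE and SPM_WORDS are hypothetical, platform-specific values, and memcpy() stands in for the DMA transfer a real platform would provide.

```c
/* Minimal sketch of explicit scratchpad management, assuming a
   memory-mapped SPM at a hypothetical fixed address. */
#include <string.h>
#include <stdint.h>

#define SPM_BASE  ((volatile int32_t *)0x40000000u)  /* assumed SPM address */
#define SPM_WORDS 1024                               /* assumed 4 KB SPM   */

int32_t sum_with_spm(const int32_t *src, int n)
{
    int32_t sum = 0;
    while (n > 0) {
        int chunk = n < SPM_WORDS ? n : SPM_WORDS;
        /* Software decides what lives on chip: stage one chunk into SPM
           (a real system would issue a DMA transfer here). */
        memcpy((void *)SPM_BASE, src, (size_t)chunk * sizeof(int32_t));
        for (int i = 0; i < chunk; i++)
            sum += SPM_BASE[i];        /* every access hits: fixed latency */
        src += chunk;
        n   -= chunk;
    }
    return sum;
}
```

Unlike a cache, nothing is evicted behind the program's back: the contents of the region change only when the code above changes them.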

Historical Development

The concept of scratchpad memory originated in the late 1950s and early 1960s as a form of fast, modifiable local storage to support control functions in early computer systems. Honeywell pioneered its use with the H-800 system, announced in 1958 and first installed in 1960, which incorporated a 256-word core-based scratchpad for multiprogram control, enabling efficient task switching without relying solely on slower main memory. By 1965, Honeywell's Series 200 minicomputers integrated scratchpad memories of varying sizes (up to 64 locations) as control storage, offering access speeds 2 to 6 times faster than main memory to enhance throughput in business applications. A significant milestone came in 1966 with the Honeywell Model 4200 minicomputer, which utilized the TMC3162, a 16-bit bipolar TTL scratchpad memory developed by Transitron and second-sourced by multiple manufacturers, including Fairchild and Sylvania; this marked one of the first commercial implementations of scratchpad for high-speed storage needs.

The 1980s saw widespread proliferation of scratchpad memory in digital signal processors (DSPs) for real-time applications, driven by the need for deterministic performance in embedded systems. Texas Instruments' TMS320 series, launched in 1983, incorporated on-chip scratchpad RAM as auxiliary storage for temporary data, complementing program and data memories to enable high-speed filtering and processing without external memory delays. This design choice in the TMS32010 and subsequent models facilitated efficient algorithmic implementations in speech and audio processing, establishing scratchpad as a staple in DSP architectures.

During the 1990s and 2000s, scratchpad memory expanded into embedded and multicore systems, particularly with the rise of power-constrained devices. A key example is the Cell Broadband Engine, designed starting in 2001 through the STI alliance (Sony, Toshiba, IBM), which featured 256 KB of local store per Synergistic Processing Unit (SPU) as explicitly managed scratchpad memory to support parallel workloads in gaming and scientific computing. This architecture, first shipped in Sony's PlayStation 3 in 2006, demonstrated scratchpad's efficacy in reducing memory-access latency for vector operations across multiple cores.

Post-2010 developments have integrated scratchpad into graphics processing units (GPUs) and explored hybrid designs for improved energy efficiency. NVIDIA's GPU architectures, such as those in the Kepler series from 2012 onward, treat shared memory as a configurable scratchpad, allowing programmers to allocate on-chip SRAM explicitly for thread-block data sharing, enhancing performance in parallel compute tasks. Concurrent research has focused on hybrid cache-scratchpad systems, where portions of cache are dynamically repurposed as software-managed scratchpad to minimize energy consumption; for instance, adaptive schemes remap high-demand blocks to scratchpad, achieving up to 25% energy savings in embedded processors while maintaining hit rates.

Design and Operation

Software Management Techniques

Software management techniques for scratchpad memory (SPM) primarily involve explicit, compiler-directed, and dynamic strategies to allocate data and code, ensuring efficient use of this software-controlled on-chip storage. Explicit allocation requires programmers or compilers to specify placements using language directives, such as pragmas (e.g., #pragma scratchpad), or runtime application programming interfaces (APIs) that map variables or functions to SPM regions. This approach allows precise control over data placement based on access patterns, often formulated as an optimization problem solved via integer linear programming (ILP) to minimize access times by assigning global and stack variables to SPM while respecting capacity constraints. For instance, the ILP model uses binary variables to decide allocations, incorporating profile-guided access frequencies, and achieves up to 44% runtime reduction through distributed stack management in embedded systems.

Compiler-based techniques leverage static analysis to automate SPM allocation, analyzing variable lifetimes, access frequencies, and interferences to map frequently accessed ("hot") data to SPM for performance gains. These methods profile program execution to identify liveness intervals and prioritize placements that reduce energy consumption, such as assigning basic blocks or functions to SPM banks, yielding up to 22% energy savings in embedded applications. Graph coloring extends this by modeling allocation as an interference graph in which nodes represent data objects and edges denote overlapping live ranges; colors correspond to SPM "registers" of fixed sizes, resolved via standard coloring algorithms adapted from register allocation to handle conflicts and ensure non-overlapping assignments. This technique partitions SPM into alignment-based units, splits live ranges at loop boundaries for better fit, and improves runtime by optimizing for smaller SPM sizes, as demonstrated in benchmarks like "untoast" where it enhances utilization without manual intervention.

Dynamic allocation methods enable runtime adaptation, particularly in multitasking environments, using compiler-inserted code or operating system (OS) support to load and evict data based on heuristics like access costs and future usage predictions. These approaches construct a data-program relationship graph to model program objects and greedily select transfers from off-chip memory to SPM at program points, avoiding runtime overheads like caching tags while maintaining predictability. In pointer-based applications, runtime SPM management can reduce execution time by 11-38% (average 31%) and DRAM accesses by 61% compared to static methods, with optimizations for dead-data exclusion further lowering energy by up to 31%. OS-level support may involve adaptive loading via system calls, ensuring portability across varying workloads.

Tools and frameworks facilitate these techniques through integrated compiler passes and simulators. Compiler frameworks like LLVM incorporate SPM allocation passes that perform static analysis and graph-based optimizations during code generation, enabling seamless integration with build systems for hybrid memory management. For energy profiling, simulation tools such as CACTI model SPM access energies and leakage, providing estimates for design-space exploration; CACTI computes capacitances and power based on technology parameters, supporting evaluations that confirm SPM's 20-30% lower energy than caches for equivalent sizes.
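The ILP formulations mentioned above commonly reduce, at their core, to a 0-1 knapsack over program objects. The following rendering is an illustrative sketch rather than the model of any single paper; here \(x_i \in \{0,1\}\) places object \(i\) in SPM, \(n_i\) is its profiled access count, \(s_i\) its size, \(C_{\mathrm{mm}}\) and \(C_{\mathrm{spm}}\) the per-access costs of main memory and SPM, and \(S_{\mathrm{spm}}\) the scratchpad capacity:

```latex
\max_{x}\;\; \sum_{i} n_i \,\bigl(C_{\mathrm{mm}} - C_{\mathrm{spm}}\bigr)\, x_i
\qquad \text{subject to} \qquad
\sum_{i} s_i\, x_i \;\le\; S_{\mathrm{spm}},
\qquad x_i \in \{0,1\}.
```

Each selected object saves \(n_i (C_{\mathrm{mm}} - C_{\mathrm{spm}})\) in total access cost; since this is a knapsack problem (NP-hard in general), compilers also fall back on greedy and coloring heuristics for large programs.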
Additionally, methods handling compile-time-unknown SPM sizes use binary search or OS queries within compiler flows to generate portable binaries, maintaining near-optimal allocations across hardware variants.

Performance Aspects

Advantages

Scratchpad memory provides deterministic access times, as data allocation is managed explicitly by software at compile time or runtime, eliminating the variability introduced by cache misses and hit/miss resolution hardware. This fixed latency is particularly beneficial for real-time systems, where worst-case execution time (WCET) guarantees are essential; techniques for WCET-centric allocation can reduce execution times by 5-80% compared to cache-based approaches by ensuring predictable memory behavior.

In terms of energy efficiency, scratchpad memory consumes significantly less power than traditional caches due to the absence of tag lookups, comparators, and replacement mechanisms, with studies reporting average energy savings of 40% per access in embedded systems (for instance, 1.53 nJ for a 2 KB scratchpad versus 4.57 nJ for an equivalent cache). These savings arise from the simpler access path and reduced overhead, making scratchpad memory ideal for power-constrained environments like battery-operated devices.

The hardware design of scratchpad memory is simplified, omitting complex caching logic such as tag arrays and replacement policies, which reduces die area by approximately 34% (e.g., 102,852 transistors for a 2 KB scratchpad versus 142,224 for a cache) and allows more silicon to be allocated to compute units. This streamlined design also contributes to overall performance improvements of up to 18% in CPU cycles for embedded benchmarks.

For bandwidth optimization, scratchpad memory enables high-throughput access to local data in parallel architectures, as direct addressing and DMA support facilitate efficient data movement without contention from global memory hierarchies; bandwidth-aware tiling techniques can achieve up to 4x performance gains by balancing on-chip space utilization and transfer rates in multi-core systems.
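The tiling idea can be sketched in C as follows. dma_start()/dma_wait() are hypothetical stand-ins for a platform DMA driver (here they block, whereas real hardware would overlap the copy with computation), spm_buf is assumed to be mapped to the scratchpad by the linker, and n is assumed to be a multiple of TILE.

```c
/* Sketch of double-buffered tiling through a scratchpad: process one
   tile while the next one is being fetched. */
#include <stdint.h>
#include <string.h>

#define TILE 256
static int32_t spm_buf[2][TILE];      /* assume the linker maps this to SPM */

static void dma_start(int32_t *dst, const int32_t *src, int words) {
    memcpy(dst, src, (size_t)words * sizeof *dst);   /* blocking stand-in */
}
static void dma_wait(void) { /* real code would poll a completion flag */ }

void scale_stream(int32_t *out, const int32_t *in, int n, int32_t k) {
    int cur = 0;
    dma_start(spm_buf[cur], in, TILE);               /* prime first tile */
    for (int base = 0; base < n; base += TILE) {
        dma_wait();                                  /* tile `cur` is ready */
        int nxt = cur ^ 1;
        if (base + TILE < n)                         /* prefetch next tile */
            dma_start(spm_buf[nxt], in + base + TILE, TILE);
        for (int i = 0; i < TILE; i++)
            out[base + i] = k * spm_buf[cur][i];     /* compute from SPM */
        cur = nxt;
    }
}
```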

Disadvantages

Scratchpad memory imposes significant overhead due to its requirement for explicit software management of data placement and movement, unlike hardware-managed caches that operate transparently. This manual or compiler-assisted allocation process increases development complexity and time, as developers must analyze access patterns and insert code for loading and evicting data, which can be error-prone and non-portable across different memory configurations.

The limited capacity of scratchpad memory, often constrained to small sizes such as a few kilobytes in embedded systems, necessitates frequent data swapping between the scratchpad and slower off-chip memory for larger workloads, introducing performance overhead and reducing overall efficiency. This size restriction relegates less frequently accessed data to DRAM, exacerbating latency in applications with extensive datasets.

Lack of transparency in scratchpad memory arises from the absence of automatic mechanisms like prefetching or eviction policies found in caches, placing the full burden of optimization on software and risking suboptimal utilization if tuning is inadequate. Without hardware support for coherence or miss detection, programmers must explicitly handle all data transfers, which can lead to inefficiencies in unpredictable access patterns.

Scalability issues in multicore environments stem from scratchpad memory's challenges in maintaining data coherency across cores, as it lacks built-in hardware protocols and requires additional software layers for synchronization, complicating management as core counts increase. This results in potential incoherence between local scratchpad copies and shared global memory, hindering efficient scaling in parallel workloads.

Comparisons

With Cache Memory

Scratchpad memory and cache memory represent two distinct approaches to on-chip memory management in processor architectures. While caches are hardware-managed with automatic data placement and eviction policies such as least recently used (LRU), scratchpad memory requires explicit software control for data allocation and deallocation, often handled by the programmer or compiler. This software-centric paradigm in scratchpad memory allows for precise optimization of memory usage tailored to application needs, whereas caches rely on hardware heuristics that may not align perfectly with specific workloads.

In terms of access predictability, scratchpad memory provides guaranteed hit times since all allocated data resides directly in the memory without the need for tag comparisons or associative lookups, eliminating the risk of cache misses and related pollution effects where irrelevant data evicts useful content. Caches, by contrast, introduce variability in access latency due to potential misses, compulsory loads, and conflicts, which can lead to unpredictable execution times, particularly in real-time systems. Locked caches, a variant where specific lines are pinned to avoid eviction, improve predictability over standard caches but still incur overhead from the locking hardware and potential mapping conflicts.

Regarding power consumption and area requirements, scratchpad memory exhibits lower overhead because it lacks the tag arrays, comparators, and replacement logic required for caches, resulting in reduced energy per access (approximately 36% less energy for certain benchmarks compared to equivalent caches). Caches demand significantly more area, with direct-mapped or set-associative designs requiring up to five times the transistors of a scratchpad for the same capacity (e.g., 75,000 vs. 15,000 transistors for 128 bytes), and consume correspondingly more power: the 40% energy savings reported for equivalent scratchpad configurations imply that caches draw up to 67% more on average (1/0.6 ≈ 1.67) due to these additional circuits. This makes scratchpad memory particularly advantageous in resource-constrained environments where minimizing static power and die space is critical.

Scratchpad memory is ideally suited for embedded applications with predictable, computationally intensive tasks such as multimedia or signal processing, where software can statically map frequently accessed data to ensure consistent performance. In contrast, caches excel in general-purpose scenarios characterized by irregular access patterns, such as desktop or server workloads, where hardware handles dynamic locality without extensive programming intervention. These differences highlight scratchpad's role in optimizing for predictability and efficiency in specialized domains over the flexibility of caches.

With Other On-Chip Memories

Scratchpad memory differs from register files primarily in capacity and access characteristics. Register files typically provide limited storage, often on the order of 128 to 512 bytes per core (equivalent to 32-128 32-bit registers), serving as the fastest on-chip storage for immediate access. In contrast, scratchpad memory offers much larger capacities, ranging from 4 KB to 64 KB or more, enabling storage of larger data structures or temporary arrays that exceed register file limits. However, register files achieve near-zero latency, being integrated directly into the execution pipeline, while scratchpad accesses incur 1-2 cycles due to their memory-like addressing and load/store operations.

Compared to the local store in the Cell processor, scratchpad memory shares the trait of being fully software-managed, requiring explicit data placement and transfers to avoid off-chip accesses. Both structures provide predictable, low-latency on-chip storage without hardware caching overheads. However, the Cell's local store is tightly integrated with its Synergistic Processing Elements (SPEs), limited to 256 KB per SPE and relying exclusively on DMA for data movement between main memory and the local store, emphasizing streaming workloads. General-purpose scratchpad memory, by contrast, supports broader applicability across processor architectures, often allowing direct load/store instructions without mandatory DMA, though it lacks the Cell's specialized vector processing optimizations.

Scratchpad memory also contrasts with shared L2 caches in terms of access scope and overheads. As a private per-core structure, scratchpad provides dedicated, low-latency access (typically 1-2 cycles) without contention from other cores, making it suitable for localized data reuse. Shared L2 caches, however, serve multiple cores with higher average latencies (often 10-20 cycles) due to bank conflicts and directory-based coherency protocols, which introduce additional latency for maintaining data consistency across cores. Private scratchpads thus eliminate coherency overheads, but require explicit software intervention for any data sharing between cores.

Emerging hybrid approaches integrate scratchpad memory with caching mechanisms to balance predictability and automation. For instance, designs like Stash enable software-managed scratchpad regions that are globally addressable like caches, supporting implicit data movement and lazy writebacks to reduce programming effort while preserving low-latency benefits. These hybrids have demonstrated up to 12% performance gains and 32% energy savings over pure cache or scratchpad systems in GPU workloads. More recent designs, such as COMPAD (2023) and M3D-MDA (2025), further integrate scratchpad and cache elements for improved energy efficiency in heterogeneous systems.

Applications

In Digital Signal Processors

Scratchpad memory found early adoption in digital signal processors (DSPs) during the 1980s, particularly in the Texas Instruments TMS320 series, where on-chip RAM served as a fast scratchpad for storing filter coefficients and buffers in audio and telecommunications applications. For instance, the TMS32010 and TMS32020 utilized their limited on-chip RAM (144 words for the TMS32010 and up to 544 words for the TMS32020) to hold coefficients for finite impulse response (FIR) filters (e.g., length-80 bandpass filters at 10 kHz sampling) and buffers for intermediate results in real-time tasks like echo cancellation and speech coding. These implementations enabled efficient processing of audio signals, such as 128-tap digital voice echo cancellers compliant with CCITT G.165 standards and linear predictive coding (LPC) vocoders at 8 kHz sampling, by keeping critical data on-chip to minimize external memory accesses.

In DSP architectures, scratchpad memory is often integrated with dual-access ports to support simultaneous read and write operations, which is essential for real-time tasks requiring high throughput. The TMS320C54x family, for example, features Dual-Access RAM (DARAM) blocks that allow two independent accesses per cycle, facilitating parallel instruction fetch and data manipulation without conflicts in applications like filtering and buffering. This design extends to later iterations, such as the TMS320C4x, where on-chip dual-access RAM enables efficient handling of operands in DSP algorithms, including matrix-vector multiplications and lattice filters, by organizing memory into independent blocks for concurrent operations.

The use of scratchpad memory in DSPs significantly enhances performance by enabling low-power, high-throughput operations, such as fast Fourier transform (FFT) computations, while avoiding stalls from slower dynamic RAM (DRAM). In the TMS320VC5505 DSP, for instance, on-chip scratchpad allocation for FFT data and twiddle factors supports 1024-point complex FFTs with active power consumption below 0.15 mW/MHz, allowing real-time processing in power-constrained environments like portable audio devices without external DRAM dependencies. This approach reduces energy overhead and latency, as demonstrated in early TMS32020 implementations where a 256-point complex FFT completed in 4.375 ms at 5 MHz entirely using on-chip RAM, prioritizing deterministic access over cache unpredictability.

A notable example of scratchpad integration in modern DSPs is found in Analog Devices Blackfin processors, such as the ADSP-BF54x series, which include configurable 4K-byte scratchpad SRAM blocks within the Level 1 (L1) data memory for optimized data storage. These blocks operate at full core clock speed and can be allocated for stack, local variables, or temporary buffers in real-time tasks, with configuration options via the L1 Data Memory Controller to ensure non-cacheable, low-latency access excluded from direct memory access (DMA) channels. In Blackfin architectures, the scratchpad supports efficient execution of DSP operations like multiply-accumulate instructions and circular buffering through data address generators, enhancing throughput for applications in audio and video signal handling.

Recent advances (as of 2025) have extended scratchpad memory optimizations to modern DSP applications, including AI-enhanced signal processing on embedded processors. For example, heterogeneous SRAM-based scratchpad designs have been proposed to balance reliability and energy efficiency in low-voltage DSP tasks, achieving up to 2x improvements in energy efficiency for applications like video decoding.
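The coefficient-and-delay-line pattern described above can be sketched in C; the ".spm" linker section, the tap count, and the function names are hypothetical rather than drawn from any particular DSP toolchain.

```c
/* Sketch of a FIR filter whose coefficients and circular delay line both
   live in scratchpad (placed here via an assumed ".spm" linker section),
   so every multiply-accumulate reads on-chip memory. */
#include <stdint.h>

#define NTAPS 128   /* e.g., an echo-canceller-sized filter */

static int16_t coeff[NTAPS] __attribute__((section(".spm")));
static int16_t delay[NTAPS] __attribute__((section(".spm")));
static int     head;

int32_t fir_step(int16_t x)
{
    delay[head] = x;                        /* newest sample into the ring */
    int32_t acc = 0;
    int idx = head;
    for (int k = 0; k < NTAPS; k++) {       /* y[n] = sum coeff[k]*x[n-k] */
        acc += (int32_t)coeff[k] * delay[idx];
        idx = (idx == 0) ? NTAPS - 1 : idx - 1;
    }
    head = (head + 1) % NTAPS;
    return acc;
}
```

On a real DSP the loop body would map onto single-cycle multiply-accumulate instructions, with the wraparound handled by the hardware's circular-addressing data address generators.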

In Embedded and Multicore Systems

Scratchpad memory (SPM) serves as a compelling on-chip storage solution in embedded systems, particularly for computationally intensive applications where power and area are paramount. Unlike caches, which rely on hardware-managed automatic replacement policies, SPM requires explicit software control for data and code placement, enabling designers to optimize for specific workloads. This approach has been shown to reduce energy consumption by an average of 40% compared to cache-based systems, primarily due to the absence of complex tag comparisons and associative lookups. Additionally, SPM offers a 46% reduction in area-time product, making it suitable for resource-constrained embedded devices such as microcontrollers and embedded processors.

In real-time embedded systems, SPM's deterministic access times enhance timing predictability, which is crucial for meeting hard deadlines without the variability introduced by cache misses or evictions. This predictability stems from SPM's fixed latency for all valid addresses, avoiding the non-deterministic behavior of caches in contended scenarios. Power savings further support its adoption, as SPM eliminates the energy overhead of coherence protocols, allowing for simpler hardware implementations that consume less dynamic power during accesses. For instance, dynamic SPM units have been proposed to adaptively manage allocation at runtime, balancing predictability with flexibility in evolving real-time tasks.

Transitioning to multicore embedded systems, SPM extends its benefits to parallel architectures by facilitating efficient data sharing and locality management across cores, often in hybrid hierarchies combining SPM with caches or main memory. Runtime-guided management techniques leverage task dependencies to allocate data to SPM, overlapping transfers with computation and using locality-aware scheduling to minimize inter-core data movement. This results in performance improvements of up to 16% in 32-core configurations, alongside reductions in on-chip network traffic by 31% and power consumption by 22%, making SPM ideal for power-sensitive multicore SoCs in automotive and IoT applications.

Shared SPM designs with ownership mechanisms enable time-predictable inter-core communication in multicore systems, where cores temporarily own portions of the SPM via explicit ownership transfers to avoid contention (a minimal sketch of this handoff pattern follows below). Such architectures ensure bounded worst-case execution times, critical for safety-critical embedded multicore platforms. Complementing this, scratchpad-centric operating systems (OS) for multicore environments arbitrate shared resources at the OS level, separating application logic from I/O operations temporally to achieve contention-free execution. These OS designs deliver up to 2.1× performance gains over traditional cache-based approaches while maintaining predictability for hard real-time tasks on commercial-off-the-shelf multicore hardware.

In multicore embedded contexts, SPM also supports advanced features like data duplication and replication for fault tolerance, mitigating multi-bit upsets in radiation-prone environments without significant overhead. Optimal data allocation algorithms further enhance efficiency by solving placement problems in polynomial time for exclusive data copies across cores, reducing memory conflicts in concurrent software. Overall, these applications underscore SPM's role in enabling scalable, low-power multicore embedded systems where predictability and energy efficiency outweigh the management complexity.
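As a loose illustration of such ownership-based handoff (not any specific platform's API), the following C sketch passes a buffer from one core to another through a shared scratchpad region; the addresses, the READY flag layout, and the barrier macro are hypothetical, and a real system would use its own synchronization primitives.

```c
/* Minimal sketch of inter-core sharing through a shared scratchpad:
   core 0 fills a region, then hands ownership to core 1 via a flag. */
#include <stdint.h>

#define SHARED_SPM ((volatile int32_t *)0x50000000u)  /* assumed address  */
#define READY      (*(volatile int32_t *)0x50001000u) /* ownership flag   */
#define BARRIER()  __sync_synchronize()               /* GCC full fence   */

void producer(const int32_t *data, int n)   /* runs on core 0 */
{
    for (int i = 0; i < n; i++) SHARED_SPM[i] = data[i];
    BARRIER();
    READY = n;              /* hand ownership of the region to core 1 */
}

int32_t consumer(void)                      /* runs on core 1 */
{
    int n;
    while ((n = READY) == 0) ;              /* spin until handed over */
    BARRIER();
    int32_t sum = 0;
    for (int i = 0; i < n; i++) sum += SHARED_SPM[i];
    READY = 0;                              /* return ownership */
    return sum;
}
```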
As of 2025, recent advances in embedded multicore systems include interactive dynamic SPM management strategies that improve allocation for multi-threaded applications, achieving up to 30% energy savings through compiler-directed transfers in heterogeneous many-core architectures. Additionally, integration of non-volatile memory (NVM) with SPM has enhanced energy efficiency and persistence in IoT and automotive multicore SoCs.
