Memory architecture

Memory architecture describes the methods used to implement electronic computer data storage in a manner that combines the fastest, most reliable, most durable, and least expensive ways to store and retrieve information. Depending on the specific application, one of these requirements may have to be compromised to improve another. Memory architecture also explains how binary digits are converted into electrical signals and stored in memory cells, and how those memory cells are structured.

For example, dynamic memory is commonly used for primary data storage because of its fast access speed. However, dynamic memory must be repeatedly refreshed dozens of times per second, or the stored data decays and is lost. Flash memory allows long-term storage over a period of years, but it is much slower than dynamic memory, and its storage cells wear out with frequent use.
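To make the refresh requirement concrete, here is a minimal C sketch that models a controller walking through DRAM rows on a timer tick; the row count, retention window, and one-row-per-tick policy are illustrative assumptions for the example, not the behavior of any particular device.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_ROWS     8192   /* illustrative row count */
#define RETENTION_MS 64     /* data decays if a row is not refreshed within this window */

static uint32_t last_refresh_ms[NUM_ROWS];

/* Called periodically (e.g. from a timer interrupt): refresh one row per tick
 * so every row is revisited well inside the retention window. */
void refresh_tick(uint32_t now_ms)
{
    static uint32_t next_row = 0;
    last_refresh_ms[next_row] = now_ms;   /* a real controller would issue an activate/precharge here */
    next_row = (next_row + 1) % NUM_ROWS;
}

/* A row's contents are only trustworthy if it was refreshed recently enough. */
bool row_data_valid(uint32_t row, uint32_t now_ms)
{
    return (now_ms - last_refresh_ms[row]) <= RETENTION_MS;
}
```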

Similarly, the data bus is often designed to suit specific needs such as serial or parallel data access, and the memory may be designed to provide for parity error detection or even error correction.
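As an illustration of the parity checking mentioned above, the following C sketch computes and verifies an even-parity bit for a 32-bit data word; the word width and function names are chosen for the example only.

```c
#include <stdint.h>
#include <stdbool.h>

/* Even parity over a 32-bit word: the stored parity bit makes the total
 * number of 1 bits even, so a single flipped bit becomes detectable. */
uint8_t parity_bit(uint32_t word)
{
    uint8_t p = 0;
    while (word) {
        p ^= (uint8_t)(word & 1u);
        word >>= 1;
    }
    return p;   /* 1 if the word has an odd number of 1 bits */
}

/* True if the word read back is consistent with its stored parity bit. */
bool parity_ok(uint32_t word, uint8_t stored_parity)
{
    return parity_bit(word) == stored_parity;
}
```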

The earliest memory architectures are the Harvard architecture, which has two physically separate memories and data paths for program and data, and the Princeton architecture, which uses a single memory and data path for both program and data storage.[1]

Most general-purpose computers use a hybrid split-cache modified Harvard architecture. To an application program the machine appears to be a pure Princeton architecture with gigabytes of virtual memory, but internally, for speed, it operates with an instruction cache physically separate from a data cache, more like the Harvard model.[1]
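The split-cache idea can be sketched in a few lines of C: a single unified backing memory models the Princeton view the program sees, while instruction fetches and data loads go through physically separate caches. The direct-mapped, one-word-per-line caches and the sizes are simplifying assumptions for illustration, not a faithful model of any real cache hierarchy.

```c
#include <stdint.h>

#define MEM_WORDS   4096   /* single unified backing memory: the "Princeton" view */
#define CACHE_LINES   64   /* each cache is direct-mapped, one word per line (toy sizes) */

static uint32_t memory[MEM_WORDS];

typedef struct {
    uint32_t tag[CACHE_LINES];
    uint32_t data[CACHE_LINES];
    uint8_t  valid[CACHE_LINES];
} cache_t;

static cache_t icache, dcache;   /* physically separate caches: the "Harvard" part */

static uint32_t cache_read(cache_t *c, uint32_t addr)
{
    uint32_t line = addr % CACHE_LINES;
    if (!c->valid[line] || c->tag[line] != addr) {
        /* miss: fill the line from the unified backing memory */
        c->data[line]  = memory[addr];
        c->tag[line]   = addr;
        c->valid[line] = 1;
    }
    return c->data[line];
}

/* Instruction fetches and data loads use different caches,
 * even though both ultimately address the same memory. */
uint32_t fetch_instruction(uint32_t pc) { return cache_read(&icache, pc); }
uint32_t load_data(uint32_t addr)       { return cache_read(&dcache, addr); }
```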

DSP systems usually have a specialized, high-bandwidth memory subsystem with no support for memory protection or virtual memory management.[2] Many digital signal processors have three physically separate memories and data paths for program storage, coefficient storage, and data storage. A series of multiply–accumulate operations fetches from all three areas simultaneously to efficiently implement audio filters as convolutions.
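The following C sketch shows that multiply–accumulate pattern as a plain FIR convolution; on a real DSP the coefficient and sample arrays would typically reside in the separate memories described above, so both operands of each MAC can be fetched in the same cycle. The function name and parameters are illustrative.

```c
#include <stddef.h>

/* FIR filter as a convolution: each output sample is a sum of
 * multiply-accumulate operations over coefficients and recent inputs. */
void fir_filter(const float *coeff, size_t num_taps,
                const float *input, float *output, size_t num_samples)
{
    for (size_t n = 0; n < num_samples; n++) {
        float acc = 0.0f;
        for (size_t k = 0; k < num_taps; k++) {
            /* skip terms that would read before the start of the input */
            if (n >= k)
                acc += coeff[k] * input[n - k];   /* one multiply-accumulate */
        }
        output[n] = acc;
    }
}
```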

References

  1. "Memory Architectures: Harvard vs Princeton".
  2. Robert Oshana. DSP Software Development Techniques for Embedded and Real-Time Systems. 2006. Chapter 5, "DSP Architectures", p. 123. doi:10.1016/B978-075067759-2/50007-7