Cache coherency refers to the consistency of data stored in local caches of a shared resource. A cache is used as temporary storage for frequently used data from memory, and this shared resource may also be stored in the local caches of other processors. Cache coherence is a special case of memory coherence. When clients in a system maintain caches of a common memory resource, problems may arise with inconsistent data.

Multiprocessor systems with caches and a shared memory space need to resolve the problem of keeping shared data coherent. Each local cache contains an image of a portion of memory; if a word is altered in one cache, it could conceivably invalidate a word in another cache. To prevent this, the other processors must be alerted that an update has taken place so that corresponding changes can be made. If one client has a copy of a memory block from a previous read and another client then changes that memory block, the first client could be left with an invalid cached copy of memory without any notification of the change. Cache coherence is intended to manage such conflicts and maintain consistency between cache and memory.

Cache coherence approaches have generally been divided into software and hardware approaches. The classification of cache coherency protocols is shown in Figure 5. Software cache coherence schemes attempt to avoid the need for additional hardware circuitry and logic by relying on the compiler and operating system to deal with the problem. Software approaches are attractive because the overhead of detecting potential problems is transferred from run time to compile time, and the design complexity is transferred from hardware to software.

Hardware-based solutions are generally referred to as cache coherence protocols. These solutions provide dynamic recognition, at run time, of potential inconsistency conditions. Because the problem is dealt with only when it actually arises, caches are used more effectively, leading to improved performance over a software approach.

Snoopy protocols distribute the responsibility for maintaining cache coherence among all of the cache controllers in a multiprocessor. A cache must recognize when a line that it holds is shared with other caches. When an update action is performed on a shared cache line, it must be announced to all other caches by a broadcast mechanism. Each cache controller is able to "snoop" on the network to observe these broadcast notifications and react accordingly.

Directory protocols collect and maintain information about where copies of lines reside. Typically, there is a centralized controller that is part of the main memory controller, and a directory that is stored in main memory. Any processor that needs to write must first send a request to the controller. The controller also ensures that only one processor at a time is writing to a given cache line.

A read of X cannot instantaneously see the value written for X by some other processor. If, for example, a write of X on one processor precedes a read of X on another processor by a very small time, it may be impossible to ensure that the read returns the value written, since the written data may not even have left the processor at that point. The issue of exactly when a written value must be seen by a reader is defined by a memory consistency model. Coherence and consistency are complementary: coherence defines the behavior of reads and writes to the same memory location, while consistency defines the behavior of reads and writes with respect to accesses to other memory locations.
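As a toy illustration (not real hardware), the stale-copy problem can be modeled with two caches that each keep a private copy of a memory word and never notify each other of writes; the names `Cache` and `MEMORY` are invented for this sketch:

```python
# Toy model of the incoherence problem: two caches each keep a private
# copy of location "X" with no mechanism to notify each other of writes.
MEMORY = {"X": 0}

class Cache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}                      # local copies of memory words

    def read(self, addr):
        if addr not in self.lines:           # miss: fetch from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]              # hit: possibly stale!

    def write(self, addr, value):
        self.lines[addr] = value             # update the local copy
        self.memory[addr] = value            # write through to memory

cache_a, cache_b = Cache(MEMORY), Cache(MEMORY)
cache_a.read("X")         # A caches X = 0
cache_b.write("X", 42)    # B updates X, but A is never told
print(cache_a.read("X"))  # prints 0 -- A still sees its stale copy
```

The last read hits in cache A and returns 0 even though memory already holds 42, which is exactly the conflict a coherence protocol is meant to prevent.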
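A snoopy write-invalidate protocol can be sketched in a few lines: every cache watches a shared bus, and when another cache broadcasts a write to a line it holds, it invalidates its copy. This is a simplified, MSI-style sketch under assumed names (`Bus`, `SnoopyCache`), not a full protocol implementation:

```python
# Simplified snoopy write-invalidate (MSI-style) sketch: every cache
# observes a shared bus and invalidates its copy when another cache
# broadcasts a write to the same line.
MODIFIED, SHARED, INVALID = "M", "S", "I"

class Bus:
    def __init__(self):
        self.caches = []

    def broadcast_write(self, writer, addr):
        for cache in self.caches:            # every controller snoops
            if cache is not writer:
                cache.snoop_invalidate(addr)

class SnoopyCache:
    def __init__(self, bus, memory):
        self.bus, self.memory = bus, memory
        self.lines = {}                      # addr -> (state, value)
        bus.caches.append(self)

    def read(self, addr):
        state, value = self.lines.get(addr, (INVALID, None))
        if state == INVALID:                 # miss or invalidated copy
            value = self.memory[addr]        # re-fetch the fresh value
            self.lines[addr] = (SHARED, value)
        return self.lines[addr][1]

    def write(self, addr, value):
        self.bus.broadcast_write(self, addr) # announce before writing
        self.lines[addr] = (MODIFIED, value)
        self.memory[addr] = value            # write through, for brevity

    def snoop_invalidate(self, addr):
        if addr in self.lines:
            _, value = self.lines[addr]
            self.lines[addr] = (INVALID, value)

memory = {"X": 0}
bus = Bus()
a, b = SnoopyCache(bus, memory), SnoopyCache(bus, memory)
a.read("X")               # A holds X = 0 in state S
b.write("X", 42)          # broadcast invalidates A's copy
print(a.read("X"))        # prints 42 -- A misses and re-fetches
```

The broadcast makes the update visible to every controller, which is why snoopy schemes suit bus-based systems where all caches can observe the same traffic.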
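A directory protocol replaces the broadcast with a centralized controller that records, per line, which caches hold a copy, and grants a write only after invalidating every other sharer. The following is a rough sketch with invented names (`DirectoryController`, `DirCache`), not a real directory design:

```python
# Directory-protocol sketch: a centralized controller keeps, per line,
# the set of caches holding a copy, and grants a write only after
# invalidating every other sharer -- so only one writer exists at a time.
class DirectoryController:
    def __init__(self, memory):
        self.memory = memory
        self.sharers = {}                    # addr -> set of caches

    def read(self, cache, addr):
        self.sharers.setdefault(addr, set()).add(cache)
        return self.memory[addr]

    def request_write(self, cache, addr, value):
        for sharer in self.sharers.get(addr, set()) - {cache}:
            sharer.invalidate(addr)          # point-to-point, no broadcast
        self.sharers[addr] = {cache}         # writer is now the sole holder
        self.memory[addr] = value

class DirCache:
    def __init__(self, controller):
        self.controller = controller
        self.lines = {}

    def read(self, addr):
        if addr not in self.lines:           # miss: ask the directory
            self.lines[addr] = self.controller.read(self, addr)
        return self.lines[addr]

    def write(self, addr, value):
        self.controller.request_write(self, addr, value)
        self.lines[addr] = value

    def invalidate(self, addr):
        self.lines.pop(addr, None)

ctrl = DirectoryController({"X": 0})
p0, p1 = DirCache(ctrl), DirCache(ctrl)
p0.read("X")              # directory records p0 as a sharer of X
p1.write("X", 7)          # controller invalidates p0's copy first
print(p0.read("X"))       # prints 7 -- p0 misses and re-fetches
```

Because only the caches listed in the directory are contacted, no broadcast medium is needed, which is why directory schemes scale better to large processor counts than snoopy ones.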
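The distinction between coherence and consistency can be made concrete with the classic two-location litmus test: P1 writes x then reads y, while P2 writes y then reads x. Coherence says nothing here, because x and y are different locations; a consistency model decides which outcomes are legal. The sketch below enumerates all interleavings that respect each processor's program order, i.e. the outcomes allowed under sequential consistency (weaker models, such as those with store buffers, may additionally allow the (0, 0) outcome):

```python
from itertools import permutations

# Litmus test: P1: x = 1; r1 = y     P2: y = 1; r2 = x
# Enumerate every interleaving that preserves each processor's program
# order -- the executions allowed under sequential consistency.
OPS = [("P1", "write", "x"), ("P1", "read", "y"),
       ("P2", "write", "y"), ("P2", "read", "x")]

def outcomes():
    results = set()
    for order in permutations(range(4)):
        # keep each processor's own program order intact
        if order.index(0) > order.index(1) or order.index(2) > order.index(3):
            continue
        mem, regs = {"x": 0, "y": 0}, {}
        for i in order:
            proc, kind, addr = OPS[i]
            if kind == "write":
                mem[addr] = 1
            else:
                regs[proc] = mem[addr]
        results.add((regs["P1"], regs["P2"]))
    return results

print(sorted(outcomes()))   # (0, 0) never appears under sequential consistency
```

Running this shows that (r1, r2) = (0, 0) is impossible under sequential consistency, even though per-location coherence alone could not rule it out.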