Direct Mapped vs Fully Associative Cache: 13 Differences 

Direct Mapped Cache vs a Fully Associative Cache

In a direct mapped cache, bits taken from the main memory address uniquely identify the one particular cache line where a block can be stored.

In a fully associative cache, however, the process is completely different: a block can be stored in any available cache line.

Apart from this basic difference, there are several other significant differences between a direct mapped cache and a fully associative cache.

KEY TAKEAWAYS

  • In a direct mapped cache, the memory address is divided into three fields, namely Tag, Block, and Word, but in a fully associative cache there are only the Tag and Word fields.
  • The direct mapped cache is the simplest of all mapping techniques and is the least expensive.
  • A fully associative cache has high hardware requirements and is therefore not easy to build.

Direct Mapped Cache vs a Fully Associative Cache – The 13 Differences 


1. Basic Difference

In simple terms, a direct mapped cache is one where each block of the main memory can be mapped to only one particular cache line.

On the other hand, in a fully associative cache, each block of the main memory is loaded into any available line of cache.

2. Requirements

In a direct mapped cache, only one comparison is required, because a simple formula determines the cache line a block maps to.

On the other hand, a fully associative cache needs a comparison with every tag to find a match. In other words, the cache control logic has to examine the tag of every block in order to find a match and thereby determine whether or not a block is in the cache.

3. Architecture

In a direct mapped cache, the memory address is divided into three fields, namely, Tag, Block, and Word. The Block and the Word fields together make up the Index.

On the other hand, in a fully associative cache, the main memory address is divided into Tag and Word fields.
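This field split can be sketched in a few lines of code. The geometry below (16-bit addresses, 8 cache lines, 4 words per line) is a hypothetical example chosen only for illustration, not something stated in this article.

```python
# Hypothetical geometry: 4 words per line -> 2 Word bits,
# 8 cache lines -> 3 Block bits; the rest of the address is the Tag.
WORD_BITS = 2
BLOCK_BITS = 3

def direct_mapped_fields(addr):
    """Direct mapped: address = Tag | Block | Word."""
    word = addr & ((1 << WORD_BITS) - 1)
    block = (addr >> WORD_BITS) & ((1 << BLOCK_BITS) - 1)
    tag = addr >> (WORD_BITS + BLOCK_BITS)
    return tag, block, word

def fully_associative_fields(addr):
    """Fully associative: address = Tag | Word (no Block field)."""
    word = addr & ((1 << WORD_BITS) - 1)
    tag = addr >> WORD_BITS
    return tag, word

print(direct_mapped_fields(0b10110_011_01))      # (22, 3, 1)
print(fully_associative_fields(0b10110011_01))   # (179, 1)
```

Note that the fully associative version has no Block field, so nothing in the address constrains where the block may sit; that is exactly why every line's tag must be searched.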

4. Possible Locations

In a direct mapped cache, there is only one possible location in the cache for every block of the main memory, because a fixed formula is used in the mapping.

On the other hand, in a fully associative cache, a main memory block can be mapped to any of the available lines of the cache.

5. Effects on Cache Hit Ratio

In a direct mapped cache, the cache hit ratio drops significantly when the processor repeatedly accesses the same locations in memory from two distinct memory pages, because those locations contend for the same cache line.

On the other hand, a fully associative cache suffers no such effect on its hit ratio, even if the processor repeatedly accesses the same locations in memory from two distinct memory pages.

6. Search Time

In a direct mapped cache, the search time is short. This is because there is only one possible location in the cache to check for any given block of main memory.


On the other hand, in the case of a fully associative cache, the search time is longer, because the cache control logic has to examine every individual tag while looking for a match.

7. Advantages

The direct mapped cache is considered the simplest of all due to its simple mapping technique. It is also very fast, because searching for a word requires matching only a single tag field, and it is relatively inexpensive.

On the other hand, the main advantage of a fully associative cache is its flexibility: since any block can go into any line, it avoids conflict misses and achieves a higher hit ratio.

8. Disadvantages

One of the greatest disadvantages of direct mapped caches is that their performance is quite low compared to that of a fully associative cache, because a block must be replaced whenever another block that maps to the same line is accessed.

On the other hand, one of the most significant disadvantages of a fully associative cache is that it is quite expensive compared to a direct mapped cache, mainly because this type of cache needs to store the full address tag along with the data.

9. Transfer of Data

In a direct mapped cache, when data is transferred from the main memory to the cache memory, a line is selected by the formula K mod N, where K is the number of the main memory block and N is the number of cache lines.

The Word field of the main memory address then identifies the particular word within that line.

On the other hand, in a fully associative cache, the transfer of data to the cache lines from the main memory is done by checking the availability of the line.

If a line is free, the data is transferred immediately without using any rule or formula. If no line is free, a replacement algorithm is used, and the new data replaces the contents of one of the lines.
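The two transfer rules just described can be sketched as follows. The cache size N = 4 and the FIFO replacement policy are hypothetical choices made purely for illustration; the article does not prescribe a particular replacement algorithm.

```python
from collections import deque

N = 4  # hypothetical number of cache lines

def direct_mapped_line(k):
    """Direct mapped: block K of main memory always goes to line K mod N."""
    return k % N

# Fully associative: any free line is used; if none is free, a
# replacement algorithm (FIFO here, purely illustrative) picks a victim.
lines = [None] * N   # block number stored in each line; None = free
order = deque()      # fill order, consulted by the FIFO policy

def fully_associative_place(k):
    if None in lines:
        i = lines.index(None)   # a line is free: no rule or formula needed
    else:
        i = order.popleft()     # no free line: replacement algorithm decides
    lines[i] = k
    order.append(i)
    return i

print(direct_mapped_line(13))   # block 13 -> line 13 mod 4 = 1
```

The direct mapped rule is a pure function of the block number, while the fully associative rule depends on the current state of the cache.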

10. Speed

Normally, a direct mapped cache is much simpler because it needs only one multiplexer and one comparator. This makes it work much faster and also makes it quite a bit cheaper in comparison to the fully associative cache.

On the other hand, fully associative caches are usually slower, since the stored tag of every entry has to be compared against the address. This also makes them more complex and pricier.

11. Conflict Misses

In a direct mapped cache, there is a possibility of high conflict misses. This is because a block must be replaced in its one particular line even if there are other empty lines available in the cache.

On the other hand, a fully associative cache largely avoids conflict misses, because any main memory block can be placed in any available line of the cache.
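The difference can be demonstrated with a tiny simulation. With four direct mapped lines (a hypothetical size), blocks 0 and 4 both map to line 0, so alternating between them misses every time even though lines 1 through 3 stay empty.

```python
N = 4                      # hypothetical number of cache lines
lines = [None] * N         # block currently held by each line
misses = 0

for k in [0, 4, 0, 4, 0, 4]:
    i = k % N              # fixed placement rule of a direct mapped cache
    if lines[i] != k:      # the wanted block is not in its only possible slot
        misses += 1        # conflict miss: evict whatever is there
        lines[i] = k

print(misses)              # 6 -- every single access is a miss
```

A fully associative cache would simply keep both blocks in different lines at the same time, so only the first two accesses would miss.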

12. Ease in Building

A direct mapped cache is much easier to build since it has pretty low hardware requirements. This also results in lower latency.

On the other hand, a fully associative cache is not so easy to build due to its higher hardware requirements which also add to the latency.


13. Cache Utilization

Direct mapped caches offer much lower cache utilization.

On the other hand, fully associative caches offer higher cache utilization in comparison to a direct mapped cache.

Which One is Better to Use – Direct Mapped Cache or Fully Associative Cache?


The answer to this question depends on the criteria of ‘better’ as well as on your needs.

Typically, it is very hard, if not impossible, to say that one piece of hardware is better to use than another until you know exactly what you are measuring them against and what counts as better or worse.

There are lots of factors to consider for it, as applicable, such as:

  • Area
  • Complexity
  • Power
  • Hit rate, and more.

This is applicable to everything even when you consider a direct mapped cache and a fully associative cache.

In the case of caches, where hit rate plays a significant role in determining which of a direct mapped cache and a fully associative cache is better, the answer will also depend on the workload on the system.

In other words, the hit rate that either type of cache achieves depends heavily on the workload it runs.

However, there is no need to be confused. The list of differences in features and functionality, along with the advantages and disadvantages of each, will already help you a lot.

Further explanations and a few other facts about the direct mapped cache and the fully associative cache will make things clearer and help you decide which of the two is better to use.

As for the direct mapped cache, there is a high chance of cache thrashing.

This happens because in a direct mapped cache every block of main memory is mapped to a specific location in the cache memory.

This means that two different blocks will continually evict each other if they are mapped to the same location in the cache.

In that respect, a direct mapped cache works in the following way while mapping a memory address to the cache:

  • The Block field of the main memory address is used to select the particular line of the cache
  • The Tag bits of the address are then compared with the tag stored in that line.

If the tags match, the word is found in the cache and a cache hit occurs.

On the other hand, if they do not match, a cache miss occurs: the necessary word is brought from the main memory into the cache, and the new tag replaces the old tag in that line.
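These lookup steps can be sketched as follows, reusing the same hypothetical geometry as before (2 Word bits, 3 Block bits, so 8 cache lines); only the tags are modeled here, not the stored data.

```python
WORD_BITS, BLOCK_BITS = 2, 3
tags = [None] * (1 << BLOCK_BITS)   # one stored tag per cache line

def access(addr):
    block = (addr >> WORD_BITS) & ((1 << BLOCK_BITS) - 1)  # Block field selects the line
    tag = addr >> (WORD_BITS + BLOCK_BITS)
    if tags[block] == tag:          # compare the Tag bits with the stored tag
        return "hit"
    tags[block] = tag               # miss: fetch the block, new tag replaces old
    return "miss"

print(access(0x1A4))   # first touch of this block: miss
print(access(0x1A4))   # same block again: hit
```

Note that the lookup never searches more than one line; that single comparison is the source of the direct mapped cache's speed and of its conflict misses.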

And, as said earlier, in a fully associative cache the address has only two fields, namely Word and Tag, and a block can be mapped to any available cache line. This, in short, answers your question.

An example will make things even clearer. In a direct mapped cache, block A will always sit on one particular line, block B on another, and so on.

This makes finding A, B, C and so on very easy.

However, in a fully associative cache, every block sits somewhere within one single set, effectively at random.


Therefore, it is difficult to say in which line or where A is sitting and where B is located.

In addition to that, fully associative caches need to exploit temporal locality, and for that they need to follow an eviction policy.

These caches typically use an approximation of Least Recently Used (LRU), which adds several additional transistors and comparators to the scheme. Of course, all of this consumes additional time.

Therefore, fully associative caches are typically practical only when the cache is quite small, such as the Translation Lookaside Buffer (TLB) caches found in some Intel processors.

Such caches are really small, holding only up to a dozen or so entries at most.

Therefore, a direct mapped cache will offer you benefits which include and are not limited to:

  • Less hardware requirements
  • Lower latency.

However, a direct mapped cache would also have a worse hit ratio and much lower utilization of cache.

On the other hand, a fully associative cache may be complex needing more hardware and resulting in higher latency but it will offer a much higher cache hit ratio and cache utilization.

In terms of speed, a direct mapped cache is preferable, but it is worth keeping in mind that, depending on your specific needs, a slower and more complex but more accurate cache may suit you better than a faster and simpler but less accurate one, or vice versa.

Therefore, if you are interested in the lowest possible latency, a direct mapped cache should be your choice, and you will be served much better, since every piece of data can be in only one place and can be accessed quickly.

On the other hand, if you are looking for higher hit rates, a fully associative cache would be a better choice due to its lower conflict misses, which will offer better overall performance.

However, all these are not certainties and therefore words like ‘would’ and ‘should’ are used.

For example, if the size of a direct mapped cache and the data stored in it are right, it can also potentially have the same hit rate as a fully associative cache.

So, it is eventually up to you to decide which one you deem better and will choose.

And yes, the architecture of the Central Processing Unit of the system will also matter a lot in that case.

Conclusion

So, through this article, you now know the differences between a direct mapped cache and a fully associative cache.

Now you will surely have no difficulty choosing the one that is most suitable for your computing needs.

About Dominic Chooper

AvatarDominic Chooper, an alumnus of Texas Tech University (TTU), possesses a profound expertise in the realm of computer hardware. Since his early childhood, Dominic has been singularly passionate about delving deep into the intricate details and inner workings of various computer systems. His journey in this field is marked by over 12 years of dedicated experience, which includes specialized skills in writing comprehensive reviews, conducting thorough testing of computer components, and engaging in extensive research related to computer technology. Despite his professional engagement with technology, Dominic maintains a distinctive disinterest in social media platforms, preferring to focus his energies on his primary passion of understanding and exploring the complexities of computer hardware.
