Write Back Cache

What is Write Back Cache?

Write back cache is one of several caching techniques, and one of the most common: it has been used in most processor architectures since Intel introduced the 80486.

The defining characteristic of write back cache is that data is copied to other cache levels and to the backing memory or store only when necessary, rather than on every write.

Understanding Write Back Cache


The write back cache is designed to optimize write operations between a cache and its data source, which in most cases is the Random Access Memory or RAM.

Write back cache is known by other names and terminologies as well, such as write-behind cache and lazy write.

In every sense, however, write back cache refers to the part of memory where data is held temporarily until it can be saved permanently.

Typically, write back refers to a storage method in which new data is written into the cache every time there is a change.

That data is written to the corresponding location in main memory only under specific conditions or at specific intervals.

In the write back mode, when a data location in the cache is updated, that data is referred to as fresh.

On the other hand, the corresponding data in the main memory is called stale.

This stale data no longer matches the data written in the cache.

When another application program requests the stale data from the main memory, the cache controller updates the data in the main memory before that application can access it.

The write back method maximizes system speed, since writing data into the cache alone takes far less time than writing the same data into both the cache and the main memory.

There is a trade-off here, however. Though the speed may increase, there is a real possibility of data loss in an adverse event such as a system crash.

Still, write back cache is the favored method for storing data in applications where occasional data loss events can be tolerated.

However, for more critical applications, such as medical device control or banking, an alternative method is favored over write back cache.

This is the write through method, which virtually eliminates the risk of data loss.

This is because every data update is written in both the cache as well as in the main memory.

Typically, in this particular writing method, the data that is stored in the main memory always remains fresh.
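The contrast between the two policies can be sketched in a few lines. This is a toy model, not a real cache implementation: the class names are invented for illustration, and plain dicts stand in for the cache and the main memory.

```python
# Toy model contrasting write-through and write-back (illustrative only).

class WriteThroughCache:
    """Every write goes to the cache AND to main memory immediately."""
    def __init__(self, memory):
        self.memory = memory       # backing store: a dict of address -> value
        self.cache = {}

    def write(self, addr, value):
        self.cache[addr] = value
        self.memory[addr] = value  # memory always remains fresh

class WriteBackCache:
    """Writes go to the cache only; memory is updated later, on flush."""
    def __init__(self, memory):
        self.memory = memory
        self.cache = {}
        self.dirty = set()         # addresses whose memory copy is stale

    def write(self, addr, value):
        self.cache[addr] = value   # fast path: cache only
        self.dirty.add(addr)

    def flush(self):
        for addr in self.dirty:    # deferred write-back of stale entries
            self.memory[addr] = self.cache[addr]
        self.dirty.clear()

memory = {}
wb = WriteBackCache(memory)
wb.write(0x10, 42)
stale = 0x10 not in memory         # memory has not seen the write yet
wb.flush()                         # now the backing store catches up
```

Note how the write-back variant acknowledges a write without touching `memory` at all; that deferral is the source of both the speedup and the data-loss risk described above.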

Approaches

In a write back cache, the data is initially written to the cache only; it is not written to the backing store at the same time.

In fact, the write is postponed until the modified content is about to be replaced by a different cache block.


Typically, a write back cache is much more complicated to implement, because the approach involves tracking which cache locations have been modified and marking them as dirty.

The data in these dirty locations cannot be written to the backing store until it is cast out from the cache.

This approach is typically referred to as a ‘lazy write.’

It is for this reason that in a write back cache, a read miss that requires replacing one block with another will often need two memory accesses to service: one to write the dirty block back to the backing store, and one to read the new block in from the backing store.
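The two-access cost can be made concrete with a hypothetical single-line cache. Everything here, the class, the access counter, the write-allocate behavior, is invented for illustration:

```python
# Hypothetical one-line cache: a read miss that evicts a dirty line
# costs two memory accesses (write back the victim, fetch the new block).

class OneLineCache:
    def __init__(self, memory):
        self.memory = memory       # backing store: dict of address -> value
        self.tag = None            # address currently held, if any
        self.value = None
        self.dirty = False
        self.mem_accesses = 0      # count of backing-store accesses

    def write(self, addr, value):
        if self.tag != addr:       # write miss: fetch the line first
            self._fill(addr)
        self.value = value
        self.dirty = True          # lazy write: memory update is deferred

    def read(self, addr):
        if self.tag != addr:       # read miss
            self._fill(addr)
        return self.value

    def _fill(self, addr):
        if self.dirty:             # access 1: write the dirty victim back
            self.memory[self.tag] = self.value
            self.mem_accesses += 1
        self.value = self.memory.get(addr, 0)  # access 2: fetch new block
        self.mem_accesses += 1
        self.tag = addr
        self.dirty = False

mem = {1: 10, 2: 20}
c = OneLineCache(mem)
c.write(1, 11)         # miss with a clean (empty) line: one access
n1 = c.mem_accesses
c.read(2)              # miss with a dirty victim: write-back + fetch
n2 = c.mem_accesses
```

After the second miss the counter has advanced by two, and the backing store holds the written-back value for address 1.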

There are a few other specific conditions that may trigger writing back of data.

For example, a client may want to make alterations to the data stored in the cache.

In that case, the cache must be notified explicitly about the need to write the data back.

However, no data is returned to the requester or the client during write operations.

Whether or not the data is to be loaded into the cache is decided exclusively on write misses.

This situation is handled by two specific approaches: write allocate, in which the block is loaded into the cache on a write miss, and no-write allocate (also called write around), in which the data is written directly to the backing store and the block is not loaded into the cache.

Just like the write through cache, the write back cache can use either of these write-miss policies.

However, the pairing is usually specific: a write back cache typically uses write allocate, in the expectation that subsequent reads and writes to the same location will be served from the cache.
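The two write-miss policies can be sketched as follows, using plain dicts as stand-ins for the cache and the backing store; the function names are illustrative, not a real API.

```python
# Sketch of the two write-miss policies (illustrative only).

def write_allocate(cache, dirty, addr, value):
    """On a write miss, bring the line into the cache and write it there."""
    cache[addr] = value        # line is now cached for later reads/writes
    dirty.add(addr)            # memory copy is stale until written back

def no_write_allocate(cache, memory, addr, value):
    """On a write miss, write straight to memory and skip the cache."""
    if addr in cache:
        cache[addr] = value    # hit: update the cached copy
    else:
        memory[addr] = value   # miss: bypass the cache entirely

cache, dirty, memory = {}, set(), {}
write_allocate(cache, dirty, 0xA, 1)      # 0xA ends up cached and dirty
no_write_allocate(cache, memory, 0xB, 2)  # 0xB goes only to memory
```

A write back cache usually picks the first variant, so that a later read of `0xA` hits in the cache instead of going to memory.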

Entities other than the cache might modify the data in the backing store. In such a situation, the copy stored in the cache may become stale or out of date.

On the other hand, when a client updates the data stored in the cache, the copies of that data in other caches become stale.

The memory write back process is not performed frequently, but only when the cached data needs to be purged or replaced with new content.

The working process of write back cache is the opposite of write through cache, in which data is written to the memory and the cache at the same time.

However, proper communication protocols are followed between the cache managers for every process.

This keeps the data consistent. These protocols are referred to as coherency protocols.

Use Cases

Both cache aside and write back caches are used by developers to absorb spikes more gracefully, especially during peak workloads.

However, the chance of losing data in the event of a cache failure still remains.

Another common use case is relational database storage engines, where write back caching is enabled internally by default.

In such cases, queries are first written into memory and later flushed to the disk.

Writing Strategies

In order to understand write back cache and its strategies, you will need to have a fair idea about the cache first.

A cache is a technique in which copies of data are stored temporarily in a fast storage memory for rapid access.

Typically, the most recently used words are stored in this small cache memory, which speeds up data access.

Ideally, a cache acts as the buffer between the CPU or the Central Processing Unit of a computer and the Random Access Memory or RAM.

This specific arrangement enables the cache to make the necessary data readily available to the processor for its functioning.

When the processor needs to write a word, it first checks whether the address it wants to write to is present in the cache. If it is, this is called a write hit.

In that case the value in the cache can be updated, avoiding a costly access to the main memory.

However, this results in inconsistent data when two or more devices share the same memory, as in a multiprocessor system, because the main memory and the cache then hold different data.

This is where the write back and write through policies become useful.

In write back, the data is updated first in the cache only; it is updated in the memory at a later point in time, and only when the cache line is about to be replaced.

Belady’s optimal algorithm describes the theoretically best cache line replacement, but since it requires knowledge of future accesses, practical methods are utilized instead, such as LRU (Least Recently Used), FIFO (First In First Out), and random replacement.

The method utilized will however depend largely on the type of application.

In write back cache, dirty bits are an important aspect.

In the cache, every block requires a bit to indicate whether or not the data in that block has been modified.

If the data has been modified, the bit is set and the block is called dirty; if not, the block is called clean.

A clean block does not need to be written back into the memory. This design is what reduces the number of write operations to the memory.
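The saving can be sketched with a simple table of `(value, dirty)` pairs standing in for cached lines; the structure is an assumption made for this example. When every line is evicted, only the dirty one generates a memory write.

```python
# Illustrative eviction loop: only lines whose dirty bit is set are
# written back; clean lines are simply dropped, since memory already
# matches them. This is how write-back reduces memory write traffic.

def evict_all(lines, memory):
    writes = 0
    for addr, (value, dirty) in lines.items():
        if dirty:                  # dirty line: must be written back
            memory[addr] = value
            writes += 1
        # clean line: no write needed, memory is already up to date
    lines.clear()
    return writes

memory = {0x1: 5, 0x2: 7}
lines = {0x1: (5, False),  # clean: read but never modified
         0x2: (9, True)}   # dirty: modified in the cache
writes = evict_all(lines, memory)  # only the dirty line costs a write
```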

However, the modified data may be lost in several instances, such as a power failure or a system crash occurring before the data has been written back.

When data is lost this way, it is almost impossible to restore it.

However, when a write occurs at a write miss location, where the data is not present in the cache, two specific options are used, namely write allocate and no-write allocate.

The good thing about the write back policy is that it works well for diverse workloads.

The primary reason behind this is that both read and write I/O in this case have the same response time levels.

As for data loss, in practice you can add resiliency, for example by duplicating writes, in order to lower that likelihood.

This approach is also sometimes known as write behind.

Advantages of Write Back

One of the most significant advantages of write back cache is that it offers low latency and, at the same time, considerably high throughput for write-intensive applications.

Also, in a write back operation, according to the standard policy, the data is written into the cache and only then is the I/O completion acknowledged.

Write back cache typically is very useful when it is combined with read through.

This is because it helps in handling mixed workloads wherein the most recently accessed or updated data is available always in the cache.

The write back cache's resilience to database failures also helps a system endure short periods of database downtime.

Moreover, a write back cache can lower the overall number of writes to the database if it supports coalescing or batching.

This feature further reduces the overall load, and eventually the cost, particularly when the database provider charges on the basis of the number of requests made.
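Coalescing can be sketched as follows; the class and the dict-based "database" are assumptions made for this example, not a real library API.

```python
# Sketch of write coalescing in a write-behind cache: repeated writes to
# the same key collapse into a single backing-store write at flush time.

class CoalescingWriteBehind:
    def __init__(self, database):
        self.database = database   # stand-in for a real store
        self.pending = {}          # key -> latest value; newer writes win
        self.db_writes = 0         # how many requests hit the database

    def write(self, key, value):
        self.pending[key] = value  # coalesce: only the newest value survives

    def flush(self):
        for key, value in self.pending.items():
            self.database[key] = value
            self.db_writes += 1    # one request per key, not per write
        self.pending.clear()

db = {}
c = CoalescingWriteBehind(db)
for v in range(5):
    c.write("counter", v)          # five application-level writes
c.flush()                          # but only one database request
```

If the provider bills per request, the five writes above cost one request instead of five, which is exactly the load and cost reduction described in the text.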

However, you should also know the most significant disadvantage of write back cache in order to make a better-informed decision.

As said earlier, write back cache carries a high risk to data availability.

The cache may fail, resulting in data loss: data may be lost even before it can be moved to the backing store.

Therefore, it is very important to choose the right type of strategy so that it matches your access patterns and fits in your goals.

This will allow you to enjoy the full benefits and the least latency, and it will also help keep useless junk out of the cache, especially if the cache is small.

Arguably, any strategy may be fine if the cache is large enough, but real-world, high-throughput systems typically do not come with a cache that big.

Therefore, choosing the right strategy matters. It is all the more necessary because the server costs can be a real concern.

Therefore, make sure that you evaluate your goals and the pros and cons of each caching strategy, and that you understand your data access or read/write patterns, before choosing one strategy or a combination of strategies.

Conclusion

Through this post you have surely come to know much more than the basics about the write back cache and the policies behind it.

Of course, there are other policies as well, which is why you should research them all before implementing any, to get the best results.