Translation Lookaside Buffer (TLB)

You should understand the concept of paging before learning about the Translation Lookaside Buffer. In plain paging, the CPU has to access main memory twice (or more) to reach a page:

  • First, to fetch the page-table entry from main memory.
  • Second, to fetch the required data from the corresponding frame in main memory.

This way, main memory is accessed twice in plain paging, and more than twice with multilevel paging, which makes every memory reference slow.
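The cost described above can be illustrated with a toy one-level paging model. All names, sizes, and mappings below are assumptions made purely for this sketch; the point is that a single data reference triggers two trips to main memory.

```python
# Toy model of one-level paging (illustrative values): every
# virtual-address reference costs two trips to main memory,
# one for the page-table entry and one for the data itself.

PAGE_SIZE = 256                      # bytes per page (assumed)
page_table = {0: 5, 1: 9, 2: 3}      # page number -> frame number (assumed)
memory_accesses = 0                  # counts trips to main memory

def access_main_memory():
    global memory_accesses
    memory_accesses += 1

def read(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    access_main_memory()             # 1st access: read the page-table entry
    frame = page_table[page]
    access_main_memory()             # 2nd access: read the data in the frame
    return frame * PAGE_SIZE + offset

physical = read(300)                 # page 1, offset 44 -> frame 9
print(physical, memory_accesses)     # -> 2348 2
```

With multilevel paging, each extra page-table level would add one more `access_main_memory()` call per reference.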

This problem is overcome by using a TLB. The TLB is a hardware component and part of the MMU. It works like a cache memory: it resides within the CPU package, and in many designs it sits on the processor chip itself.

Working of TLB

When the CPU generates a virtual address, it first looks up that page in the TLB. On a TLB hit, the frame number comes directly from the TLB, so the data is reached with a single main-memory access instead of two or more.

On a TLB miss, the paging process must be repeated to find the required page in main memory. Once found, the translation is first loaded into the TLB. So, if the CPU demands the same page later, it can be served from the TLB without repeating the page-table walk.
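The hit/miss behavior just described can be sketched as follows. A real TLB is associative hardware inside the MMU; the dictionaries here are assumptions made purely for illustration.

```python
# Minimal sketch of a TLB in front of the page table. The dict-based
# structures and mappings below are illustrative assumptions.

page_table = {0: 5, 1: 9, 2: 3}   # full mapping, lives in main memory
tlb = {}                          # small, fast cache of recent translations

def translate(page):
    if page in tlb:               # TLB hit: no page-table access needed
        return tlb[page], "hit"
    frame = page_table[page]      # TLB miss: fall back to the page table
    tlb[page] = frame             # load the translation into the TLB
    return frame, "miss"

print(translate(1))   # first touch: (9, 'miss') -- page table consulted
print(translate(1))   # repeat:      (9, 'hit')  -- served from the TLB
```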

A TLB entry contains a Tag (the page number, optionally combined with a process ID) and a frame number. The generated page number is compared against the tags. Since the TLB typically holds pages of only the currently running process, the process ID is not compulsory. Let’s look at the TLB diagram given below.

[Figure: Translation Lookaside Buffer (TLB)]

The TLB contains only the pages accessible to the current process. If process A is currently running, the TLB holds only the logical-to-physical translations for process A’s pages. If process B is running, there will be no entries for process A in the TLB, and vice versa.

When the CPU switches from one process to another, the TLB entries of the previously running process are cleared. This is known as flushing the TLB.
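The flush can be sketched as below. The per-process page tables are assumed values; the point is that without the flush, process B would be served process A's stale translations.

```python
# Sketch of TLB flushing on a context switch. The per-process page
# tables and frame numbers below are illustrative assumptions.

page_tables = {
    "A": {0: 5, 1: 9},    # process A: page -> frame
    "B": {0: 2, 1: 7},    # process B maps the same page numbers elsewhere
}
tlb = {}
current = "A"

def context_switch(process):
    global current
    current = process
    tlb.clear()           # flush the TLB: drop all cached translations

def translate(page):
    if page not in tlb:
        tlb[page] = page_tables[current][page]
    return tlb[page]

print(translate(0))       # -> 5 (process A's frame, now cached in the TLB)
context_switch("B")
print(translate(0))       # -> 2 (B's frame; the flush removed A's entry)
```

Some real CPUs avoid the full flush by tagging entries with an address-space identifier, which is why the process ID can appear in the tag.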

Translation Lookaside Buffer vs. Cache Memory

Both the TLB and cache memory are hardware within the CPU chip, and the primary purpose of both is faster data access. Still, there are significant differences between them:

  • The TLB is required only when the CPU uses virtual memory; cache memory is an essential component of every modern system.
  • The TLB speeds up address translation for virtual memory so that the page table does not need to be accessed for every address; the CPU cache speeds up main-memory access by holding the most recently and most frequently used data, so a cache hit avoids going to RAM.
  • The TLB performs its work at the time of address translation by the MMU; the CPU cache performs its work at the time of memory access by the CPU.

 

Cache and TLB working Model

In fact, virtually all modern CPUs have both cache levels and TLBs. The diagram below shows the working model of cache and TLB together.

[Figure: Cache and TLB working together]

Multiple TLBs

Like caches, TLBs also come in multiple levels. A modern CPU has several TLBs: it may have three (ITLB1, DTLB1, TLB2) or four, which differ from one another in speed and capacity.

Question on TLB

Effective Memory Access Time Calculation Formulas

TLB_hit_time = TLB_search_time + memory_access_time

TLB_miss_time = TLB_search_time + 2 * memory_access_time

EMAT = hit_ratio * TLB_hit_time + (1 - hit_ratio) * TLB_miss_time

OR

EMAT = hit_ratio * (TLB_search_time + memory_access_time) + (1 - hit_ratio) * (TLB_search_time + 2 * memory_access_time)

If the hit ratio is denoted by “P,” the TLB search time by “t,” and the memory access time by “m,” then the EMAT will be:

EMAT = P(t + m) + (1 - P)(t + 2m)
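The formula translates directly into a small helper. The parameter names are illustrative; times are in nanoseconds.

```python
# EMAT = P*(t + m) + (1 - P)*(t + 2m), as a small helper function.
# Parameter names are illustrative; times are in nanoseconds.

def emat(hit_ratio, tlb_search_time, memory_access_time):
    hit_time = tlb_search_time + memory_access_time        # one memory access
    miss_time = tlb_search_time + 2 * memory_access_time   # page table + data
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(emat(0.90, 10, 50))   # -> 65.0
```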

Question: A paging scheme uses a TLB. The TLB access time is 10 ns and a main-memory access takes 50 ns. What is the effective memory access time (in ns) if the TLB hit ratio is 90% and there are no page faults?

Solution

EMAT = hit_ratio * (TLB_search_time + memory_access_time) + (1 - hit_ratio) * (TLB_search_time + 2 * memory_access_time)

= 0.9 × (10 + 50) + 0.1 × (10 + 2 × 50)

= 54 + 11 = 65 ns