Semaphores in OS
In Operating Systems (OS), process synchronization plays a vital role in ensuring that multiple processes or threads can access shared resources without conflicts. One of the most powerful tools used for synchronization is the Semaphore in OS.
A semaphore is an integer variable “S” that is used to solve critical section problems and achieve process synchronization in a cooperative multiprocessing environment. Semaphores help prevent race conditions and ensure mutual exclusion within critical sections of code.
Purpose of Semaphores
Process Synchronization ensures that multiple processes access shared resources in a controlled and conflict-free manner. Synchronization can be achieved through hardware and software solutions.
- Hardware-based solutions: Test-and-Set instruction, Swap instruction, and Interrupt disabling.
- Software-based solutions: Peterson’s Algorithm, Bakery Algorithm, and Turn variable method.
These traditional methods suffer from complexity, limited scalability, and busy waiting, leading to the introduction of Semaphores as a kernel-level synchronization mechanism to overcome these issues.
Basic Components of Semaphores
A semaphore manages access to shared resources through three main components:
- Entry Section (Wait Operation)
- Critical Section
- Exit Section (Signal Operation)
These components work together to ensure the safe and synchronized execution of processes. The diagram below illustrates a simple working mechanism of a semaphore in an OS.

A semaphore manages process execution by using Wait() to request access and permit safe entry to the Critical Section, while Signal() releases the resource for other waiting processes. This guarantees orderly and synchronized access to shared resources.
Let’s explain these three components in detail.
i. Entry Section (Wait Operation)
When a process wants to enter the critical section, it must first execute the entry section code.
This code ensures that only permitted processes can access the shared resource at a time.
- The entry operation is commonly known as wait(), P(), or down(), depending on notation or system convention.
- The main purpose of this operation is to decrease the semaphore value by 1.
- If the semaphore value becomes less than zero, it indicates that no resource is available, and the process must wait (block) until another process releases the resource.
The exact form of the entry code can vary based on the type of semaphore used:
- Binary Semaphore / Mutex: Allows only one process in the critical section.
- Counting Semaphore: Allows multiple processes based on the available resource count.
ii. Critical Section
The critical section is the part of a program where a process accesses shared resources, such as variables, files, or devices. All cooperating processes contain a common segment of code that requires exclusive access to prevent data inconsistency.
- In a Binary Semaphore, only one process can enter the critical section at a time — ensuring mutual exclusion.
- In a Counting Semaphore, up to N processes can enter at a time, depending on the number of available instances of the shared resource. Other processes must wait until a resource becomes available.
Before entering the critical section, a process must successfully execute the entry section (to obtain permission). After completing its operations inside, it must execute the exit section (to release the resource).
iii. Exit Section (Signal Operation)
When a process leaves the critical section, it executes the exit section code. This section ensures that other waiting processes can enter the critical section safely and in order. The exit operation may be referred to by different names depending on the system or notation used, such as signal(), V(), up(), post(), or release().
- The main purpose of the exit code is to increment the semaphore value by 1, indicating that a resource or access slot has been released.
- This allows another waiting process (if any) to enter the critical section.
- The implementation of the exit operation can vary depending on the type of semaphore (counting, binary, or mutex).
Note: The exit operation must correspond to the entry operation used (for example, wait() pairs with signal(), P() with V(), and down() with up()).
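To make the entry and exit operations concrete, here is a minimal sketch using POSIX semaphores from <semaphore.h>, where sem_wait() plays the role of wait()/P() and sem_post() plays the role of signal()/V(). The worker function and shared_counter variable are illustrative names, not part of any standard.
Code (illustrative):
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S;                    // the semaphore
int shared_counter = 0;     // shared resource

void *worker(void *arg) {
    sem_wait(&S);           // Entry section: wait()/P(), may block
    shared_counter++;       // Critical section: exclusive access
    sem_post(&S);           // Exit section: signal()/V(), release the resource
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&S, 0, 1);     // initialized to 1, so it behaves as a binary semaphore
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   // always prints 2
    sem_destroy(&S);
    return 0;
}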
Types of Semaphore
There are two main types of semaphores: binary semaphores (or Mutex) and counting semaphores, which are used for process synchronization and resource management.
1. Binary Semaphore
A Binary Semaphore can take only two values: 0 and 1, representing locked and unlocked states, respectively.
It ensures that only one process can enter the critical section at a time, making it ideal for mutual exclusion.
Flow Chart of Binary Semaphore
The flow chart of the binary semaphore is given below.

Explanation: The steps of the flowchart are described below.
Start: The process begins and requests permission to enter the Critical Section.
Process Request to Enter in Critical Section: A process attempts to access the shared resource controlled by a Binary Semaphore.
Initialize: Set S = 1, where
- S = 1 → Resource is free.
- S = 0 → Resource is occupied.
Check Condition: S == 1?
- Yes: The resource is available.
  - Set S = 0 to mark the resource as occupied.
  - Allow the process to enter the Critical Section.
- No: The resource is currently in use.
  - Block the Process and place it in the Blocked Queue.
Blocked Process Exists? (checked when a process leaves the critical section)
- Yes: When a process exits the critical section, it performs WakeUp() to unblock one waiting process.
  - The unblocked process moves from the Blocked Queue → Ready Queue.
  - The scheduler moves it to the Running State, where it immediately enters the Critical Section (Blocked → Ready → Running → Critical Section).
- No: Continue normal execution.
After Critical Section: When a process finishes its execution in the critical section and no processes are waiting in the Blocked Queue, it releases the resource by setting S = 1.
END: The cycle completes, ensuring that only one process at a time enters the Critical Section, maintaining mutual exclusion and synchronization.
Working of Binary Semaphore
The functioning of a binary semaphore in an operating system is explained in the following diagram.

Explanation: When processes P1 to Pn (or threads) try to enter the Critical Section, the Binary Semaphore (S = 1) allows only one process to enter.
- The first process performs Wait(S), sets S = 0, and locks the resource.
- All other processes execute Wait(S) but are blocked until S = 1 again.
- When the active process finishes, it performs Signal(S) to release the resource, waking one waiting process to enter next.
The entrance and exit sections of the binary semaphore are given below.
i. Entrance (Wait) Operation
Wait(S)
{
    if (S.value == 1) {
        S.value = 0;               // Lock the semaphore, enter critical section
    }
    else {
        Add to Suspended_List;     // Process waits
    }
}
Explanation:
- Initially, S = 1, meaning one process can enter the critical section.
- When a process enters, it sets S = 0, locking the semaphore.
- If another process tries to enter while S = 0, it is blocked and placed in the suspended (waiting) list.
- The value of S in a binary semaphore is never negative.
ii. Exit (Signal) Operation
Signal(S) {
    if (Block_Queue_Exists) {
        WakeUp();        // Wake one process from the suspended list
    }
    else {
        S.value = 1;     // Unlock the semaphore
    }
}
Explanation:
- If no process is waiting, S is set back to 1, making the resource available again. If there are blocked processes, one process (for example, P2) from the suspended list is woken up.
- The woken-up process directly enters the critical section.
Important Notes
- In a binary semaphore, S can only be 0 or 1 — it cannot go negative.
- The semaphore is initialized as S = 1 to allow the first process to enter.
- If S = 0, all incoming processes are blocked until it becomes 1 again.
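The state transitions listed above can be illustrated with a small single-threaded simulation of the binary-semaphore pseudocode (Wait/Signal with a FIFO blocked queue). This is only a sketch of the logic; the process IDs, queue size, and printed messages are made up for illustration.
Code (illustrative):
#include <stdio.h>

#define MAX 10
int S = 1;                               // 1 = resource free, 0 = occupied
int blocked[MAX], head = 0, tail = 0;    // simple FIFO blocked queue

void Wait(int pid) {
    if (S == 1) {
        S = 0;                           // lock: the process enters the critical section
        printf("P%d enters the critical section\n", pid);
    } else {
        blocked[tail++] = pid;           // block: add the process to the suspended list
        printf("P%d is blocked\n", pid);
    }
}

void Signal(void) {
    if (head < tail) {
        printf("P%d is woken up and enters the critical section\n", blocked[head++]);
    } else {
        S = 1;                           // nobody waiting: unlock the resource
        printf("Resource released, S = 1\n");
    }
}

int main(void) {
    Wait(1);     // P1 enters, S becomes 0
    Wait(2);     // P2 is blocked
    Signal();    // P1 leaves, P2 is woken up and enters
    Signal();    // P2 leaves, S becomes 1 again
    return 0;
}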
2. Counting Semaphore
A Counting Semaphore can take any integer value (positive, zero, or negative). It is used when multiple instances of a shared resource are available. The semaphore value represents how many processes can access the resource at a given time.
Flowchart of Counting Semaphore in OS
A flowchart of the counting semaphore in OS is given below.

Explanation: The counting semaphore flowchart is explained step by step below.
- Start: The process begins, and access to a shared resource is requested.
- Process Requests to Enter the Critical Section: A process attempts to enter the critical section controlled by a counting semaphore.
- Initialize: S = N: The semaphore is initialized to the total number of available resources (N).
- Decrement Semaphore (S = S – 1): When a process requests a resource, the semaphore value decreases by one.
- Check Condition: S < 0?
  - No: A resource is available → the process enters the Critical Section.
  - Yes: No resource is available → block the process and add it to the Blocked Queue.
- Critical Section: The process uses one shared resource safely within the critical section.
- Exit Critical Section (S = S + 1): Upon completion, the process releases the resource and increments the semaphore value.
- Check Condition: S ≤ 0?
  - Yes: There are waiting processes → perform WakeUp() to unblock one process.
  - No: Continue normal execution.
- WakeUp() a Process from the Blocked Queue: The unblocked process moves from the Blocked Queue → Ready Queue.
- Ready Queue → Running State: The CPU scheduler assigns the process to run again, allowing it to reattempt entry into the critical section.
Important: The process that was woken up must re-execute Wait(S), so the sequence is Blocked → Ready → Running → Wait(S) → Critical Section.
- END: The semaphore cycle completes, ensuring safe and synchronized access to shared resources.
Working of Counting Semaphore
The working of the counting semaphore in the OS is given below

Explanation:
When processes P1 to Pn (or threads) try to enter the Critical Section, the Counting Semaphore (S = N) allows up to N processes to enter simultaneously.
- Each process performs Wait(S) and decreases the semaphore value (S = S – 1) before entering.
- When S = 0, it means all resources are in use — additional processes are blocked and must wait.
- As each active process finishes, it performs Signal(S) to release a resource and wake up one waiting process to enter next.
The entrance and exit sections of the counting semaphore are given below.
i. Entrance Section in Counting Semaphore
The Entrance Section in a Counting Semaphore decreases the semaphore value by one each time a process requests access. If the value becomes negative, the process is blocked and placed in the waiting list until resources become available.
Code:
Wait(S) {
    S.value = S.value - 1;
    if (S.value < 0) {
        Sleep();    // Block the process and place it in the suspended list
    }
    else {
        // Allow the process to enter the critical section
    }
}
Explanation:
- The wait() operation first decrements the semaphore value by 1.
- If S.value < 0, the process is blocked and moved to the suspended (blocked) list.
- If S.value ≥ 0, the process can enter the critical section.
- The semaphore value S indicates the number of available instances of a resource.
Example:
- If S = 0, no more processes can enter the critical section.
- If S = 4, up to 4 processes can access the critical section simultaneously.
- If S = -2, it means 2 processes are waiting in the blocked list.
Thus, the value of S gives an idea of both available resources (positive values) and waiting processes (negative values).
ii. Exit Section in Counting Semaphore
The Exit Section in a Counting Semaphore increases the semaphore value by one when a process leaves the critical section. If any processes are waiting, one of them is unblocked and moved to the ready queue to attempt entering the critical section.
Code:
Signal(S) {
    S.value = S.value + 1;
    if (S.value <= 0) {
        WakeUp();    // Select one process from the suspended list and make it ready
    }
}
- When a process leaves the critical section, the signal() operation increments the semaphore value by 1.
- If S.value ≤ 0, it means some processes are waiting, so one blocked process is woken up (typically using FIFO order).
- The woken-up process is placed in the ready queue and will reattempt to enter the critical section when scheduled again.
Key Points
- Counting semaphores manage multiple resources simultaneously.
- Positive S value: number of available resources.
- Negative S value: number of waiting (blocked) processes.
- Ensures efficient process synchronization and resource utilization without conflicts.
Real Life Example 01: Counting Semaphore
A Counting Semaphore is used when multiple instances of the same resource are available. (A code sketch of this scenario follows the example below.)
- Suppose Room is the shared resource, and there are 4 Rooms → this means S = 4.
- Think of Keys = 4, where each key represents one available room.
- There are 5 Users (P1 to P5) trying to access a room.
- The first 4 users each take one key and enter their respective rooms.
- The 5th user finds no key available (S = 0) and must wait until a room is released.
- When any user leaves a room, they return a key (Signal(S) → S = S + 1), allowing the waiting user to enter next.
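The rooms-and-keys scenario above can be sketched with a POSIX counting semaphore initialized to 4. The thread function, sleep duration, and messages below are illustrative assumptions, not a fixed API.
Code (illustrative):
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

sem_t rooms;    // counting semaphore: one "key" per room

void *user(void *arg) {
    long id = (long)arg;
    sem_wait(&rooms);                            // take a key (blocks if none is left)
    printf("User %ld entered a room\n", id);
    sleep(1);                                    // use the room
    printf("User %ld left and returned the key\n", id);
    sem_post(&rooms);                            // return the key
    return NULL;
}

int main(void) {
    pthread_t users[5];
    sem_init(&rooms, 0, 4);                      // S = 4: four rooms available
    for (long i = 0; i < 5; i++)
        pthread_create(&users[i], NULL, user, (void *)(i + 1));
    for (int i = 0; i < 5; i++)
        pthread_join(users[i], NULL);
    sem_destroy(&rooms);
    return 0;
}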
Example 02: Counting Semaphore – Two Threads Execution
Let’s walk through how two concurrent threads synchronize using a semaphore. The table below traces each step, and a runnable sketch follows the table.
| Semaphore Value (S) | Thread T0 State | Thread T1 State | Explanation |
|---|---|---|---|
| 1 | Initial Section (Running) | Ready | Semaphore initialized to 1 — resource available. |
| 1 → 0 | P(S) called → Decrement S | Ready | Thread 0 executes Wait(S) and locks the resource. |
| 0 | Enters Critical Section | Ready | Since S ≥ 0, Thread 0 continues (does not sleep). |
| Thread Switch | Ready | Running | CPU switches to Thread 1 while Thread 0 is still inside the critical section. |
| 0 → -1 | Ready | Calls P(S) → Decrement S | Thread 1 tries to enter, but S < 0, so it cannot proceed. |
| -1 | Ready | Sleeping (Blocked) | Thread 1 goes to sleep (waiting for the resource). |
| Thread Switch | Running | Sleeping | CPU switches back to Thread 0. |
| -1 → 0 | Executes V(S) → Increment S | Sleeping | Thread 0 leaves the critical section and releases the resource. |
| 0 | Running | Woken Up | The semaphore wakes up one blocked thread (Thread 1). |
| 0 | Terminated (T0 done) | Ready → Running | Thread 1 resumes from its Wait(S) call and enters its critical section. |
| 0 → 1 | — | Executes V(S) → Increment S | Thread 1 finishes its critical section and releases the resource. |
| 1 | — | Terminated | Semaphore is now free again — no thread waiting. |
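The trace in the table can be reproduced with a tiny single-threaded simulation of the P/V pseudocode used in this article (the value may go negative, and a negative value counts sleeping threads). The thread labels and messages are illustrative.
Code (illustrative):
#include <stdio.h>

int S = 1;    // semaphore initialized to 1

void P(const char *t) {
    S--;
    if (S < 0) printf("%s: P(S) -> S = %d, goes to sleep\n", t, S);
    else       printf("%s: P(S) -> S = %d, enters the critical section\n", t, S);
}

void V(const char *t) {
    S++;
    if (S <= 0) printf("%s: V(S) -> S = %d, wakes one sleeping thread\n", t, S);
    else        printf("%s: V(S) -> S = %d, no thread is waiting\n", t, S);
}

int main(void) {
    P("T0");    // S: 1 -> 0, T0 enters
    P("T1");    // S: 0 -> -1, T1 sleeps
    V("T0");    // S: -1 -> 0, T1 is woken up
    V("T1");    // S: 0 -> 1, resource free again
    return 0;
}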
Counting Semaphore Questions
Let’s solve a few counting semaphore questions. (A small code check of the formula follows Question 3.)
Remember
P (Wait) → Decrements S → Request resource.
V (Signal) → Increments S → Release resource.
Formula
Final Semaphore Value = Initial S − (Number of P operations) + (Number of V operations)
Question 1 – Normal Case (Positive Value):
If S = 10 and 6 P (Wait) operations and 4 V (Signal) operations are performed, find the final value of the semaphore.
Solution:
S = 10−6+4 = 8
Final Semaphore Value = 8. Some resources are still available; no process is waiting.
Question 2 – Zero Case (Fully Used):
If S = 5 and 5 P (Wait) operations and 0 V (Signal) operations are performed, find the final value of the semaphore.
Solution:
S = 5−5+0 = 0
Final Semaphore Value = 0. All resources are in use; new processes must wait.
Question 3 – Negative Case (Blocked Processes):
If S = 4 and 9 P (Wait) operations and 2 V (Signal) operations are performed, find the final value of the semaphore and how many processes are blocked.
Solution:
S = 4−9+2 = −3
Final Semaphore Value = -3. 3 processes are blocked, waiting for the resource to be released.
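As a quick check, the three computations above can be verified with a tiny helper; the function name final_value is only illustrative.
Code (illustrative):
#include <stdio.h>

// Final S = Initial S - (number of P operations) + (number of V operations)
int final_value(int initial, int p_ops, int v_ops) {
    return initial - p_ops + v_ops;
}

int main(void) {
    printf("Q1: %d\n", final_value(10, 6, 4));   // 8
    printf("Q2: %d\n", final_value(5, 5, 0));    // 0
    printf("Q3: %d\n", final_value(4, 9, 2));    // -3, i.e. 3 blocked processes
    return 0;
}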
Binary Semaphore Questions
Final Semaphore Value = Initial S − (Number of P operations) + (Number of V operations)
- For Binary Semaphore, the value of S is always limited to 0 or 1.
- If a calculation gives a result > 1, it is set to 1 (resource free).
- If it gives a result < 0, it means processes are blocked (resource busy).
Question 1 – Normal Case (No Blocking)
If S = 1 and 2 P (Wait) operations and 2 V (Signal) operations are performed, find the final value of the semaphore.
Solution:
S=1−2+2=1
Final Semaphore Value = 1. After both processes enter and exit in order, the resource ends up free again.
Question 2 – Resource Busy (Blocked Process Case)
If S = 1 and 3 P (Wait) operations and 1 V (Signal) operation are performed, find the final semaphore value.
Solution:
S=1−3+1=−1
Since a Binary Semaphore cannot go below 0,
S = 0 → resource busy, and 1 process is blocked (waiting).
Question 3: Releasing the Resource (Wake-up Case)
If S = 0 and 1 V (Signal) operation is performed, find the final value of the semaphore.
Solution:
S=0−0+1=1
Final Semaphore Value = 1. The resource is now released, and a waiting process (if any) can enter.
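A similar sketch with the binary-semaphore rule applied: the stored value is clipped to 0 or 1, and a negative raw result is reported as the number of blocked processes. The helper name binary_final is illustrative.
Code (illustrative):
#include <stdio.h>

int binary_final(int initial, int p_ops, int v_ops, int *blocked) {
    int s = initial - p_ops + v_ops;
    *blocked = (s < 0) ? -s : 0;     // how many processes are left waiting
    if (s < 0) s = 0;                // resource busy
    if (s > 1) s = 1;                // resource free
    return s;
}

int main(void) {
    int blocked;
    int s = binary_final(1, 3, 1, &blocked);          // Question 2 above
    printf("S = %d, blocked = %d\n", s, blocked);     // S = 0, blocked = 1
    return 0;
}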
Question 04:
There are a total of 10 processes in the system:
- Processes P₁ to P₉ execute the following code sequence:
  P(S) → Critical Section → V(S)
- Process P₁₀ executes the following code sequence:
  V(S) → Critical Section → V(S)
Assuming S = 1 is initialized properly, determine the maximum number of processes that can be present in the critical section simultaneously at any point in time.
Solution:
The short answer is 10. Let’s explain.
- Step 1 – Initialization: The semaphore S starts at S = 1.
- Step 2 – P₁₀ Entry: Process P₁₀ executes V(S), immediately setting S = 2, and enters the Critical Section (CS). (1 process in CS.)
  - P₁₀ never waits because it skips the P(S) operation.
- Step 3 – Initial Pass-Through: Since S = 2, two processes (e.g., P₁ and P₂) can execute P(S) successfully, decreasing S to 1 and then to 0.
  - P₁ enters the CS (S = 1).
  - P₂ enters the CS (S = 0).
  - 3 processes are now in the CS: P₁₀, P₁, P₂.
- Repeat Steps 2–3: When P₁₀ leaves the critical section, it executes V(S) and increments S to 1. When it runs its sequence again, its first V(S) increments S to 2, so two more processes (from P₃ to P₉) can enter the critical section. Continuing this procedure, all 10 processes can eventually reside in the critical section at the same time.
Binary vs Counting Semaphore
| Feature | Binary Semaphore | Counting Semaphore |
|---|---|---|
| Possible Values (of Semaphore S) | The semaphore S can only be 0 or 1 — meaning the resource is either locked (0) or free (1) | The semaphore S can take any integer up to N; positive values count the resources currently available, and negative values count the processes that are waiting |
| Main Purpose | Allows only one process to access the critical section at a time | Allows up to N processes to access N identical resources simultaneously |
| Initial Value (S) | S = 1 → one resource available | S = N → N resources available |
| When a Process Enters | Process performs Wait(S) → sets S = 0 → locks the resource | Process performs Wait(S) → S = S – 1 → uses one resource |
| When a Process Exits | Process performs Signal(S) → sets S = 1 → releases the resource | Process performs Signal(S) → S = S + 1 → frees one resource |
| Processes Allowed in Critical Section | Only 1 process at a time | Up to N processes at a time |
| When Resource Not Available | If S = 0, all other processes are blocked | If S = 0, new processes are blocked until a resource is released |
| Common Use Case | For mutual exclusion (mutex locks) — single shared resource | For resource management — multiple identical resources |
| Type of Synchronization | Process synchronization (one-at-a-time control) | Resource synchronization (limited concurrent access) |
| Example | Controlling access to one printer | Controlling access to 4 printers or 5 rooms |
Advantages of Semaphores in OS
Here are the top 5 advantages of using semaphores in an operating system:
- Enforce Mutual Exclusion: This is the primary advantage. Binary semaphores (often called mutexes) are critical for ensuring that only one process can access a critical section (a block of code accessing shared resources) at a time. This prevents race conditions and data corruption.
- Effective Synchronization: Semaphores provide a clean and robust way to coordinate the execution of multiple processes or threads. They ensure that operations occur in a specific, required order, such as guaranteeing that one process completes an action before a second process starts its dependent action.
- Manage Limited Resources (Resource Counting): Counting semaphores are highly efficient for controlling access to a pool of a fixed number of resources (like printers or database connections). By setting the semaphore’s initial value to the total number of available resources, it automatically limits the number of concurrent processes accessing them.
- Simplicity of Operations: Semaphores rely on just two simple, well-defined, and atomic (indivisible) operations: wait() and signal(). This makes the synchronization logic relatively easy to understand and implement in application code.
- Machine Independence: The concept of a semaphore is a fundamental, abstract synchronization primitive. This means the logic and method of coordination are independent of the underlying hardware or CPU architecture, making semaphores a widely applicable and portable synchronization tool.
Disadvantages of Semaphores in OS
The major disadvantages of using semaphores for synchronization in Operating Systems are related to complexity, overhead, and the high potential for programming errors that lead to critical system issues like deadlock. Here are the key disadvantages:
1. High Risk of Deadlock:
This is the most serious drawback. Deadlock occurs easily if processes improperly acquire and release semaphores in a cyclic waiting pattern. For example, if Process A holds Semaphore 1 and waits for Semaphore 2, while Process B holds Semaphore 2 and waits for Semaphore 1, both processes will wait indefinitely, halting system progress.
Explanation:
- P1 executes wait(S1) → locks S1.
- P2 executes wait(S2) → locks S2.
- Now:
  - P1 tries wait(S2) → but S2 is already held by P2 → P1 waits.
  - P2 tries wait(S1) → but S1 is already held by P1 → P2 waits.
- Result → Deadlock.
Both processes are waiting for each other’s semaphore. Neither can execute their signal() statements, so both are stuck forever.
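The circular wait described above can be reproduced with two POSIX semaphores acquired in opposite orders. This is a deliberately broken sketch that is expected to hang; the sleep() calls only make the unlucky interleaving very likely.
Code (illustrative, deadlocks by design):
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t S1, S2;

void *p1(void *arg) {
    sem_wait(&S1);      // P1 locks S1
    sleep(1);           // give P2 time to lock S2
    sem_wait(&S2);      // waits forever: S2 is held by P2
    sem_post(&S2);      // never reached
    sem_post(&S1);
    return NULL;
}

void *p2(void *arg) {
    sem_wait(&S2);      // P2 locks S2
    sleep(1);           // give P1 time to lock S1
    sem_wait(&S1);      // waits forever: S1 is held by P1
    sem_post(&S1);      // never reached
    sem_post(&S2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&S1, 0, 1);
    sem_init(&S2, 0, 1);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);    // never returns: both threads are deadlocked
    pthread_join(t2, NULL);
    return 0;
}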
2. Programming Errors are Common:
Semaphores rely entirely on the programmer correctly placing the wait() and signal() operations. One simple mistake can lead to disastrous consequences, as the sketch after the following points illustrates.
- Forgetting a signal() operation: If a process doesn’t release the semaphore after using the shared resource, other processes waiting for it will be blocked forever.
- Calling signal() too early: Releasing the semaphore before the critical section is complete can allow multiple processes into the critical section, defeating the purpose of mutual exclusion and causing data corruption.
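As a sketch of the first mistake, the early-return path below never executes sem_post(), so every later caller of update() blocks forever. The function and variable names are illustrative.
Code (illustrative, contains the bug on purpose):
#include <semaphore.h>
#include <stdio.h>

sem_t S;

int update(int value) {
    sem_wait(&S);
    if (value < 0)
        return -1;       // BUG: forgets sem_post(&S) on this path
    /* ... critical section work ... */
    sem_post(&S);
    return 0;
}

int main(void) {
    sem_init(&S, 0, 1);
    update(-1);          // takes the semaphore and never releases it
    printf("the next call will block forever...\n");
    update(1);           // blocks here, waiting for a signal that never comes
    return 0;
}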
3. Busy Waiting (Spinlock Overhead):
In many straightforward implementations of semaphores (especially those known as spinlocks), if a process attempts a wait() operation and the resource is busy, the process enters a tight loop, repeatedly checking the semaphore’s value. This wastes CPU cycles and system resources that could otherwise be used by other processes.
For Example:
- Initial State: S = 2 (two resources available).
- Process P1 calls wait() → finds S = 2, so it enters the critical section → S = 1.
- Process P2 calls wait() → finds S = 1, also enters the critical section → S = 0 (no resource left).
- Process P3 calls wait() → finds S = 0 → enters the while loop, repeatedly checking S to see if it becomes > 0. P3 is now busy waiting — continuously using CPU cycles just to check S.
- When either P1 or P2 calls signal(), S becomes 1 → only then does P3 exit the loop and enter the critical section.
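A minimal sketch of such a spin-style wait() using C11 atomics is shown below; it is illustrative only and is not how blocking kernel semaphores are implemented.
Code (illustrative):
#include <stdatomic.h>
#include <stdio.h>

atomic_int S = 2;                // two resources available

void spin_wait(void) {
    for (;;) {
        int v = atomic_load(&S);
        if (v > 0 && atomic_compare_exchange_weak(&S, &v, v - 1))
            return;              // claimed a resource
        // otherwise keep looping, burning CPU cycles (busy waiting)
    }
}

void spin_signal(void) {
    atomic_fetch_add(&S, 1);     // release a resource
}

int main(void) {
    spin_wait();                 // S: 2 -> 1
    spin_wait();                 // S: 1 -> 0
    spin_signal();               // S: 0 -> 1
    printf("S = %d\n", atomic_load(&S));   // prints 1
    return 0;
}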
4. Priority Inversion:
In real-time or priority-based scheduling systems, semaphores can cause a high-priority process to wait for a semaphore held by a low-priority process. This condition, known as priority inversion, effectively blocks the important task, potentially leading to missed deadlines or system instability.
For Example:
- P₁ (Low priority) is using the shared resource, so it locks the semaphore with wait(S).
- P₃ (High priority) also needs the same shared resource later, so it waits for the semaphore.
- P₂ (Medium priority) performs independent work; it does not use the shared resource at all, so it never calls wait(S) or signal(S).
Sequence of events:
- P₁ (Low) starts, executes wait(S), acquires the semaphore (S becomes 0), and enters its critical section.
- P₃ (High) becomes ready and also needs the resource; its wait(S) blocks because S = 0, so P₃ must wait until P₁ releases the semaphore.
- Meanwhile, P₂ (Medium) can preempt P₁ and run its unrelated work, which delays P₁ and therefore delays the high-priority P₃. This is priority inversion.
5. Complexity and Maintenance in Large Systems:
In large operating systems or complex applications, managing a great number of semaphores and ensuring their correct coordination across dozens of processes becomes extremely difficult. This complexity makes debugging synchronization problems highly challenging and time-consuming.