[2022] Operating system interview questions
An Operating System is an interface between the user and the hardware of a computer. It performs basic tasks such as file management, memory management, process management, and input/output handling, and it controls peripheral devices such as disk drives and printers. All of these tasks enable better communication between the user and the machine. The OS acts as a translator: it takes the human-readable instructions the user gives to the machine and interprets them into machine-level language.
- What is an Operating System?
An Operating System is system software that provides an interface between the application software and the hardware of a computer. It offers an environment in which the user can execute instructions while interacting with the computer hardware.
- Name some functions that the Operating system performs.
An OS performs the following functions:
- Memory management
- Processor management
- Device management
- File management
- Security
- Job accounting
- Control over system performance
- Error detection
- Communication between the user and the software
- Communication between the software and the hardware
- What are the different types of Operating Systems?
The different types of Operating Systems are:
- Batched Operating System
- Interactive OS
- Multi-processing Operating system
- Multitasking Operating System
- Distributed Operating System
- Multi-programmed Operating Systems
- Real-Time Operating System
- Timesharing Operating System
- What is the use of an Operating System?
An operating system acts as a manager between the system software and the hardware: it directs the hardware to act according to the instructions of the software, controls the flow of programs, and provides an environment in which software can communicate with the system hardware.
- What is Booting?
Booting is the procedure of starting the computer: when the machine is switched on, the kernel is loaded into main memory and the Operating System takes control of the system.
- What is the Bootstrap program?
A Bootstrap Program is the first program that runs when the system is booted; it loads the kernel of the Operating System. The Bootstrap program is stored in read-only memory (ROM).
- What is a multi-programming system in OS?
In a multi-programming system, several programs are kept in different parts of main memory at the same time, and the CPU switches between them, for example when one program is waiting for I/O. This reduces idle time and ensures maximum utilization of the available resources.
- What is a multitasking system?
A multitasking system keeps several programs in main memory and switches the CPU between them so that they appear to execute simultaneously. It helps in effective resource utilization.
- What is the time-sharing operating system?
A time-sharing operating system lets many users, located at different terminals, use a particular computer system at the same time. Each user is given a small slice of the processor's time in turn, which is why the technique is called time-sharing. It is a logical extension of multiprogramming.
- What are the advantages of the multiprocessor system?
A multiprocessor system, as its name suggests, uses more than one processor at a time. Because of the larger number of processors, the processing capability of the system increases: more work can be done in parallel, the system performs at its best, and the available resources are utilized to their maximum.
- What is a virtual memory?
Virtual memory is a memory management technique that lets a process execute using both primary and secondary memory. A program executes out of main memory, but its pages are loaded from secondary memory as needed. This creates the illusion of a very large memory and makes users believe they have ample memory at their disposal.
- What is a Kernel in the Operating system?
The kernel is an essential part of the Operating System and is referred to as its core. The kernel resides in main memory and is loaded before any other part of the operating system. Every operating system has a kernel; for example, the Linux kernel.
- What are the main functions of a kernel in an operating system?
Since the kernel is the core of the operating system, it performs the following functions:
- Process management
- Resource management
- Disk management
- Memory management
- Device management
- Communication between hardware and software.
- How many types of kernels are used in OS?
There are several kernel designs, but two of them are the ones most commonly used:
- Monolithic Kernel
- MicroKernel
A Monolithic Kernel keeps all the user services and kernel services in the same address space. Older operating systems used this design. Eg: Linux, Windows 95/98, Unix, etc.
A MicroKernel is small in size, and the user services and kernel services reside in separate address spaces. Some operating systems, such as Mac OS X and Windows, use microkernel-based (hybrid) designs.
- What are the disadvantages of using a Microkernel?
- Complex process management
- Debugging the messaging is complex.
- Loss in performance, because services communicate through message passing rather than direct calls and more supporting software is required.
- What is SMP?
Symmetric Multiprocessing (SMP) is an architecture that contains multiple identical processors. The processors work on the system's processes in parallel, and all of them share a single main memory.
- What is Asymmetric clustering?
In asymmetric clustering, one server actively runs the application while the other servers remain in standby mode, monitoring the active server and taking over its work if it fails.
- What is a thread?
A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the
execution history. A thread is also called a lightweight process.
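A minimal sketch of creating and joining a thread with POSIX threads; the worker function and thread id below are just illustrative names:

```c
/* Minimal sketch: creating and joining a POSIX thread.
   Compile with: gcc thread_demo.c -o thread_demo -pthread */
#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function on its own stack, with its own program counter. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);  /* spawn a lightweight process */
    pthread_join(tid, NULL);                  /* wait for it to finish */
    return 0;
}
```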
- What is demand Paging?
Demand paging is a concept used by virtual memory. During execution, only part of a process needs to be present in main memory, so the process is divided into pages. Only a few pages are loaded into main memory at a time; the rest are kept in secondary memory and are loaded as they are demanded.
- What is a process?
A process is an instance of a computer program that is being executed by one or many threads. It contains the code of the program and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently, or of a single thread that executes instructions sequentially.
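A minimal sketch of process creation on a Unix-like system using fork(); the messages printed are only illustrative:

```c
/* Minimal sketch: fork() creates a new process that runs independently
   of its parent, with its own copy of the address space. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                      /* duplicate the calling process */

    if (pid == 0) {
        printf("child  pid=%d\n", (int)getpid());
    } else {
        printf("parent pid=%d, child pid=%d\n", (int)getpid(), (int)pid);
        wait(NULL);                          /* wait for the child to finish */
    }
    return 0;
}
```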
- What are the states through which a process goes during its life cycle?
The process states are:
- New
- Running
- Waiting
- Ready
- Terminated
- How a thread is different from the process?
- A process is independent and has its own address space, whereas a thread depends on its parent process and shares that process's address space.
- Threads can assist each other directly, for example through shared data, whereas processes must use inter-process communication to do so.
- If one thread is blocked, another thread of the same process can continue executing, which is not possible with separate, dependent processes.
- What is a deadlock?
A deadlock is a condition that occurs when two or more processes (or threads) are each holding a resource and waiting for a resource held by another member of the group. Because each one waits for the others to finish, none of them can proceed, and execution halts.
To the user, a deadlock is visible as a system hang.
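An illustrative sketch (assuming POSIX threads) of how a deadlock can arise: two threads acquire two locks in opposite order and then wait on each other forever:

```c
/* Deadlock sketch: thread_one holds lock_a and waits for lock_b,
   while thread_two holds lock_b and waits for lock_a.
   Compile with: gcc deadlock_demo.c -o deadlock_demo -pthread */
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void *thread_one(void *arg) {
    pthread_mutex_lock(&lock_a);   /* holds A ... */
    sleep(1);
    pthread_mutex_lock(&lock_b);   /* ... and waits for B, held by thread_two */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *thread_two(void *arg) {
    pthread_mutex_lock(&lock_b);   /* holds B ... */
    sleep(1);
    pthread_mutex_lock(&lock_a);   /* ... and waits for A, held by thread_one */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread_one, NULL);
    pthread_create(&t2, NULL, thread_two, NULL);
    pthread_join(t1, NULL);        /* never returns: the program hangs here */
    pthread_join(t2, NULL);
    return 0;
}
```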
- What are the necessary conditions for a deadlock?
For a deadlock to occur following conditions must be met:
- Mutual Exclusion
- Hold and wait
- No preemption
- Circular wait
- What is the starvation condition?
Starvation occurs when a process does not get the resources it needs to execute for a long time, usually because other processes are holding onto the resources that the particular process requires.
- What is a command interpreter?
A command interpreter is a text-based input and output interface between the user and the operating system. The user types specific commands at the keyboard, and the output is displayed after the input has been processed. Eg: Command Prompt in Windows
- What is a daemon in OS?
The name is sometimes expanded as Disk And Execution MONitor. In multitasking operating systems, a daemon is a computer program that runs as a background process rather than under the direct control of the user. The life cycle of a daemon typically starts when the system is booted and continues until the system shuts down.
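A rough sketch (assuming a Unix-like system) of how a traditional daemon detaches itself from the terminal; real daemons add logging, signal handling, and often a second fork:

```c
/* Daemonization sketch: fork, let the parent exit, start a new session,
   and keep running in the background with no controlling terminal. */
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void) {
    if (fork() > 0)
        exit(0);                 /* parent exits; the child is adopted by init */

    setsid();                    /* new session: detach from the controlling terminal */
    chdir("/");                  /* do not keep any directory in use */

    /* redirect the standard streams away from the terminal */
    int fd = open("/dev/null", O_RDWR);
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);

    while (1)
        sleep(60);               /* the daemon's background work would go here */
}
```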
- What is a race condition?
A race condition occurs when different operations are performed on the same data at the same time, so the outcome of execution depends on the order in which those operations happen to run. A race condition can produce an undesirable or incorrect result.
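A small sketch (POSIX threads) of a race condition: two threads increment a shared counter without synchronization, so updates are lost:

```c
/* Race condition sketch: counter++ is a read-modify-write, not atomic,
   so concurrent increments can overwrite each other.
   Compile with: gcc race_demo.c -o race_demo -pthread */
#include <pthread.h>
#include <stdio.h>

long counter = 0;                       /* shared data */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                      /* unsynchronized access */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the printed value is usually smaller. */
    printf("counter = %ld\n", counter);
    return 0;
}
```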
- What is process synchronization?
Process synchronization is used to prevent the undesirable outcomes of a race condition. It ensures that only one process at a time accesses the shared data (the critical section).
- What do you mean by PCB?
The PCB, or Process Control Block, is an Operating System data structure that collects and stores information about a process. Also called a process descriptor, a PCB is created by the Operating System whenever a process is created, to hold that process's status and information.
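An illustrative sketch of the kind of fields a PCB might hold; the field names are hypothetical, and real kernels (for example, Linux's struct task_struct) store far more:

```c
/* Hypothetical PCB layout: one entry per process, kept by the OS. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

typedef struct pcb {
    int         pid;             /* process identifier */
    proc_state  state;           /* current scheduling state */
    void       *program_counter; /* where execution resumes after a context switch */
    long        registers[16];   /* saved CPU registers */
    void       *page_table;      /* memory-management information */
    int         open_files[16];  /* I/O status information */
    int         priority;        /* scheduling information */
    struct pcb *next;            /* link in a scheduler queue */
} pcb;
```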
- What is Semaphore?
A semaphore is a variable used to synchronize processes. There are two types of semaphores: counting semaphores and binary semaphores.
A counting semaphore can take any non-negative integer value, while a binary semaphore can only take the values 0 and 1.
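A minimal sketch using POSIX semaphores, with the semaphore initialized to 1 so it behaves as a binary semaphore (a counting semaphore would simply start at a larger value):

```c
/* Binary semaphore sketch: sem_wait/sem_post guard the shared counter.
   Compile with: gcc sem_demo.c -o sem_demo -pthread */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t sem;
int shared = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);   /* P(): decrement, block while the value is 0 */
        shared++;         /* critical section */
        sem_post(&sem);   /* V(): increment, wake a waiting thread */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);             /* initial value 1 => binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);  /* always 200000: access is serialized */
    sem_destroy(&sem);
    return 0;
}
```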
- What is the FCFS scheduling algorithm?
FCFS stands for First Come First Serve; it is a scheduling algorithm in which the CPU serves processes in the order in which they arrive. FCFS can suffer from the convoy effect, in which short processes wait a long time behind a single long-running process that reached the CPU first.
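A small sketch that simulates FCFS in user space with made-up burst times, showing how each process's waiting time is the sum of the bursts that arrived before it:

```c
/* FCFS simulation sketch: processes run in arrival order. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* sample CPU burst times, in arrival order */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d: waiting time = %d, turnaround = %d\n",
               i + 1, wait, wait + burst[i]);
        total_wait += wait;
        wait += burst[i];                     /* later processes also wait for this burst */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```

With these sample bursts, the long first job makes the two short jobs wait 24 and 27 time units, which is the convoy effect described above.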
- What are the different RAID Levels?
RAID 0 – Non-redundant striping
RAID 1 – Mirrored Disks
RAID 2 – Memory-style error-correcting codes
RAID 3 – Bit-interleaved Parity
RAID 4 – Block-interleaved Parity
RAID 5 – Block-interleaved distributed Parity
RAID 6 – P+Q Redundancy
- What is a Cache memory?
Cache is a small, fast, volatile memory placed between the processor and main memory, which provides high-speed data access to the processor. Whenever the processor needs a piece of data regularly or frequently, that data is kept in the cache so that the next time it can be fetched directly from the cache rather than from main memory. Cache memory is very fast but limited in size.
- What is IPC?
IPC stands for Inter-Process Communication. It is a set of mechanisms, provided by the Operating System, through which different processes can communicate with each other.
- Name the Various IPC mechanisms.
- Sockets
- Pipe
- Shared Memory
- Signals
- Message Queues
- What is a Context Switch?
Context switching is the technique of storing the context, or state, of a running process (in its PCB) so that it can be reloaded later and execution can resume from the same point at which it was stopped.
- What is the difference between a compiler and Interpreter?
A compiler translates the entire program into machine code before the program is run, whereas an interpreter reads the code line by line and executes it immediately.
- What are Sockets?
Sockets are an inter-process communication mechanism used to provide point-to-point communication between two processes. Sockets are widely used in client-server applications; protocols such as FTP, SMTP, and POP3 use them to implement the connection between a server and a client.
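A minimal sketch of the client side of a TCP socket; the address 127.0.0.1 and port 8080 are arbitrary example values, and error handling is kept to a minimum:

```c
/* TCP client sketch: create a socket, connect, write a message, read a reply. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP socket */

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                /* example port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        write(fd, "hello\n", 6);                /* point-to-point communication */
        char buf[64];
        int n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("server said: %s", buf);
        }
    } else {
        perror("connect");
    }
    close(fd);
    return 0;
}
```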
- What do you understand by Main Memory and Secondary Memory?
Main memory is directly connected to the processor and acts as a bridge between the processor and secondary memory. Its main job is to hold the data and instructions from secondary memory that the processor currently needs. Eg: RAM and ROM.
Main memory does not store data permanently; it holds data temporarily and supplies it to the processor for execution.
Secondary memory holds data permanently. It has a much larger capacity than primary memory, but it is also much slower to access. Eg: Hard Disk Drives.
- Is a deadlock situation possible with a single process?
No, for a deadlock situation to occur, at least two dependent processes are needed. A deadlock situation can only arise when these four conditions are met:
- Hold and Wait
- No Preemption
- Mutual Exclusion
- Circular wait.
- What are interrupts?
Interrupts are signals generated by external devices (or by software) that ask the processor to pause the currently running process. The processor uses context switching to switch between the current process and the handling of the new signal generated by the device. Interrupts help the CPU prioritize what it executes.
- What are zombie processes?
Zombie, or defunct, processes are child processes that remain in the process table even after they have terminated, because the parent process has not yet read their exit status. Kill commands have no effect on these processes, since they have already finished executing.
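A short sketch (assuming a Unix-like system) of how a zombie appears: the child exits immediately, but the parent delays calling wait(), so for that window the child shows up as "defunct" in ps:

```c
/* Zombie sketch: the child terminates, but its process-table entry stays
   until the parent reaps it with wait(). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        exit(0);                     /* child terminates immediately */
    } else {
        printf("child %d is now a zombie; check ps during the next 30s\n", (int)pid);
        sleep(30);                   /* the child stays defunct during this window */
        wait(NULL);                  /* reaping removes the zombie entry */
    }
    return 0;
}
```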
- What is a pipe in OS?
A pipe is a method through which information is exchanged between processes. A pipe forms a one-way communication channel, so a process can only send information, such as its output or other parameters, to another process through it. To set up two-way communication between two processes, we need two pipes, one for each direction.
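A minimal sketch of one-way communication through a pipe on a Unix-like system: the parent writes a short message and the child reads it:

```c
/* Pipe sketch: fd[0] is the read end, fd[1] is the write end. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[32];

    if (pipe(fd) == -1)
        return 1;

    if (fork() == 0) {               /* child: reads from the pipe */
        close(fd[1]);
        read(fd[0], buf, sizeof(buf));
        printf("child received: %s\n", buf);
        close(fd[0]);
    } else {                         /* parent: writes into the pipe */
        close(fd[0]);
        write(fd[1], "hello", 6);    /* 6 bytes: "hello" plus the '\0' */
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}
```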
- What is a mutex in OS?
Mutex is an abbreviation for Mutual Exclusion. A mutex lets multiple threads use the same resource, but not simultaneously: a thread locks the mutex while it uses the resource, and the other threads cannot acquire the lock, and therefore cannot use the resource, until the first thread releases it.
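A minimal sketch (POSIX threads) showing a mutex serializing access to a shared counter; it is the synchronized counterpart of the race-condition example above:

```c
/* Mutex sketch: only one thread at a time executes the locked region.
   Compile with: gcc mutex_demo.c -o mutex_demo -pthread */
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* block until the lock is free */
        counter++;                    /* critical section */
        pthread_mutex_unlock(&lock);  /* let other threads enter */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 */
    return 0;
}
```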
- What is a critical section?
In concurrent programming, concurrent access to shared resources can lead to unexpected or error-prone behavior, so the parts of the program where a shared resource is accessed must be protected against concurrent access. This protected section is called the critical section or critical region.
- What is process scheduling?
Process scheduling is a routine carried out by the process manager of the system. The process manager can use different strategies to remove a running process from the CPU and select another process from the list of available ones to be executed. There are numerous algorithms for process scheduling.
- What are the different Process Scheduling algorithms?
The different process scheduling algorithms are:
- First Come First Serve
- Shortest Job First
- Priority Scheduling
- Round Robin Scheduling
- What is the SJFS?
SJFS, or Shortest Job First Scheduling, is a scheduling algorithm that can follow either a preemptive or a non-preemptive approach. The process whose CPU burst (execution time) is shortest gets to run next; in other words, the processor gives priority to the jobs with the lowest execution time.
- What is Round Robin Scheduling?
Round Robin Scheduling is a preemptive scheduling algorithm. In this each process gets an equal time for execution in a cyclic fashion. It is simple, easy to implement, and starvation-free as all processes get a fair share of CPU.
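A small sketch simulating Round Robin in user space with made-up burst times and a time quantum of 2, just to show the cyclic allocation of CPU time:

```c
/* Round Robin simulation sketch: each ready process gets at most one
   time quantum per turn, in a fixed cyclic order. */
#include <stdio.h>

int main(void) {
    int remaining[] = {5, 3, 8};                      /* sample remaining burst per process */
    int n = sizeof(remaining) / sizeof(remaining[0]);
    int quantum = 2, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;                             /* this process already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%2d: P%d runs for %d\n", time, i + 1, slice);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0)
                done++;
        }
    }
    return 0;
}
```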