Round-robin (RR) is one of the algorithms employed by process and network schedulers in computing. As the term is generally used, time slices (also known as time quanta) are assigned to each process in equal portions and in circular order, handling all processes without priority (a scheme also known as cyclic executive). Round-robin scheduling is simple, easy to implement, and starvation-free. Although it originated as an operating system concept, it can also be applied to other scheduling problems, such as data packet scheduling in computer networks.
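The circular, fixed-quantum behavior described above can be sketched with a simple simulation. This is a minimal illustration, not a real scheduler: process IDs, the single ready queue, and the absence of arrival times are simplifying assumptions.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling over a list of CPU burst times.

    Returns the completion time of each process. All processes are
    assumed to arrive at time 0 (an illustrative simplification).
    """
    queue = deque(enumerate(burst_times))  # (pid, remaining burst)
    completion = [0] * len(burst_times)
    time = 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)      # run for at most one quantum
        time += run
        remaining -= run
        if remaining > 0:
            queue.append((pid, remaining)) # rejoin the circular queue
        else:
            completion[pid] = time         # process finished
    return completion
```

With burst times `[5, 3, 1]` and a quantum of 2, the processes complete at times `[9, 8, 5]`: every process makes progress each cycle, which is why round-robin is starvation-free.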
First-fit is a memory allocation algorithm using a linked list data structure. It sequentially traverses the list of available memory blocks until it finds one that is large enough to satisfy the request. It then allocates the process to that memory block. The size of the allocated block is then reduced by the amount requested, and the address of the newly allocated block is returned to the process. If the process does not fit in any of the available blocks, it is added to the list of waiting processes. When a process completes, it releases its memory block, which is then merged with any adjacent free blocks in memory.
Best-fit is a memory allocation algorithm using a linked list data structure. Unlike first-fit, it traverses the entire list of available memory blocks and selects the smallest block that is large enough to satisfy the request, minimizing the leftover space. It then allocates the process to that memory block. The size of the allocated block is then reduced by the amount requested, and the address of the newly allocated block is returned to the process. If the process does not fit in any of the available blocks, it is added to the list of waiting processes. When a process completes, it releases its memory block, which is then merged with any adjacent free blocks in memory.
Worst-fit is a memory allocation algorithm using a linked list data structure. It traverses the entire list of available memory blocks and selects the largest one, on the theory that the large remainder left behind is more likely to be usable by later requests. It then allocates the process to that memory block. The size of the allocated block is then reduced by the amount requested, and the address of the newly allocated block is returned to the process. If the process does not fit in any of the available blocks, it is added to the list of waiting processes. When a process completes, it releases its memory block, which is then merged with any adjacent free blocks in memory.
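The three placement strategies differ only in which free block they choose, so they can be contrasted in one sketch. Representing the free list as a plain list of sizes (rather than a linked list of nodes) and the strategy names are simplifying assumptions for illustration; block selection and the size reduction after allocation follow the descriptions above.

```python
def find_block(free_blocks, request, strategy):
    """Return the index of the free block chosen for `request`, or None.

    first: lowest-indexed block that fits (stop at the first match)
    best:  smallest block that fits
    worst: largest block overall (if it fits)
    """
    fits = [(size, i) for i, size in enumerate(free_blocks) if size >= request]
    if not fits:
        return None                       # request must wait
    if strategy == "first":
        return min(fits, key=lambda c: c[1])[1]  # earliest in the list
    if strategy == "best":
        return min(fits)[1]               # tightest fit, least leftover
    if strategy == "worst":
        return max(fits)[1]               # largest fit, biggest leftover
    raise ValueError(strategy)

def allocate(free_blocks, request, strategy):
    """Allocate by shrinking the chosen block, as described in the text."""
    i = find_block(free_blocks, request, strategy)
    if i is not None:
        free_blocks[i] -= request         # reduce block by amount requested
    return i
```

For a free list `[100, 500, 200, 300]` and a request of 250: first-fit picks the 500 block (index 1, the first that fits), best-fit picks the 300 block (index 3, least waste), and worst-fit also picks the 500 block (index 1, largest).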
First Come First Serve (FCFS) is an operating system scheduling algorithm that executes queued requests and processes in their order of arrival. It is the simplest CPU scheduling algorithm: the process that requests the CPU first is allocated the CPU first. This is managed with a FIFO queue, typically implemented as a linked list. As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of the queue; when the CPU becomes free, it is assigned to the process at the head of the queue.
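Because FCFS is non-preemptive, each process simply runs to completion before the next one starts, which makes its timing easy to compute. The sketch below assumes the processes are already listed in arrival order (as they would be when popped from the head of the FIFO queue); arrival and burst times are illustrative inputs.

```python
def fcfs(arrival_times, burst_times):
    """Return (start, finish) times for processes scheduled FCFS.

    Input lists are assumed sorted by arrival time, mirroring the
    FIFO ready queue: head of queue runs first, to completion.
    """
    schedule = []
    time = 0
    for arrival, burst in zip(arrival_times, burst_times):
        start = max(time, arrival)   # CPU may sit idle until arrival
        time = start + burst         # non-preemptive: run to completion
        schedule.append((start, time))
    return schedule
```

For arrivals `[0, 1, 2]` with bursts `[4, 3, 1]`, the schedule is `[(0, 4), (4, 7), (7, 8)]`; note the short third process waits behind the longer ones, the "convoy" behavior FCFS is known for.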