
9. Operating System - Extra



Interrupt: In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention.

Trap: In computing and operating systems, a trap, also known as an exception or a fault, is typically a type of synchronous interrupt caused by an exceptional condition (e.g., breakpoint, division by zero, invalid memory access).

Signal: A signal is a software-generated interrupt that the OS delivers to a process, for example when the user presses Ctrl-C or when another process wants to notify it of an event. There is a fixed set of signals that can be sent to a process; each signal is identified by an integer and has a symbolic name.
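
For example, here is a minimal sketch (assuming a POSIX system) of a process installing a handler for SIGINT, the signal delivered when the user presses Ctrl-C:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Handler that runs asynchronously when SIGINT is delivered. */
    static void handle_sigint(int signum)
    {
        (void)signum;
        /* write() is async-signal-safe; printf() is not. */
        const char msg[] = "Caught SIGINT (Ctrl-C)\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = handle_sigint;   /* function to run on SIGINT */
        sigemptyset(&sa.sa_mask);        /* do not block extra signals */
        sigaction(SIGINT, &sa, NULL);    /* install the handler */

        for (;;)
            pause();                     /* sleep until a signal arrives */
    }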

System call: The interface between a process and the operating system is provided by system calls. A system call is usually made when a process running in user mode needs access to a resource; it then requests the kernel to provide that resource via the system call (a small sketch follows the list of types below).

Types of System calls:

  • Process control.
  • File management.
  • Device management.
  • Information maintenance.
  • Communications.
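
As a small sketch (assuming a POSIX system), the program below invokes two of these categories directly: write (file management) and getpid (information maintenance); each call traps from user mode into the kernel:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* write() asks the kernel to perform file I/O on stdout. */
        const char msg[] = "hello from user mode\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);

        /* getpid() asks the kernel for this process's identifier. */
        printf("my process id is %d\n", (int)getpid());
        return 0;
    }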

Fork system call: The fork system call is used to create a new process, which is called the child process.
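
A minimal sketch (POSIX assumed): both the parent and the child continue from the point of the fork call and are told apart by its return value:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();              /* create the child process */

        if (pid < 0) {
            perror("fork");              /* creation failed */
        } else if (pid == 0) {
            /* fork() returns 0 in the child. */
            printf("child:  pid=%d\n", (int)getpid());
        } else {
            /* fork() returns the child's pid in the parent. */
            printf("parent: pid=%d, child=%d\n", (int)getpid(), (int)pid);
        }
        return 0;
    }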

Exec: In computing, exec is a functionality of an operating system that runs an executable file in the context of an already existing process, replacing the previous executable. This act is also referred to as an overlay.

Wait: The wait() system call blocks the calling process until one of its child processes exits or a signal is received.

Exit: On many computer operating systems, a process terminates its execution by making an exit system call.
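
Exec, wait, and exit are typically used together with fork. In this sketch (POSIX assumed; the path /bin/ls is only an example), the child overlays itself with ls and terminates via exit, while the parent blocks in wait until the child has finished:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                  /* create a child */

        if (pid == 0) {
            /* Child: replace this process image with /bin/ls. */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");                 /* reached only if exec failed */
            exit(1);                         /* terminate the child */
        }

        int status;
        wait(&status);                       /* parent blocks until the child exits */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }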

Thread: A thread is a single sequential flow of execution within a process. Because threads have some of the properties of processes, they are called lightweight processes.
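
A small sketch using POSIX threads (compile with -pthread): the process creates a second thread that shares its address space, then waits for it to finish:

    #include <pthread.h>
    #include <stdio.h>

    /* Function executed by the new thread. */
    static void *worker(void *arg)
    {
        printf("hello from thread %s\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;

        /* Start a second flow of control inside the same process. */
        pthread_create(&tid, NULL, worker, "A");

        pthread_join(tid, NULL);     /* wait for the thread to finish */
        printf("main thread done\n");
        return 0;
    }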

Multicore programming: Multicore programming helps you create concurrent systems for deployment on multicore processors and multiprocessor systems. A multi-core processor system is a single processor with multiple execution cores in one chip. By contrast, a multiprocessor system has multiple processors on the motherboard or chip.

CPU scheduling: CPU scheduling is the activity of deciding which of the ready processes gets the CPU next; its aim is to make the system efficient, fast, and fair.

Process synchronization: Process synchronization is the coordination of processes that share system resources, so that concurrent accesses happen in an orderly way and shared data stays consistent.

Semaphore: In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system such as a multitasking operating system.
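
A sketch using POSIX unnamed semaphores (sem_init/sem_wait/sem_post), assuming a platform such as Linux where they are available; a binary semaphore guards a shared counter incremented by two threads:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t sem;            /* controls access to the shared counter */
    static int counter = 0;      /* the common resource */

    static void *increment(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&sem);      /* acquire: decrement, or block while zero */
            counter++;           /* only one thread is in here at a time */
            sem_post(&sem);      /* release: increment, wake a waiter */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&sem, 0, 1);    /* binary semaphore, initial value 1 */

        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("counter = %d\n", counter);   /* 200000 with the semaphore */
        sem_destroy(&sem);
        return 0;
    }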

Deadlock: A deadlock is a situation in which two computer programs sharing the same resources effectively prevent each other from accessing them, so that neither can make progress.
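
The classic illustration is two threads taking the same pair of locks in opposite order; depending on timing, each ends up waiting for the lock the other holds and the program hangs (a sketch, not production code):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *thread_one(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_a);    /* holds A ... */
        sleep(1);                       /* give the other thread time to take B */
        pthread_mutex_lock(&lock_b);    /* ... and waits for B */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    static void *thread_two(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_b);    /* holds B ... */
        sleep(1);
        pthread_mutex_lock(&lock_a);    /* ... and waits for A: deadlock */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread_one, NULL);
        pthread_create(&t2, NULL, thread_two, NULL);
        pthread_join(t1, NULL);         /* likely never returns */
        pthread_join(t2, NULL);
        return 0;
    }

Acquiring locks in one agreed global order is the usual way to prevent this situation.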

MVT: MVT (Multiprogramming with a Variable number of Tasks) is the memory management technique in which each job gets just the amount of memory it needs. That is, the partitioning of memory is dynamic and changes as jobs enter and leave the system.

MFT: MFT (Multiprogramming with a fixed number of Tasks) is one of the old memory management techniques in which the memory is partitioned into fixed-size partitions and each job is assigned to a partition. The memory assigned to a partition does not change.

Inter-process communication: Inter-process communication (IPC) is a set of programming interfaces that allow a programmer to coordinate activities among different program processes that can run concurrently in an operating system.
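
One common IPC mechanism is an anonymous pipe; in this sketch (POSIX assumed) the parent writes a message that the child reads:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {
            /* Child: read the parent's message from the pipe. */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            buf[n > 0 ? n : 0] = '\0';
            printf("child received: %s\n", buf);
            return 0;
        }

        /* Parent: send a message to the child, then wait for it. */
        close(fd[0]);
        const char msg[] = "hello via pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }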

Multithreading: Multithreading is the ability of a program or operating system process to execute multiple threads concurrently, so that it can serve more than one user at a time or handle multiple requests at once.

Contiguous Memory Allocation: In contiguous memory allocation, when a process requests memory, a single contiguous section of a memory block is assigned to the process according to its requirement.

Critical section: A critical section is a segment of code in which a process accesses shared resources (such as shared variables or data structures). To avoid race conditions, at most one process may execute in its critical section at a time.
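
A sketch of protecting a critical section with a pthread mutex; without the lock, the two threads' increments could interleave and updates would be lost:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long shared = 0;                  /* resource shared by both threads */

    static void *add_many(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);       /* entry section */
            shared++;                        /* critical section: one thread at a time */
            pthread_mutex_unlock(&lock);     /* exit section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, add_many, NULL);
        pthread_create(&t2, NULL, add_many, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %ld\n", shared);    /* 2000000; unpredictable without the lock */
        return 0;
    }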
