Process (computing)

A list of processes as displayed by htop
A process table as displayed by KDE System Guard
The various process states, displayed in a state diagram, with arrows indicating possible transitions between states.

Instance of a computer program that is being executed by one or many threads.


Computer program

Sequence of instructions in a programming language that a computer can execute or interpret.

Lovelace's description from Note G.
Zuse Z3 replica on display at Deutsches Museum in Munich
Glenn A. Beck changing a tube in ENIAC
Switches for manual input on a Data General Nova 3, manufactured in the mid-1970s
A VLSI integrated-circuit die.
The CPU of IBM's System/360 (1964) was not a microprocessor.
Artist's depiction of Sacramento State University's Intel 8008 microcomputer (1972).
The original IBM Personal Computer (1981) used an Intel 8088 microprocessor.
The DEC VT100 (1978) was a widely used computer terminal.
Prior to programming languages, Betty Jennings and Fran Bilas programmed the ENIAC by moving cables and setting switches.
"Hello, World!" computer program by Brian Kernighan (1978)
A computer program written in an imperative language
A kernel connects the application software to the hardware of a computer: the user interacts with the application software, which interacts with the operating system, which in turn interacts with the hardware.
Physical memory is scattered across RAM and the hard disk; virtual memory appears as one continuous block.
Logic gates: NOT, NAND, NOR, AND, and OR.
A symbolic representation of an ALU.

When an executable is requested for execution, the operating system loads it into memory and starts a process.
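This loading step can be made visible from a program. The sketch below (Python, using the standard `subprocess` module) asks the operating system to load an executable, here the Python interpreter itself, into memory as a new process; the child's process ID differs from the parent's, showing that a separate process was started.

```python
import os
import subprocess
import sys

# Ask the OS to load an executable into memory and run it as a new process.
# The child prints its own PID, which differs from the parent's PID:
# it is a separate process with its own address space.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getpid())"],
    capture_output=True, text=True, check=True,
)
child_pid = int(result.stdout)
print("parent PID:", os.getpid())
print("child PID: ", child_pid)
```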

Inter-process communication

A grid computing system that connects many personal computers over the Internet via inter-process network communication

In computer science, inter-process communication or interprocess communication (IPC) refers specifically to the mechanisms an operating system provides to allow processes to manage shared data.

Memory management

Form of resource management applied to computer memory.

An example of external fragmentation

Memory management is critical to any advanced computer system in which more than a single process may be underway at any time.
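The external fragmentation pictured above can be illustrated with a toy memory map (the block sizes and layout below are invented for illustration): enough memory is free in total, yet no single contiguous hole is large enough to satisfy a request.

```python
# A toy memory map: each entry is (size_in_KB, "free" or "used").
# Alternating allocations and frees have left the free space scattered.
memory = [(100, "used"), (50, "free"), (100, "used"),
          (50, "free"), (100, "used"), (50, "free")]

total_free = sum(size for size, state in memory if state == "free")
largest_hole = max(size for size, state in memory if state == "free")

request = 120  # KB, needs one contiguous block
print("total free:", total_free)        # 150 KB free in total...
print("largest hole:", largest_hole)    # ...but no hole exceeds 50 KB
print("allocation succeeds:", request <= largest_hole)
```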

Kernel (operating system)

Computer program at the core of a computer's operating system, with generally complete control over everything in the system.

A kernel connects the application software to the hardware of a computer
Diagram of a monolithic kernel
In the microkernel approach, the kernel itself only provides basic functionality that allows the execution of servers, separate programs that assume former kernel functions, such as device drivers, GUI servers, etc.
The hybrid kernel approach combines the speed and simpler design of a monolithic kernel with the modularity and execution safety of a microkernel
A diagram of the predecessor/successor family relationship for Unix-like systems

When a process requests a service from the kernel, it must invoke a system call, usually through a wrapper function.
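The wrapper relationship can be seen from a high-level language. In the Unix-only sketch below, Python's `os.getpid()` (a wrapper around the `getpid()` system call) and a direct call into the C library's own `getpid()` wrapper reach the same kernel service and return the same value.

```python
import ctypes
import os

# os.getpid() is a thin wrapper around the getpid() system call.
# Loading the already-linked C library (Unix-only) and calling its
# getpid() wrapper directly reaches the same kernel service.
libc = ctypes.CDLL(None)
pid_via_python = os.getpid()
pid_via_libc = libc.getpid()
print(pid_via_python, pid_via_libc)
```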

Task (computing)

Unit of execution or a unit of work.

A sample thread pool (green boxes) with task queues of waiting tasks (blue) and completed tasks (yellow), in the sense of task as "unit of work".

The term is ambiguous; precise alternative terms include process, light-weight process, thread (for execution), step, request, or query (for work).
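The thread-pool arrangement pictured above can be sketched in a few lines: tasks in the unit-of-work sense wait in a queue and are picked up by whichever worker thread is free (the `task` function below is an illustrative placeholder).

```python
from concurrent.futures import ThreadPoolExecutor

def task(n):
    # A "task" in the unit-of-work sense: an independent piece of work
    # that any idle worker thread may pick up from the queue.
    return n * n

# Three worker threads drain a queue of six waiting tasks.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(task, range(6)))
print(results)
```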

Deadlock

Any situation in which no member of some group of entities can proceed because each waits for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock.

Both processes need resources to continue execution. P1 requires the additional resource R1 and holds resource R2; P2 requires the additional resource R2 and holds R1; neither process can continue.
Four processes (blue lines) compete for one resource (grey circle), following a right-before-left policy. A deadlock occurs when all processes lock the resource simultaneously (black lines). The deadlock can be resolved by breaking the symmetry.

In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process.
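The two-process, two-resource scenario described above can be reproduced with threads and locks. A real deadlock would block forever, so this sketch uses timed lock acquisition to make the circular wait observable (the barriers and timeout are devices of the example, not part of the deadlock definition).

```python
import threading

lock_r1 = threading.Lock()
lock_r2 = threading.Lock()
start = threading.Barrier(2)
done = threading.Barrier(2)
outcome = {}

def p1():
    lock_r2.acquire()                 # P1 holds R2...
    start.wait()
    # ...then waits for R1, which P2 holds: a circular wait.
    outcome["p1"] = lock_r1.acquire(timeout=0.2)
    done.wait()                       # hold R2 until both attempts finish
    lock_r2.release()

def p2():
    lock_r1.acquire()                 # P2 holds R1...
    start.wait()
    outcome["p2"] = lock_r2.acquire(timeout=0.2)
    done.wait()
    lock_r1.release()

threads = [threading.Thread(target=p1), threading.Thread(target=p2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(outcome)   # neither acquisition can succeed
```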

Parallel computing

IBM's Blue Gene/P massively parallel supercomputer
A graphical representation of Amdahl's law. The speedup of a program from parallelization is limited by how much of the program can be parallelized. For example, if 90% of the program can be parallelized, the theoretical maximum speedup using parallel computing would be 10 times no matter how many processors are used.
Assume that a task has two independent parts, A and B. Part B takes roughly 25% of the time of the whole computation. By working very hard, one may be able to make this part 5 times faster, but this only reduces the time for the whole computation by a little. In contrast, less work may be needed to make part A twice as fast. This makes the whole computation much faster than optimizing part B, even though part B's speedup is greater by ratio (5 times versus 2 times).
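Both the 90% figure and the A/B comparison follow directly from Amdahl's law, which can be checked numerically:

```python
def speedup(fraction, factor):
    # Amdahl's law: overall speedup when a given fraction of the work
    # is accelerated by the given factor.
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# 90% parallelizable: speedup approaches 10x no matter how many
# processors are used.
print(speedup(0.90, 1_000_000))   # close to 10

# The A/B example: part B is 25% of the runtime, part A is 75%.
print(speedup(0.25, 5))   # 5x speedup of part B: overall only 1.25x
print(speedup(0.75, 2))   # 2x speedup of part A: overall 1.6x
```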
Taiwania 3 of Taiwan, a parallel supercomputer used in COVID-19 research.
A canonical processor without a pipeline. It takes five clock cycles to complete one instruction, so the processor achieves subscalar performance.
A canonical five-stage pipelined processor. In the best case, it takes one clock cycle to complete one instruction, so the processor achieves scalar performance.
A canonical five-stage pipelined processor with two execution units. In the best case, it takes one clock cycle to complete two instructions, so the processor achieves superscalar performance.
A logical view of a non-uniform memory access (NUMA) architecture. Processors in one directory can access that directory's memory with less latency than they can access the other directory's memory.
A Beowulf cluster
A cabinet from IBM's Blue Gene/L massively parallel supercomputer
Nvidia's Tesla GPGPU card
The Cray-1 is a vector processor
ILLIAC IV, "the most infamous of supercomputers"

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously.

Call stack

Stack data structure that stores information about the active subroutines of a computer program.

Call stack layout for upward-growing stacks after one subroutine has called another, which is the currently executing routine

There is usually exactly one call stack associated with a running program (or more accurately, with each task or thread of a process), although additional stacks may be created for signal handling or cooperative multitasking (as with setcontext).

Computing

Any goal-oriented activity requiring, benefiting from, or creating computing machinery.

Computer simulation, one of the main cross-computing methodologies.
ENIAC, the first programmable general-purpose electronic digital computer

The execution process carries out the instructions in a computer program.

Interrupt

An interrupt is a request for the processor to interrupt the currently executing code (when permitted), so that the event can be processed in a timely manner.
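Signals are a software analogue of this mechanism that user programs can observe. In the Unix-only sketch below (the `handler` function and use of `SIGUSR1` are illustrative choices), delivering a signal diverts the normal control flow into a registered handler, much as an interrupt diverts the processor into a service routine.

```python
import os
import signal

received = []

def handler(signum, frame):
    # Runs outside the normal flow of the program when the signal
    # arrives, in the manner of an interrupt service routine.
    received.append(signum)

# Unix-only: install a handler, then raise the signal for this process.
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)
print(received)   # the handler has recorded the delivered signal
```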

Interrupt sources and processor handling

In kernel code, it is often the case that some types of software interrupts are not supposed to occur.