Scheduling (computing)

In computing, scheduling is the method by which work specified by some means is assigned to resources that complete the work.

Computer multitasking

Scheduling is fundamental to computation itself, and an intrinsic part of the execution model of a computer system; the concept of scheduling makes it possible to have computer multitasking with a single central processing unit (CPU).
In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU.

Preemption (computing)

It usually has the ability to pause a running process, move it to the back of the running queue and start a new process; such a scheduler is known as a preemptive scheduler, otherwise it is a cooperative scheduler.
It is normally carried out by a privileged task or part of the system known as a preemptive scheduler, which has the power to preempt, or interrupt, and later resume, other tasks in the system.
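The difference can be illustrated with a small simulation (a hedged sketch in Python, not any particular kernel's scheduler): tasks in a ready queue either run to completion when no quantum is given (cooperative) or are paused and requeued after a fixed quantum (preemptive). The task names and times are invented for illustration.

    from collections import deque

    def simulate(tasks, quantum=None):
        """Simulate a ready queue of (name, remaining_time) pairs.

        quantum=None models a cooperative scheduler: each task keeps the CPU
        until it finishes (i.e. gives it up voluntarily).  A numeric quantum
        models a preemptive scheduler: after `quantum` time units the running
        task is paused and moved to the back of the ready queue.
        """
        ready = deque(tasks)
        trace = []
        while ready:
            name, remaining = ready.popleft()
            run = remaining if quantum is None else min(quantum, remaining)
            trace.append((name, run))
            remaining -= run
            if remaining > 0:            # preempted before completion
                ready.append((name, remaining))
        return trace

    print(simulate([("A", 5), ("B", 2)]))             # cooperative: A runs to completion, then B
    print(simulate([("A", 5), ("B", 2)], quantum=2))  # preemptive: tasks alternate in quantum-sized slices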

Load balancing (computing)

Schedulers are often implemented so they keep all computer resources busy (as in load balancing), allow multiple users to share system resources effectively, or to achieve a target quality of service.
Numerous scheduling algorithms, also called load-balancing methods, are used by load balancers to determine which back-end server to send a request to. Simple algorithms include random choice, round robin, or least connections.
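The three simple policies just named can be sketched in a few lines of Python; the back-end names and connection counts below are hypothetical.

    import random
    from itertools import cycle

    servers = ["backend-1", "backend-2", "backend-3"]              # hypothetical back ends
    active = {"backend-1": 12, "backend-2": 3, "backend-3": 7}     # current connection counts

    def pick_random():
        return random.choice(servers)

    rr = cycle(servers)
    def pick_round_robin():
        return next(rr)

    def pick_least_connections():
        return min(servers, key=lambda s: active[s])

    print(pick_random(), pick_round_robin(), pick_least_connections())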

Job scheduler

In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.
Job scheduling should not be confused with process scheduling, which is the assignment of currently running processes to CPUs by the operating system.

System call

The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) after a clock interrupt, an I/O interrupt, an operating system call or another form of signal.
In computing, a system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on. This may include hardware-related services (for example, accessing a hard disk drive), creation and execution of new processes, and communication with integral kernel services such as process scheduling.
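As a minimal illustration, Python's os module exposes thin wrappers around such system calls on Unix-like systems: os.getpid() wraps the getpid() call, and os.sched_yield() wraps sched_yield(), which asks the kernel's scheduler to give the CPU to another ready task.

    import os

    print("running as process", os.getpid())   # wraps the getpid() system call
    os.sched_yield()                            # wraps sched_yield(): voluntarily give up the CPU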

Thread (computing)

The work may be virtual computation elements such as threads, processes or data flows, which are in turn scheduled onto hardware resources such as processors, network links or expansion cards. Scheduling disciplines are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc. During a context switch, the dispatcher saves the state (also known as the context) of the process or thread that was previously running and then loads the initial or previously saved state of the new process.
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.

Operating system

Scheduling disciplines are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.

Coscheduling

For example, in concurrent systems, coscheduling of interacting processes is often required to prevent them from blocking due to waiting on each other.
Coscheduling is the principle for concurrent systems of scheduling related processes to run on different processors at the same time (in parallel).

Real-time computing

In real-time environments, such as embedded systems for automatic control in industry (for example robotics), the scheduler must also ensure that processes can meet deadlines; this is crucial for keeping the system stable.
By comparison, the programmable interrupt controller of Intel CPUs (8086..80586) introduces a very large latency, and the Windows operating system is not a real-time operating system; nor does it allow a program to take over the CPU completely and use its own scheduler, short of dropping to native machine language and bypassing all interrupting Windows code.
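The text above does not name a specific policy, but one classic approach to deadline scheduling is earliest deadline first (EDF): among the ready tasks, always run the one whose deadline is nearest. A minimal sketch with invented task names and deadlines:

    def earliest_deadline_first(ready):
        """Pick the ready task with the nearest absolute deadline.
        `ready` is a list of (task_name, absolute_deadline) pairs."""
        return min(ready, key=lambda t: t[1])

    print(earliest_deadline_first([("motor_ctrl", 12), ("logger", 50), ("sensor_poll", 8)]))
    # -> ('sensor_poll', 8)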

Round-robin scheduling

The simplest best-effort scheduling algorithms are round-robin, fair queuing (a max-min fair scheduling algorithm), proportionally fair scheduling and maximum throughput.
Round-robin (RR) is one of the algorithms employed by process and network schedulers in computing.
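As a network-scheduling illustration (a simplified sketch, not a specific router implementation), round-robin can be pictured as visiting each flow's queue in turn and sending one packet per visit:

    from collections import deque

    def round_robin(flows):
        """Send one packet from each non-empty flow queue per pass."""
        sent = []
        while any(flows.values()):
            for name, queue in flows.items():
                if queue:
                    sent.append((name, queue.popleft()))
        return sent

    flows = {"flow-A": deque(["a1", "a2", "a3"]), "flow-B": deque(["b1"])}
    print(round_robin(flows))   # interleaves flow-A and flow-B packets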

Fair queuing

The simplest best-effort scheduling algorithms are round-robin, fair queuing (a max-min fair scheduling algorithm), proportionally fair scheduling and maximum throughput.
Fair queuing is a family of scheduling algorithms used in some process and network schedulers.
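A much-simplified sketch of the idea: each packet is stamped with a per-flow virtual finish time and transmitted in finish-time order, so a flow with small packets is not starved by a flow with large ones. (Real implementations also maintain a global virtual clock, which is omitted here.)

    def fair_queue(packets):
        """packets: list of (flow, size).  Returns packets in transmit order."""
        finish, stamped = {}, []
        for seq, (flow, size) in enumerate(packets):
            finish[flow] = finish.get(flow, 0) + size   # per-flow virtual finish time
            stamped.append((finish[flow], seq, flow, size))
        stamped.sort()                                   # serve smallest finish time first
        return [(flow, size) for _, _, flow, size in stamped]

    print(fair_queue([("bulk", 1500), ("bulk", 1500), ("voip", 200), ("voip", 200)]))
    # the small "voip" packets are sent before the large "bulk" backlog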

Quality of service

Schedulers are often implemented so they keep all computer resources busy (as in load balancing), allow multiple users to share system resources effectively, or to achieve a target quality of service.
Differentiated services ("DiffServ") implements the prioritized model. DiffServ marks packets according to the type of service they desire. In response to these markings, routers and switches use various queueing strategies to tailor performance to expectations. Differentiated services code point (DSCP) markings use the first 6 bits in the ToS field (now renamed as the DS Byte) of the IP(v4) packet header.

Process (computing)

The work may be virtual computation elements such as threads, processes or data flows, which are in turn scheduled onto hardware resources such as processors, network links or expansion cards. Scheduling disciplines are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc. During a context switch, the dispatcher saves the state (also known as the context) of the process or thread that was previously running and then loads the initial or previously saved state of the new process.
First, the process is "created" by being loaded from a secondary storage device (hard disk drive, CD-ROM, etc.) into main memory. After that the process scheduler assigns it the "waiting" state.
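These states form a small state machine; a minimal sketch follows (the exact state names and transitions vary between operating systems and textbooks):

    from enum import Enum, auto

    class ProcState(Enum):
        CREATED = auto()      # loaded from secondary storage into main memory
        WAITING = auto()      # waiting in the ready queue for the CPU
        RUNNING = auto()      # currently executing on a CPU
        BLOCKED = auto()      # waiting for I/O or another external event
        TERMINATED = auto()

    # legal transitions, as a simple adjacency map
    TRANSITIONS = {
        ProcState.CREATED: {ProcState.WAITING},
        ProcState.WAITING: {ProcState.RUNNING},
        ProcState.RUNNING: {ProcState.WAITING, ProcState.BLOCKED, ProcState.TERMINATED},
        ProcState.BLOCKED: {ProcState.WAITING},
    }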

Proportionally fair

The simplest best-effort scheduling algorithms are round-robin, fair queuing (a max-min fair scheduling algorithm), proportionally fair scheduling and maximum throughput.
Proportional fair is a compromise-based scheduling algorithm.
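In its usual form the scheduler picks, at each slot, the user with the highest ratio of currently achievable rate to smoothed average throughput. A sketch with invented rates (the tuning exponents that trade throughput against fairness are omitted):

    def proportional_fair(users):
        """users maps a name to (instantaneous_rate, average_throughput).
        Pick the user with the highest rate/average ratio."""
        return max(users, key=lambda u: users[u][0] / users[u][1])

    users = {"near_user": (10.0, 8.0), "cell_edge_user": (2.0, 0.5)}
    print(proportional_fair(users))   # -> 'cell_edge_user' (ratio 4.0 beats 1.25)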

FIFO (computing and electronics)

In packet-switched computer networks and other statistical multiplexing, the notion of a scheduling algorithm is used as an alternative to first-come first-served queuing of data packets.
FCFS is also the jargon term for the FIFO operating system scheduling algorithm, which gives every process central processing unit (CPU) time in the order in which it is demanded.
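A FIFO/FCFS CPU scheduler is essentially a queue ordered by arrival; the sketch below simply replays arrival order, using hypothetical process names and burst times:

    from collections import deque

    arrivals = deque([("P1", 24), ("P2", 3), ("P3", 3)])   # (process, burst time)

    clock, schedule = 0, []
    while arrivals:
        name, burst = arrivals.popleft()          # first come, first served
        schedule.append((name, clock, clock + burst))
        clock += burst

    print(schedule)   # [('P1', 0, 24), ('P2', 24, 27), ('P3', 27, 30)]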

Statistical time-division multiplexing

In packet-switched computer networks and other statistical multiplexing, the notion of a scheduling algorithm is used as an alternative to first-come first-served queuing of data packets.
Alternatively, the packets may be delivered according to some scheduling discipline for fair queuing or for differentiated and/or guaranteed quality of service.

Supercomputer

Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers, and render farms.
While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present.

Context switch

During a context switch, the dispatcher saves the state (also known as the context) of the process or thread that was previously running; the dispatcher then loads the initial or previously saved state of the new process.
Most commonly, within some scheduling scheme, one process must be switched out of the CPU so another process can run.
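Python generators give a compact, purely cooperative analogy: each suspended generator frame plays the role of a saved context, and the dispatcher loop below resumes whichever one it selects next.

    def task(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                          # suspend here; local state is preserved

    ready = [task("A", 2), task("B", 2)]   # two "threads" with saved contexts
    while ready:
        current = ready.pop(0)             # dispatcher picks the next context
        try:
            next(current)                  # restore its state and run until it yields again
            ready.append(current)
        except StopIteration:
            pass                           # task finished; drop its context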

Shortest job next

Similar to shortest job first (SJF).
Shortest job next (SJN), also known as shortest job first (SJF) or shortest process next (SPN), is a scheduling policy that selects for execution the waiting process with the smallest execution time.
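The selection rule itself is a one-liner (non-preemptive form, with hypothetical burst-time estimates; in practice the execution time must be estimated, for example from past behaviour):

    def shortest_job_next(ready):
        """Pick the waiting process with the smallest (estimated) execution time.
        `ready` is a list of (process_name, estimated_burst) pairs."""
        return min(ready, key=lambda p: p[1])

    print(shortest_job_next([("P1", 24), ("P2", 3), ("P3", 7)]))   # -> ('P2', 3)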

Execution model

Scheduling is fundamental to computation itself, and an intrinsic part of the execution model of a computer system; the concept of scheduling makes it possible to have computer multitasking with a single central processing unit (CPU).
Every programming language has an execution model, which determines the manner in which the units of work (indicated by program syntax) are scheduled for execution.

Computer cluster

Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers, and render farms.
When a large multi-user cluster needs to access very large amounts of data, task scheduling becomes a challenge.

Multilevel feedback queue

For example, Windows NT/XP/Vista uses a multilevel feedback queue, a combination of fixed-priority preemptive scheduling, round-robin, and first in, first out algorithms.
In computer science, a multilevel feedback queue is a scheduling algorithm.
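A rough sketch of the general idea (not the Windows implementation): several FIFO queues with increasing quanta, where a job that consumes its whole quantum is demoted to the next queue, and the dispatcher always serves the highest non-empty queue.

    from collections import deque

    def mlfq(jobs, quanta=(2, 4, 8)):
        """jobs: list of (name, remaining_time).  Returns the execution trace."""
        queues = [deque() for _ in quanta]
        for job in jobs:
            queues[0].append(job)                 # new jobs enter the top queue
        trace = []
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
            name, remaining = queues[level].popleft()
            run = min(quanta[level], remaining)
            trace.append((name, level, run))
            remaining -= run
            if remaining > 0:                     # used its full quantum: demote
                queues[min(level + 1, len(queues) - 1)].append((name, remaining))
        return trace

    print(mlfq([("interactive", 3), ("batch", 20)]))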

Turnaround time

Turnaround time, waiting time and response time depend on the order of arrival and can be high for the same reasons given above.
Turnaround time is one of the metrics used to evaluate an operating system's scheduling algorithms.
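Turnaround time is simply completion time minus arrival time; given per-process arrival, start, and completion times (invented numbers below), the related metrics fall out directly:

    # (process, arrival, start, completion) -- hypothetical values
    procs = [("P1", 0, 0, 24), ("P2", 1, 24, 27), ("P3", 2, 27, 30)]

    for name, arrival, start, completion in procs:
        turnaround = completion - arrival     # time from submission to completion
        waiting = start - arrival             # time spent waiting before first execution
        print(name, turnaround, waiting)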

Work-conserving scheduler

A work-conserving scheduler is a scheduler that always tries to keep the scheduled resources busy, if there are submitted jobs ready to be scheduled.
Similarly, when referring to CPU scheduling, i.e. threads or processes scheduled over one or more available processors or cores, a work-conserving scheduler ensures that processors/cores are not idle if there are processes/threads ready for execution.
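The defining property can be stated as an invariant over a schedule trace: at no sampled instant is the resource idle while jobs are waiting. A toy check, for illustration only:

    def is_work_conserving(trace):
        """trace: list of (resource_busy: bool, jobs_waiting: int) samples.
        A work-conserving schedule never shows an idle sample with jobs waiting."""
        return all(busy or waiting == 0 for busy, waiting in trace)

    print(is_work_conserving([(True, 2), (True, 0), (False, 0)]))   # True
    print(is_work_conserving([(False, 1), (True, 0)]))              # False: idle while a job waits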

Processor affinity

In SMP (symmetric multiprocessing) systems, processor affinity is considered to increase overall system performance, even if it may cause a process itself to run more slowly.
This can be viewed as a modification of the native central queue scheduling algorithm in a symmetric multiprocessing operating system.
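On Linux, affinity can be queried and pinned per process through the sched_getaffinity and sched_setaffinity system calls, which Python exposes directly (Linux-only; the CPU numbers below are examples):

    import os

    print(os.sched_getaffinity(0))     # CPUs the current process may run on
    os.sched_setaffinity(0, {0, 1})    # pin this process to CPUs 0 and 1 (example set)
    print(os.sched_getaffinity(0))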