Computational complexity theory

computational complexity, complexity theory, complexity, intractable, computationally intractable, intractability, tractable, asymptotic complexity, time complexity, tractable problem
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other.
Related Articles

Model of computation

models of computation, model, models of computing
The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage.
In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how a set of outputs is computed given a set of inputs.

Computational complexity

amount of computation, complexity, computationally expensive
The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage.
The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory.

Travelling salesman problem

traveling salesman problem, TSP, traveling salesman
To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities?
A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem—that is, the output isn't just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem.
In the theory of computational complexity, the decision version of the TSP (where, given a length L, the task is to decide whether the graph has any tour shorter than L) belongs to the class of NP-complete problems.
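As a minimal sketch of what deciding such an instance involves (illustrative code, not taken from the source; the city names, distances, and the distance bound below are made-up placeholders), a brute-force decider simply tries every tour. The factorial number of permutations is exactly why this approach does not scale, which is the point of studying the problem's complexity.

    from itertools import permutations

    def tsp_decision(dist, bound):
        """Decide: is there a tour visiting every city exactly once whose
        total length is at most `bound`?  Brute force over all (n-1)! tours."""
        cities = list(dist)
        start, rest = cities[0], cities[1:]
        for order in permutations(rest):
            tour = [start, *order, start]
            length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if length <= bound:
                return True   # a witness tour exists, so the answer is "yes"
        return False          # no tour meets the bound, so the answer is "no"

    # Toy symmetric distance table in kilometres (invented values).
    dist = {
        "A": {"A": 0, "B": 600, "C": 500, "D": 300},
        "B": {"A": 600, "B": 0, "C": 400, "D": 700},
        "C": {"A": 500, "B": 400, "C": 0, "D": 200},
        "D": {"A": 300, "B": 700, "C": 200, "D": 0},
    }
    print(tsp_decision(dist, 2000))   # True: the output is only yes/no, not a route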

Computability theory

recursion theory, computable, computability
Closely related fields in theoretical computer science are analysis of algorithms and computability theory.
Although there is considerable overlap in terms of knowledge and methods, mathematical recursion theorists study the theory of relative computability, reducibility notions, and degree structures; those in the computer science field focus on the theory of subrecursive hierarchies, formal methods, and formal languages.

Analysis of algorithms

computational complexity, complexity analysis, computationally expensive
Closely related fields in theoretical computer science are analysis of algorithms and computability theory.
Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem.

P versus NP problem

P ≠ NP, P = NP, complexity classes P and NP
One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, is dedicated to the field of computational complexity.
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem.
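In symbols (a standard textbook formulation rather than a quotation from the source), NP can be characterized by polynomial-time verifiable certificates, and the Millennium Prize question asks whether the containment of P in NP is strict:

    L \in \mathrm{NP} \;\iff\; \exists \text{ a polynomial } p \text{ and a polynomial-time verifier } V:\;
        x \in L \Leftrightarrow \exists w,\ |w| \le p(|x|),\ V(x, w) = 1

    \mathrm{P} \subseteq \mathrm{NP}, \qquad \text{open question:}\quad \mathrm{P} \overset{?}{=} \mathrm{NP}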

Decision problem

undecidable, decision problems, decision procedure
Decision problems are one of the central objects of study in computational complexity theory.
A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem—that is, the output isn't just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem.
In computability theory and computational complexity theory, a decision problem is a problem that can be posed as a yes-no question of the input values.

Function problem

A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem—that is, the output isn't just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem.
The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc.
In computational complexity theory, a function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem.
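To make the contrast concrete (an illustrative sketch, not an example from the source), integer factorization can be posed both ways: the decision variant only answers yes or no, while the function variant must output an actual factor. Both routines below use naive trial division, which is not efficient in the bit length of the input.

    def has_factor_below(n, k):
        """Decision problem: does n have a nontrivial divisor d with 2 <= d <= k?"""
        return any(n % d == 0 for d in range(2, k + 1))

    def some_factor(n):
        """Function problem: output a concrete nontrivial divisor of n, or None if n is prime."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return None

    print(has_factor_below(91, 10))   # True: just a yes/no answer
    print(some_factor(91))            # 7: an actual factor, i.e. a more complex output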

Formal language

formal language theory, formal languages, language
A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration.
In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power.
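For instance (an illustrative example, not one given in the source), the decision problem "is the input, read as a binary numeral, divisible by three?" corresponds to the formal language of exactly those strings, and a membership-deciding algorithm is a single left-to-right scan:

    def in_language(s):
        """Decide membership in L = { binary strings whose value is divisible by 3 }."""
        if s == "" or any(c not in "01" for c in s):
            return False                      # not a well-formed instance
        remainder = 0
        for c in s:                           # one pass, constant memory: L is regular
            remainder = (remainder * 2 + int(c)) % 3
        return remainder == 0                 # "yes" exactly for members of the language

    print(in_language("110"))   # True  (6 is divisible by 3)
    print(in_language("101"))   # False (5 is not)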

Alternating Turing machine

alternation, alternating Turing machines, existential state
Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines.
In computational complexity theory, an alternating Turing machine (ATM) is a non-deterministic Turing machine (NTM) with a rule for accepting computations that generalizes the rules used in the definition of the complexity classes NP and co-NP.
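Sketched in symbols (a standard formulation, not quoted from the source): a nondeterministic machine uses only existential branching, and the acceptance rules for the two kinds of ATM states generalize this to a universal mode as well.

    \text{existential configuration } c \text{ accepts} \;\iff\; \text{some successor of } c \text{ accepts}
    \text{universal configuration } c \text{ accepts} \;\iff\; \text{every successor of } c \text{ accepts}

An NTM is the special case with only existential configurations; a machine with only universal configurations captures co-NP-style acceptance, and alternating the two yields the ATM.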

Church–Turing thesis

Church's thesis as a definition, Church–Turing theorem
Indeed, this is the statement of the Church–Turing thesis.
These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics.

DTIME

deterministic, DTIME(f(n)), running time
For instance, the set of problems solvable within time f(n) on a deterministic Turing machine is then denoted by DTIME(f(n)).
In computational complexity theory, DTIME (or TIME) is the computational resource of computation time for a deterministic Turing machine.
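Spelled out (a standard formulation rather than a quotation), with deterministic time classes built as unions of DTIME bounds:

    \mathrm{DTIME}(f(n)) = \{\, L \mid L \text{ is decided by some deterministic Turing machine within } O(f(n)) \text{ steps on inputs of length } n \,\}

    \mathrm{P} = \bigcup_{k \ge 1} \mathrm{DTIME}(n^k), \qquad
    \mathrm{EXPTIME} = \bigcup_{k \ge 1} \mathrm{DTIME}\bigl(2^{n^k}\bigr)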

Decision tree model

algebraic decision tree, decision tree complexity, query complexity
Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity.
In computational complexity and communication complexity theories, the decision tree model is the model of computation or communication in which an algorithm or communication process is modeled as a decision tree, i.e., a sequence of branching operations based on comparisons of some quantities, with each comparison assigned unit computational cost.
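As a rough illustration (not an example from the source), binary search over a sorted array reads naturally as a decision tree: each comparison is one branching node of unit cost, and the number of comparisons on the worst input, about log2 n, is the algorithm's decision-tree (query) complexity.

    def binary_search(sorted_xs, target):
        """Each loop iteration performs one comparison, i.e. one node of the decision tree."""
        comparisons = 0
        lo, hi = 0, len(sorted_xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            comparisons += 1
            if sorted_xs[mid] == target:
                return mid, comparisons
            elif sorted_xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, comparisons   # depth of the leaf reached, at most ~log2(len(sorted_xs)) + 1

    print(binary_search(list(range(64)), 37))   # (37, 5): found after 5 comparisons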

Turing machine

deterministic Turing machine, universal computer, universal computation
Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines.
Studying their abstract properties yields many insights into computer science and complexity theory.

P (complexity)

P, polynomial time, polynomial-time
This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time.
In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class.
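A small sketch (illustrative, not from the source): s-t reachability in a directed graph is a decision problem in P, since breadth-first search decides it in time polynomial (indeed roughly linear) in the size of the input graph.

    from collections import deque

    def reachable(adj, s, t):
        """Decide whether t can be reached from s in the directed graph `adj`.
        Runs in O(V + E) time, which places the problem in P."""
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    adj = {"s": ["a", "b"], "a": ["t"], "b": [], "t": []}
    print(reachable(adj, "s", "t"))   # True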

Algorithm

algorithms, computer algorithm, algorithm design
A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration.
Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, arbitrariness of the choice of a model remains. It is at this point that the notion of simulation enters".

Big O notation

O, big-O notation, Θ
The complexity of an algorithm is often expressed using big O notation.
In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows.
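Formally (a standard definition, not quoted from the source):

    f(n) = O(g(n)) \;\iff\; \exists\, c > 0,\ n_0 \in \mathbb{N}:\ \forall n \ge n_0,\ f(n) \le c \cdot g(n)

    \text{for example, } 3n^2 + 5n + 7 = O(n^2) \quad (\text{take } c = 15,\ n_0 = 1)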

Complexity

complex, detail, complexity theory
Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource.
In computational complexity theory, the amount of resources required for the execution of algorithms is studied. The most popular types of computational complexity are the time complexity of a problem, the number of steps that the most efficient algorithm takes to solve an instance as a function of the input size (usually measured in bits), and the space complexity of a problem, the volume of memory (e.g., cells of the tape) that the most efficient algorithm uses to solve an instance, again as a function of the input size. This allows computational problems to be classified by complexity class (such as P, NP, etc.). An axiomatic approach to computational complexity was developed by Manuel Blum. It allows one to deduce many properties of concrete computational complexity measures, such as time complexity or space complexity, from properties of axiomatically defined measures.
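For reference (a standard statement of the Blum axioms, not a quotation from the source): a complexity measure is a pair (\varphi, \Phi), where \varphi enumerates the partial computable functions and \Phi_i(x) is the resource spent by the i-th program on input x, subject to two axioms:

    \Phi_i(x) \text{ is defined} \;\iff\; \varphi_i(x) \text{ is defined}

    \{\, (i, x, t) \mid \Phi_i(x) = t \,\} \text{ is a decidable set}

Both the running time and the space usage of Turing machines satisfy these axioms, which is why results proved from them apply to time and space complexity alike.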

Boolean circuit

boolean circuits, circuit, circuits
The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on non-deterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc.
In computational complexity theory and circuit complexity, a Boolean circuit is a mathematical model for digital logic circuits.
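A toy illustration (not from the source): the majority-of-three function written as a small circuit of AND/OR gates and evaluated gate by gate in topological order. The gate names and dictionary encoding below are ad hoc choices for the sketch.

    def eval_circuit(gates, inputs):
        """Evaluate a Boolean circuit given as a list of gates in topological order.
        Each gate is (name, op, operand_names); wire values are stored by name."""
        ops = {"AND": lambda a, b: a and b, "OR": lambda a, b: a or b, "NOT": lambda a: not a}
        wires = dict(inputs)
        for name, op, args in gates:
            wires[name] = ops[op](*(wires[a] for a in args))
        return wires[gates[-1][0]]        # value on the output gate

    # MAJ(x, y, z) = (x AND y) OR (x AND z) OR (y AND z): 5 gates, depth 3.
    maj = [
        ("g1", "AND", ("x", "y")),
        ("g2", "AND", ("x", "z")),
        ("g3", "AND", ("y", "z")),
        ("g4", "OR",  ("g1", "g2")),
        ("out", "OR", ("g4", "g3")),
    ]
    print(eval_circuit(maj, {"x": True, "y": False, "z": True}))   # True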

BPP (complexity)

BPP, Bounded-error Probabilistic Polynomial, bounded error probability in polynomial time
Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines.
In computational complexity theory, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded away from 1/2 for all instances.
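To give the flavor of such classes (an illustrative sketch, not from the source), Freivalds' algorithm checks whether A·B = C by multiplying with random 0/1 vectors in O(n^2) time per trial. It never rejects a correct C, a wrong C survives each trial with probability at most 1/2, and repetition drives the error below any desired bound; the algorithm therefore sits in coRP, a subclass of BPP.

    import random

    def freivalds(A, B, C, trials=20):
        """Randomized check of A @ B == C: each trial costs O(n^2), and the
        overall error probability is at most 2**-trials on a wrong C."""
        n = len(A)
        def matvec(M, v):
            return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        for _ in range(trials):
            r = [random.randint(0, 1) for _ in range(n)]
            if matvec(A, matvec(B, r)) != matvec(C, r):
                return False   # certainly A @ B != C
        return True            # almost certainly A @ B == C

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(freivalds(A, B, [[19, 22], [43, 50]]))   # True: this is the real product
    print(freivalds(A, B, [[19, 22], [43, 51]]))   # False with overwhelming probability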

FP (complexity)

FP, the class of polynomial-time functions
The corresponding set of function problems is FP.
In computational complexity theory, the complexity class FP is the set of function problems which can be solved by a deterministic Turing machine in polynomial time; it is the function problem version of the decision problem class P. Roughly speaking, it is the class of functions that can be efficiently computed on classical computers without randomization.
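For instance (an illustrative example, not from the source), computing a greatest common divisor is a function problem in FP: Euclid's algorithm runs in a number of steps polynomial in the bit length of its inputs.

    def gcd(a, b):
        """Euclid's algorithm: O(log min(a, b)) iterations, i.e. polynomial in the input's bit length."""
        while b:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))   # 21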

Counting problem (complexity)

counting problem, counting complexity, counting complexity class
The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc.
In computational complexity theory and computability theory, a counting problem is a type of computational problem.

Circuit complexity

non-uniform, circuit, monotone circuit
Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity.
The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on non-deterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc.
Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing).
In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of Boolean circuits that compute them.

Random-access machine

random access machine, RAM machine model, RAM
Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata or any programming language can be computed on a Turing machine.
Together with the Turing machine and counter-machine models, the RAM and RASP models are used for computational complexity analysis.

Promise problem

Promise-UP, promised, restricted
The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc.
In computational complexity theory, a promise problem is a generalization of a decision problem where the input is promised to belong to a particular subset of all possible inputs.
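Formally (a standard formulation, not a quotation from the source), a promise problem is a pair of disjoint sets of instances, and only inputs satisfying the promise have to be answered correctly:

    (\Pi_{\mathrm{YES}}, \Pi_{\mathrm{NO}}), \qquad \Pi_{\mathrm{YES}}, \Pi_{\mathrm{NO}} \subseteq \Sigma^*, \qquad \Pi_{\mathrm{YES}} \cap \Pi_{\mathrm{NO}} = \emptyset

    \text{a solver must accept every } x \in \Pi_{\mathrm{YES}} \text{ and reject every } x \in \Pi_{\mathrm{NO}};
    \text{its behaviour on } x \notin \Pi_{\mathrm{YES}} \cup \Pi_{\mathrm{NO}} \text{ is unconstrained}

An ordinary decision problem is the special case in which the promise covers all inputs, i.e. \Pi_{\mathrm{YES}} \cup \Pi_{\mathrm{NO}} = \Sigma^*.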