# Reduce (parallel pattern)

Reduce is a collective communication primitive used in the context of a parallel programming model to combine multiple vectors into one, using an associative binary operator ⊕.
## Related Articles

### MapReduce

The reduction of sets of elements is an integral part of programming models such as MapReduce, where a function is applied (mapped) to all elements before they are reduced.
A MapReduce program is composed of a map procedure (or method), which performs filtering and sorting (such as sorting students by first name into queues, one queue for each name), and a reduce method, which performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
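The map-then-reduce flow described above can be sketched in plain Python. This is a minimal illustration, not a MapReduce framework: the student names and counts are hypothetical toy data, and the map, sort, and reduce phases that a real system would distribute are run sequentially here.

```python
from itertools import groupby

# Toy data: student names. The "map" step emits (key, 1) pairs, the sort
# step groups equal keys (the "queues"), and the "reduce" step sums the
# counts per key, yielding name frequencies as in the example above.
students = ["Ada", "Bob", "Ada", "Cid", "Bob", "Ada"]

mapped = sorted((name, 1) for name in students)           # map + sort phase
reduced = {key: sum(count for _, count in group)          # reduce phase
           for key, group in groupby(mapped, key=lambda kv: kv[0])}

print(reduced)  # {'Ada': 3, 'Bob': 2, 'Cid': 1}
```

Because summation is associative, each queue could be counted on a different processing unit and the partial counts merged afterwards.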

### Parallel programming model


### Operator associativity


### Binary operation


### Function (mathematics)


### Map (higher-order function)


### Parallel algorithm

Other parallel algorithms use reduce as a primary operation to solve more complex problems.

### Message Passing Interface

The Message Passing Interface implements it in the operations MPI_Reduce and MPI_Allreduce, with the difference that the result is available at one (root) processing unit or at all of them.
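The communication pattern behind a reduce-to-root operation can be sketched without real MPI. The following is a simulation in plain Python, assuming a binomial-tree schedule (one common implementation strategy, not the only one): ranks and message rounds are modeled by array indices and loop iterations.

```python
# Simulated binomial-tree reduction over p "ranks" (no real MPI here).
# In each round, every surviving rank combines its partial result with
# that of a partner 'step' positions away; after ceil(log2 p) rounds the
# root (rank 0) holds the combined value, as with a reduce-to-root.
# An allreduce-style operation would additionally broadcast the result
# back so every rank ends up with it.
def tree_reduce(values, op):
    partial = list(values)          # partial[r] = current value at rank r
    p, step = len(partial), 1
    while step < p:
        for r in range(0, p - step, 2 * step):
            partial[r] = op(partial[r], partial[r + step])  # receive + combine
        step *= 2
    return partial[0]               # result lives at the root only

print(tree_reduce([1, 2, 3, 4, 5], lambda a, b: a + b))  # 15
```

Note that the combines always keep operands in index order, so the schedule relies only on associativity, never on commutativity.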

### Associative property

Formally, reduce takes an associative (but not necessarily commutative) operator ⊕, which can be evaluated in constant time, and an input set of p vectors with m elements each.
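String concatenation is a convenient example of an operator that is associative but not commutative. The sketch below, using Python's `functools.reduce` on made-up data, shows why associativity alone is enough for parallel evaluation: the input can be regrouped into chunks reduced independently, as long as operand order is preserved.

```python
from functools import reduce

concat = lambda a, b: a + b        # associative, but NOT commutative

data = ["re", "du", "ce", "d!"]

# Sequential left fold.
sequential = reduce(concat, data)

# "Parallel" evaluation: reduce two halves independently, then combine.
# Associativity guarantees this regrouping does not change the result,
# even though swapping operands (commuting them) would.
left, right = data[:2], data[2:]
parallel = concat(reduce(concat, left), reduce(concat, right))

print(sequential, parallel)  # reduced! reduced!
```

With a commutative operator such as addition, an implementation is additionally free to reorder operands, not just regroup them.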

### Commutative property


### Parallel random-access machine

Regarding parallel algorithms, there are two main models of parallel computation: the parallel random-access machine (PRAM), an extension of the RAM with memory shared between processing units, and the bulk synchronous parallel (BSP) computer, which takes communication and synchronization into account.

### Bulk synchronous parallel


### Synchronization (computer science)


### Time complexity

Both models have different implications for the time complexity; therefore, two algorithms are presented.

### Distributed memory

In contrast to the PRAM algorithm, in the distributed memory model memory is not shared between processing units, and data has to be exchanged explicitly between units, resulting in communication overhead that is accounted for.

### Pipeline (computing)

Usually, linear pipelines split data or a task into smaller pieces and process them in stages.
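The splitting-into-stages idea can be sketched for a reduction: break the vector into blocks and feed them through two stages, one that produces blocks and one that combines them. This is a hypothetical sequential illustration; in a real pipeline the stages run concurrently, so transferring block i overlaps with combining block i−1.

```python
# A toy two-stage linear pipeline for a sum reduction (run sequentially
# here; a real pipeline would overlap the stages in time).
def blocks(vector, size):
    """Stage 1: split the vector into smaller pieces ("transfer")."""
    for start in range(0, len(vector), size):
        yield vector[start:start + size]

def pipelined_sum(vector, block_size):
    """Stage 2: combine each arriving block into a running total."""
    total = 0
    for block in blocks(vector, block_size):
        total += sum(block)
    return total

print(pipelined_sum(list(range(10)), 3))  # 45
```

The benefit of pipelining shows up only when block transfer and combination can proceed at the same time, hiding part of the communication cost.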

### Fibonacci number

Instead of the binomial tree, a Fibonacci tree is used, which has the property that the heights of the trees rooted at its two children differ by one.

### Duplex (telecommunications)

The animation shows the execution of such an algorithm in a full-duplex communication model.

### Sorting algorithm

Some parallel sorting algorithms use reductions to handle very large data sets.

### Fold (higher-order function)

* Fold (higher-order function)