# A report on data compression

Data compression is the process of encoding information using fewer bits than the original representation.

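As a minimal illustration (not tied to any particular codec), run-length encoding compresses runs of repeated symbols into (symbol, count) pairs:

```python
def rle_encode(s):
    """Encode a string as a list of [symbol, run-length] pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([ch, 1])      # start a new run
    return out

def rle_decode(pairs):
    """Reverse the encoding: repeat each symbol by its count."""
    return "".join(ch * n for ch, n in pairs)
```

For example, `rle_encode("aaabbc")` yields `[["a", 3], ["b", 2], ["c", 1]]`, and decoding restores the original string exactly (lossless compression).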

## Algorithm

An algorithm is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation.

Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques.
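A search algorithm such as binary search is a concrete instance of such a finite sequence of instructions (a sketch, assuming a sorted input list):

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2         # halve the search interval each step
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                        # the loop is guaranteed to terminate
```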

## Quantization (signal processing)

Quantization is the process of mapping input values from a large set to output values in a (countable) smaller set, often with a finite number of elements.

Rate–distortion optimized quantization is encountered in source coding for lossy data compression algorithms, where the purpose is to manage distortion within the limits of the bit rate supported by a communication channel or storage medium.
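A minimal sketch of a uniform scalar quantizer (the step size is an assumed parameter): a coarser step yields fewer distinct output values, lowering the bit rate at the cost of higher distortion.

```python
def quantize(x, step):
    """Map a real value to the nearest multiple of step (a countable set)."""
    return step * round(x / step)
```

The reconstruction error is bounded by half the step size, which is the rate–distortion knob in this toy model.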

## Motion JPEG 2000

Motion JPEG 2000 is a file format for motion sequences of JPEG 2000 images and associated audio, based on the MP4 and QuickTime formats.

In contrast to the original 1992 JPEG standard, which is a discrete cosine transform (DCT) based lossy compression format for static digital images, JPEG 2000 is a discrete wavelet transform (DWT) based compression standard that could be adapted for motion imaging video compression with the Motion JPEG 2000 extension.
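JPEG 2000's actual wavelet filters are more elaborate (the standard uses the CDF 5/3 and 9/7 filter banks), but one level of the simpler Haar DWT sketches the idea of splitting a signal into coarse averages and fine differences:

```python
def haar_step(signal):
    """One level of the Haar DWT: pairwise averages and differences."""
    avgs = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    diffs = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avgs, diffs

def haar_inverse(avgs, diffs):
    """Reconstruct the original signal from one Haar step."""
    out = []
    for s, d in zip(avgs, diffs):
        out.extend([s + d, s - d])
    return out
```

In a codec, the small difference coefficients are what get quantized and entropy-coded; the transform itself is perfectly invertible.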

## Delta encoding

Delta encoding is a way of storing or transmitting data in the form of differences between sequential data rather than complete files; more generally this is known as data differencing.

In video compression, delta frames can considerably reduce frame size and are used in virtually every video compression codec.
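A minimal sketch of delta encoding for a numeric sequence: store the first value, then only the successive differences, which are often small and compress well.

```python
def delta_encode(values):
    """Store the first value, then differences between neighbours."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    """Rebuild the original sequence by cumulative summation."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

For slowly varying data such as timestamps or sensor readings, the deltas cluster near zero and need far fewer bits than the raw values.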

## Data differencing

Data differencing is the process of producing a technical description of the difference between two sets of data – a source and a target.

Data compression can be seen as a special case of data differencing – data differencing consists of producing a difference given a source and a target, with patching producing a target given a source and a difference, while data compression consists of producing a compressed file given a target, and decompression consists of producing a target given only a compressed file.
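The diff/patch relationship above can be sketched with a toy byte-level differ (assuming, for simplicity, equal-length source and target; real tools such as VCDIFF-based differs handle insertions and deletions too):

```python
def diff(source, target):
    """Produce a patch: the positions and bytes where target differs from source."""
    assert len(source) == len(target)  # toy model: equal-length data only
    return [(i, t) for i, (s, t) in enumerate(zip(source, target)) if s != t]

def patch(source, delta):
    """Apply a patch to the source to reconstruct the target."""
    out = list(source)
    for i, t in delta:
        out[i] = t
    return bytes(out)
```

When source and target are similar, the patch is far smaller than the target itself; compression is the degenerate case where the source is empty.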

## Context-adaptive binary arithmetic coding

Context-adaptive binary arithmetic coding (CABAC) is a form of entropy encoding used in the H.264/MPEG-4 AVC and High Efficiency Video Coding (HEVC) standards.

CABAC is notable for providing much better compression than most other entropy encoding algorithms used in video encoding, and it is one of the key elements that provides the H.264/AVC encoding scheme with better compression capability than its predecessors.
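CABAC itself adds binarization, context modeling, and integer renormalization on top of arithmetic coding; a toy non-adaptive binary arithmetic coder (float-based, so workable only for short inputs) shows the core interval-subdivision idea:

```python
def encode(bits, p0):
    """Narrow [0, 1) around the input bits; p0 is the assumed probability of 0."""
    lo, hi = 0.0, 1.0
    for b in bits:
        mid = lo + (hi - lo) * p0    # split the interval in proportion to p0
        if b == 0:
            hi = mid                 # keep the lower (more probable) part
        else:
            lo = mid
    return (lo + hi) / 2             # any number inside the interval encodes bits

def decode(x, p0, n):
    """Recover n bits by replaying the same interval subdivision."""
    lo, hi = 0.0, 1.0
    out = []
    for _ in range(n):
        mid = lo + (hi - lo) * p0
        if x < mid:
            out.append(0)
            hi = mid
        else:
            out.append(1)
            lo = mid
    return out
```

Skewed probabilities (p0 far from 0.5) shrink the interval slowly for likely bits, which is why arithmetic coding approaches the entropy limit more closely than per-symbol codes.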

## Trade-off

A trade-off is a situational decision that involves diminishing or losing one quality, quantity, or property of a set or design in return for gains in other aspects.

By compressing an image, you can reduce transmission time/costs at the expense of CPU time to perform the compression and decompression. Depending on the compression method, this may also involve the tradeoff of a loss in image quality.
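With Python's standard `zlib` module, for example, the compression level makes this trade-off explicit: higher levels spend more CPU time in exchange for a smaller (or at worst equal) output.

```python
import zlib

data = b"data compression " * 1000       # highly redundant input

fast = zlib.compress(data, level=1)      # less CPU, typically larger output
small = zlib.compress(data, level=9)     # more CPU, smaller or equal output

assert zlib.decompress(fast) == data     # both roundtrip exactly (lossless)
assert zlib.decompress(small) == data
assert len(small) <= len(fast) < len(data)
```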

## Hadamard transform

The Hadamard transform is an example of a generalized class of Fourier transforms.

The Hadamard transform is also used in data encryption, as well as many signal processing and data compression algorithms, such as JPEG XR and MPEG-4 AVC.
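A sketch of the fast Walsh–Hadamard transform (unnormalized, input length a power of two); because the transform is its own inverse up to a factor of n, applying it twice returns n times the input:

```python
def fwht(vec):
    """Fast Walsh-Hadamard transform via butterfly passes (unnormalized)."""
    a = list(vec)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                # butterfly: sum and difference of paired elements
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a
```

Like the DCT and DWT, it concentrates a signal's energy into few coefficients, but needs only additions and subtractions, which is why it appears in fast codec building blocks.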

## Kullback–Leibler divergence

The Kullback–Leibler divergence is a type of statistical distance: a measure of how one probability distribution P is different from a second, reference probability distribution Q.

Just as absolute entropy serves as theoretical background for data compression, relative entropy serves as theoretical background for data differencing – the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the target given the source (minimum size of a patch).
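Relative entropy D(P||Q) = Σ p·log2(p/q) can be computed directly for finite distributions (assuming, by convention, that terms with p = 0 contribute nothing and that q > 0 wherever p > 0):

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(P||Q) in bits for finite distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

It is zero exactly when P and Q coincide, and otherwise positive: the extra bits per symbol paid for coding data from P with a code optimized for Q.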