Fisher transformation

In statistics, hypotheses about the value of the population correlation coefficient ρ between variables X and Y can be tested using the Fisher transformation (aka Fisher z-transformation) applied to the sample correlation coefficient.
Related Articles

Pearson correlation coefficient

In statistics, hypotheses about the value of the population correlation coefficient ρ between variables X and Y can be tested using the Fisher transformation (aka Fisher z-transformation) applied to the sample correlation coefficient. While the Fisher transformation is mainly associated with the Pearson product-moment correlation coefficient for bivariate normal observations, it can also be applied to Spearman's rank correlation coefficient in more general cases.
In practice, confidence intervals and hypothesis tests relating to ρ are usually carried out using the Fisher transformation, the inverse hyperbolic tangent function (artanh) of r: z = artanh(r) = (1/2) ln((1 + r)/(1 − r)).
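As a quick sketch, the transformation and its equivalent logarithmic form can be computed with Python's standard library (the function name here is illustrative):

```python
import math

def fisher_z(r):
    """Fisher z-transformation of a sample correlation r in (-1, 1)."""
    return 0.5 * math.log((1 + r) / (1 - r))

r = 0.8
print(fisher_z(r))    # ≈ 1.0986 (= ln 3)
print(math.atanh(r))  # identical: artanh(r) and the log form agree
```

`math.atanh` is the standard library's inverse hyperbolic tangent, so either form can be used interchangeably.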

Spearman's rank correlation coefficient

While the Fisher transformation is mainly associated with the Pearson product-moment correlation coefficient for bivariate normal observations, it can also be applied to Spearman's rank correlation coefficient in more general cases.
Another approach parallels the use of the Fisher transformation in the case of the Pearson product-moment correlation coefficient.

Data transformation (statistics)

Examples of variance-stabilizing transformations are the Fisher transformation for the sample correlation coefficient, the square root transformation or Anscombe transform for Poisson data (count data), the Box–Cox transformation for regression analysis and the arcsine square root transformation or angular transformation for proportions (binomial data).

Statistics

In statistics, hypotheses about the value of the population correlation coefficient ρ between variables X and Y can be tested using the Fisher transformation (aka Fisher z-transformation) applied to the sample correlation coefficient.

Covariance

Here cov(X, Y) stands for the covariance between the variables X and Y, and σ stands for the standard deviation of the respective variable.

Standard deviation

Here cov(X, Y) stands for the covariance between the variables X and Y, and σ stands for the standard deviation of the respective variable.

Natural logarithm

where "ln" is the natural logarithm function and "arctanh" is the inverse hyperbolic tangent function.

Inverse hyperbolic functions

where "ln" is the natural logarithm function and "arctanh" is the inverse hyperbolic tangent function.

Multivariate normal distribution

If (X, Y) has a bivariate normal distribution with correlation ρ and the pairs (X_i, Y_i) are independent and identically distributed, then z is approximately normally distributed with mean (1/2) ln((1 + ρ)/(1 − ρ)) and standard error 1/√(N − 3), where N is the sample size.

Independent and identically distributed random variables

If (X, Y) has a bivariate normal distribution with correlation ρ and the pairs (X_i, Y_i) are independent and identically distributed, then z is approximately normally distributed with mean (1/2) ln((1 + ρ)/(1 − ρ)) and standard error 1/√(N − 3), where N is the sample size.

Normal distribution

If (X, Y) has a bivariate normal distribution with correlation ρ and the pairs (X_i, Y_i) are independent and identically distributed, then z is approximately normally distributed with mean (1/2) ln((1 + ρ)/(1 − ρ)) and standard error 1/√(N − 3), where N is the sample size.

Standard error

and standard error 1/√(N − 3), where N is the sample size.
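Those two approximations (mean artanh(ρ), standard error 1/√(N − 3)) give a simple z-test of H0: ρ = ρ0. A minimal sketch, with an illustrative function name not taken from any particular library:

```python
import math

def fisher_z_test(r, rho0, n):
    """Two-sided z-test of H0: rho = rho0, assuming bivariate normal data.

    Under H0, artanh(r) is approximately normal with mean artanh(rho0)
    and standard error 1/sqrt(n - 3).
    """
    z = (math.atanh(r) - math.atanh(rho0)) * math.sqrt(n - 3)
    p = math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))
    return z, p

z, p = fisher_z_test(r=0.5, rho0=0.0, n=28)
```

With r = 0.5 from 28 pairs, the test rejects ρ = 0 at the usual 5% level.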

Confidence interval

The statistic z can be used to construct a large-sample confidence interval for ρ using standard normal theory and derivations.
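A minimal sketch of that construction (function name illustrative): transform to the z scale, add a normal critical-value margin, and back-transform with tanh, the inverse of artanh:

```python
import math

def fisher_ci(r, n, conf_z=1.96):
    """Approximate large-sample confidence interval for rho.

    conf_z is the standard normal critical value (1.96 for ~95%).
    """
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    # back-transform the endpoints to the correlation scale
    return math.tanh(z - conf_z * se), math.tanh(z + conf_z * se)

lo, hi = fisher_ci(r=0.6, n=30)
```

Note that the resulting interval is asymmetric about r on the original scale, even though it is symmetric on the z scale.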

Variance-stabilizing transformation

The Fisher transformation is an approximate variance-stabilizing transformation for r when X and Y follow a bivariate normal distribution.
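A small Monte Carlo sketch (pure standard library; all names are ad hoc) illustrates the stabilization: the sampling variance of r itself shrinks as |ρ| grows, while the variance of z = artanh(r) stays near 1/(N − 3) regardless of ρ:

```python
import math
import random

def sample_r(rho, n, rng):
    """Sample correlation from n bivariate normal pairs with correlation rho."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    ys = [rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def var(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

rng = random.Random(0)
n, reps = 50, 2000
for rho in (0.0, 0.5, 0.9):
    zs = [math.atanh(sample_r(rho, n, rng)) for _ in range(reps)]
    print(rho, var(zs))  # each close to 1/(n - 3) ≈ 0.0213
```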

Ronald Fisher

The behavior of this transform has been extensively studied since Fisher introduced it in 1915.

Harold Hotelling

Hotelling in 1953 calculated the Taylor series expressions for the moments of z and several related statistics, and Hawkins in 1989 discovered the asymptotic distribution of z for data from a distribution with bounded fourth moments.

Asymptotic distribution

A similar result for the asymptotic distribution applies, but with a minor adjustment factor: see the latter article for details.

Meta-analysis

Meta-analysis (this transformation is used in meta-analysis for stabilizing the variance)
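A common fixed-effect pooling sketch (function name illustrative): transform each study's correlation to the z scale, weight by the inverse of its approximate variance (i.e., by n − 3), average, and back-transform:

```python
import math

def pooled_correlation(studies):
    """Pool (r, n) study results on the Fisher z scale with weights n - 3."""
    total_w = sum(n - 3 for _, n in studies)
    z_bar = sum((n - 3) * math.atanh(r) for r, n in studies) / total_w
    return math.tanh(z_bar)

print(pooled_correlation([(0.30, 40), (0.45, 120), (0.38, 75)]))
```

Because tanh is monotone, the pooled estimate always lies between the smallest and largest study correlations.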

Gene co-expression network

Another approach is to use Fisher's z-transformation, which computes a z-score for each correlation based on the number of samples.
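For example, correlations estimated from datasets of different sizes become comparable on the z scale, where each has an approximate variance of 1/(n − 3). A sketch of a two-sample comparison (names are not from any specific co-expression package):

```python
import math

def compare_correlations(r1, n1, r2, n2):
    """z-statistic for H0: rho1 == rho2, from two independent samples."""
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (math.atanh(r1) - math.atanh(r2)) / se

z = compare_correlations(r1=0.70, n1=100, r2=0.45, n2=80)
```

A |z| above the usual normal critical value suggests the two underlying correlations differ.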

Fisher

Fisher transformation, a transformation in statistics used to test some hypotheses