Time series

time series analysis • time-series • time-series analysis
Machine learning. Artificial neural networks. Support vector machine. Fuzzy logic. Gaussian process. Hidden Markov model. Queueing theory analysis. Control chart. Shewhart individuals control chart. CUSUM chart. EWMA chart. Detrended fluctuation analysis. Dynamic time warping. Cross-correlation. Dynamic Bayesian network. Time-frequency analysis techniques: fast Fourier transform, continuous wavelet transform, short-time Fourier transform, chirplet transform, fractional Fourier transform. Chaotic analysis: correlation dimension, recurrence plots, recurrence quantification analysis, Lyapunov exponents. Entropy encoding. Univariate linear measures. Moment (mathematics). Spectral band power.
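Of the time-frequency techniques listed above, the fast Fourier transform is the easiest to demonstrate. A minimal sketch using NumPy, with a made-up two-tone test signal:

    import numpy as np

    # Toy signal: 4 Hz and 7 Hz sinusoids sampled at 64 Hz for one second.
    fs = 64
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

    spectrum = np.fft.rfft(x)                # FFT of a real-valued signal
    freqs = np.fft.rfftfreq(len(x), d=1/fs)  # frequency bin centers in Hz
    print(freqs[np.argmax(np.abs(spectrum))])  # dominant frequency: 4.0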

Neural network

neural networks • networks • artificial neural networks
Neural network software • Nonlinear system identification • Parallel Constraint Satisfaction Processes • Parallel distributed processing • Predictive analytics • Radial basis function network • Self-organizing map • Simulated reality • Support vector machine • Tensor product network • Time delay neural network. A typical application is function approximation, or regression analysis, including time series prediction and modeling.
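A minimal sketch of such function approximation, using scikit-learn's MLPRegressor (the target function, noise level, and network size are all made up):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Learn y = sin(x) from noisy samples (toy data).
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    net.fit(X, y)
    print(net.predict([[1.0]]))  # should land near sin(1.0) ≈ 0.84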

Random variable

random variables • random variation • random
Recording all these probabilities of output ranges of a real-valued random variable X yields the probability distribution of X. The probability distribution "forgets" about the particular probability space used to define X and only records the probabilities of various values of X. Such a probability distribution can always be captured by its cumulative distribution function and sometimes also using a probability density function, p_X. In measure-theoretic terms, we use the random variable X to "push-forward" the measure P on \Omega to a measure p_X on \mathbb{R}.
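Concretely, the pushforward measure is defined by (the standard measure-theoretic definition, not specific to this excerpt):

    p_X(B) = P(X^{-1}(B)) = P(\{\omega \in \Omega : X(\omega) \in B\}) for every Borel set B \subseteq \mathbb{R},

and the cumulative distribution function is recovered from it as F_X(x) = p_X((-\infty, x]) = P(X \le x).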

Data mining

data-mining • datamining • data mine
MEPX – a cross-platform tool for regression and classification problems based on a Genetic Programming variant. ML-Flex – a software package that enables users to integrate with third-party machine-learning packages written in any programming language, execute classification analyses in parallel across multiple computing nodes, and produce HTML reports of classification results. mlpack – a collection of ready-to-use machine learning algorithms written in the C++ language. NLTK (Natural Language Toolkit) – a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for the Python language. OpenNN – open neural networks library.
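A minimal NLTK usage sketch (assumes the package is installed and the tokenizer data has been downloaded; the sentence is made up):

    import nltk
    # One-time setup, depending on the NLTK version:
    # nltk.download("punkt")
    from nltk.tokenize import word_tokenize

    tokens = word_tokenize("Data mining extracts patterns from large data sets.")
    print(tokens)  # ['Data', 'mining', 'extracts', ...]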

Estimation theory

parameter estimation • estimation • estimated
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. When the data consist of multiple variables and one is estimating the relationship between them, estimation is known as regression analysis. In estimation theory, two approaches are generally considered: the probabilistic approach, which assumes the measured data are random with a probability distribution dependent on the parameters of interest, and the set-membership approach, which assumes the measured data vector belongs to a set that depends on the parameter vector.
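Under the probabilistic approach, a common estimator is the maximum-likelihood estimate. A minimal sketch, assuming a normal model and made-up data:

    import numpy as np

    # Toy measurements: noisy observations with unknown mean and variance.
    rng = np.random.default_rng(1)
    data = rng.normal(loc=5.0, scale=2.0, size=1000)

    mu_hat = data.mean()                      # MLE of the mean
    var_hat = ((data - mu_hat) ** 2).mean()   # MLE of the variance (divisor n)
    print(mu_hat, var_hat)                    # close to 5.0 and 4.0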

Artificial intelligence

AI • artificially intelligent • A.I.
In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous increase in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015. The study of non-learning artificial neural networks began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression.

Pattern recognition

pattern analysis • pattern detection • patterns
Gaussian process regression (kriging). Linear regression and extensions. Neural networks and deep learning methods. Independent component analysis (ICA). Principal components analysis (PCA). Conditional random fields (CRFs). Hidden Markov models (HMMs). Maximum entropy Markov models (MEMMs). Recurrent neural networks (RNNs). Dynamic time warping (DTW). Adaptive resonance theory. Black box. Cache language model. Compound term processing. Computer-aided diagnosis. Data mining. Deep learning. List of numerical analysis software. List of numerical libraries. Machine learning. Multilinear subspace learning. Neocognitron. Perception. Perceptual learning.
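To make one of the listed methods concrete, a minimal principal components analysis sketch using scikit-learn (the toy data are made up):

    import numpy as np
    from sklearn.decomposition import PCA

    # Toy 3-D data that mostly varies along a single direction.
    rng = np.random.default_rng(2)
    base = rng.normal(size=(100, 1))
    X = np.hstack([base,
                   2 * base + 0.1 * rng.normal(size=(100, 1)),
                   0.1 * rng.normal(size=(100, 1))])

    pca = PCA(n_components=2)
    Z = pca.fit_transform(X)                # project onto the top two components
    print(pca.explained_variance_ratio_)    # nearly all variance in component 1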

Algorithm

algorithms • computer algorithm • algorithm design
However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device. In computer systems, an algorithm is an instance of logic written in software by software developers, to be effective for the intended "target" computer(s) to produce output from given (perhaps null) input.
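Euclid's greatest-common-divisor procedure is a classic example of such an instance of logic, shown here as a minimal Python sketch:

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: a finite, unambiguous procedure mapping
        an input pair to an output value."""
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(252, 105))  # 21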

Statistical model

model • probabilistic model • statistical modeling
For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible. In mathematical terms, a statistical model is usually thought of as a pair (S, \mathcal{P}), where S is the set of possible observations, i.e. the sample space, and \mathcal{P} is a set of probability distributions on S. The intuition behind this definition is as follows. It is assumed that there is a "true" probability distribution induced by the process that generates the observed data. We choose \mathcal{P} to represent a set (of distributions) which contains a distribution that adequately approximates the true distribution.
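For example (a standard illustration, not drawn from this excerpt), the Gaussian family on S = \mathbb{R},

    \mathcal{P} = \{ N(\mu, \sigma^2) : \mu \in \mathbb{R}, \sigma^2 > 0 \},

is a statistical model; fitting it amounts to selecting the member of \mathcal{P} that best approximates the true distribution.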

Logistic regression

logit model • logistic • binary logit model
In general, the presentation with latent variables is more common in econometrics and political science, where discrete choice models and utility theory reign, while the "log-linear" formulation here is more common in computer science, e.g. machine learning and natural language processing. The model has an equivalent formulation

    p_i = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_{1,i} + \cdots + \beta_k x_{k,i})}}.

This functional form is commonly called a single-layer perceptron or single-layer artificial neural network. A single-layer neural network computes a continuous output instead of a step function. The derivative of p_i with respect to X = (x_1, \ldots, x_k) is computed from the general form

    y = \frac{1}{1 + e^{-f(X)}},

where f(X) is an analytic function in X.
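A minimal numeric sketch of this formulation and its derivative (the coefficients and feature vector are made up):

    import numpy as np

    def logistic(z):
        # Standard logistic (sigmoid) function.
        return 1.0 / (1.0 + np.exp(-z))

    # Made-up coefficients and one feature vector.
    beta0, beta = -1.0, np.array([0.8, -0.5])
    x = np.array([2.0, 1.0])

    p = logistic(beta0 + beta @ x)   # P(y = 1 | x) under the model
    dp_dx = p * (1 - p) * beta       # f(X) is linear here, so dp/dx_j = p(1-p)·beta_j
    print(p, dp_dx)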

Expected value

expectation • expected • mean
In regression analysis, one desires a formula in terms of observed data that will give a "good" estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data, so the estimate it gives is itself a random variable. A formula is typically considered good in this context if it is an unbiased estimator, that is, if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.
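A standard example (not specific to this excerpt): the sample mean is an unbiased estimator of the population mean \mu, since by linearity of expectation

    E[\bar{X}] = E\left[\frac{1}{n}\sum_{i=1}^{n} X_i\right] = \frac{1}{n}\sum_{i=1}^{n} E[X_i] = \mu.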

Parameter

parameters • parametric • parametrization
(Note that the sample standard deviation S is not an unbiased estimate of the population standard deviation \sigma; see Unbiased estimation of standard deviation.) It is possible to make statistical inferences without assuming a particular parametric family of probability distributions. In that case, one speaks of non-parametric statistics as opposed to the parametric statistics just described.
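A quick simulation makes the bias of S visible; this sketch assumes NumPy and made-up sampling settings:

    import numpy as np

    # 100,000 samples of size n = 5 from N(0, 1), so the true sigma is 1.
    rng = np.random.default_rng(3)
    samples = rng.normal(size=(100_000, 5))
    s = samples.std(axis=1, ddof=1)   # sample standard deviation S
    print(s.mean())                   # about 0.94 < 1: S underestimates sigma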

List of statistics articles

list of statistical topics • list of statistics topics
Beta-binomial distribution. Beta-binomial model. Beta distribution. Beta function – for incomplete beta function. Beta negative binomial distribution. Beta prime distribution. Beta rectangular distribution. Beverton–Holt model. Bhatia–Davis inequality. Bhattacharyya coefficient – see Bhattacharyya distance. Bias (statistics). Bias of an estimator. Biased random walk (biochemistry). Biased sample – see Sampling bias. Biclustering. Big O in probability notation. Bienaymé–Chebyshev inequality. Bills of Mortality. Bimodal distribution. Binary classification. Bingham distribution. Binomial distribution. Binomial proportion confidence interval. Binomial regression. Binomial test.

Data analysis

data analytics • analysis • data analyst
Orange – a visual programming tool featuring interactive data visualization and methods for statistical data analysis, data mining, and machine learning. Pandas – Python library for data analysis. PAW – FORTRAN/C data analysis framework developed at CERN. R – a programming language and software environment for statistical computing and graphics. ROOT – C++ data analysis framework developed at CERN. SciPy – Python library for data analysis. Data analysis contests include the Kaggle competitions held by Kaggle and the LTPP data analysis contest held by FHWA and ASCE. Actuarial science. Analytics. Big data. Business intelligence. Censoring (statistics). Computational physics. Data acquisition. Data blending. Data governance.
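A minimal pandas sketch of the kind of exploratory analysis these tools support (the table is made up):

    import pandas as pd

    df = pd.DataFrame({
        "group": ["a", "a", "b", "b"],
        "value": [1.0, 2.0, 3.0, 5.0],
    })
    print(df.describe())                        # summary statistics
    print(df.groupby("group")["value"].mean())  # per-group means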

Statistical classification

classification • classifier • classifiers
List of datasets for machine learning research. Machine learning. Recommender system.

Glossary of artificial intelligence

Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". Artificial Intelligence Markup Language – an XML dialect for creating natural language software agents. Artificial neural network (ANN) – computing systems, also known as connectionist systems, vaguely inspired by the biological neural networks that constitute animal brains. The neural network itself is not an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs.

Normal distribution

normally distributed • normal • Gaussian
In finite samples, however, the motivation behind the use of s^2 is that it is an unbiased estimator of the underlying parameter \sigma^2, whereas \hat{\sigma}^2 is biased. Also, by the Lehmann–Scheffé theorem the estimator s^2 is uniformly minimum variance unbiased (UMVU), which makes it the "best" estimator among all unbiased ones. However, it can be shown that the biased estimator \hat{\sigma}^2 is "better" than s^2 in terms of the mean squared error (MSE) criterion.
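For reference, the two estimators under discussion are (standard definitions):

    s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2, \qquad \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2.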

Confidence interval

confidence intervals • confidence level • confidence
A confidence band is used in statistical analysis to represent the uncertainty in an estimate of a curve or function based on limited or noisy data. Similarly, a prediction band is used to represent the uncertainty about the value of a new data point on the curve, but subject to noise. Confidence and prediction bands are often used as part of the graphical presentation of results of a regression analysis. Confidence bands are closely related to confidence intervals, which represent the uncertainty in an estimate of a single numerical value.
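As a concrete special case (a standard result, assuming approximately normal data with sample mean \bar{x}, sample standard deviation s, and sample size n), a 95% confidence interval for the mean is

    \bar{x} \pm 1.96 \frac{s}{\sqrt{n}}.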

Cluster analysis

clustering • data clustering • clusters
Artificial neural network (ANN). Nearest neighbor search. Neighbourhood components analysis. Latent class analysis. Affinity propagation. Dimension reduction. Principal component analysis. Multidimensional scaling. Cluster-weighted modeling. Curse of dimensionality. Determining the number of clusters in a data set. Parallel coordinates. Structured data analysis.

Computer

computers • computer system • digital computer
Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. Artificial intelligence-based products generally fall into two major categories: rule-based systems and pattern recognition systems. Rule-based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern-based systems use data about a problem to generate conclusions. Examples of pattern-based systems include voice recognition, font recognition, translation, and the emerging field of online marketing. As the use of computers has spread throughout society, there are an increasing number of careers involving computers.

Natural language processing

NLP • natural language • natural-language processing
In some areas, this shift has entailed substantial changes in how NLP systems are designed, such that deep neural network-based approaches may be viewed as a new paradigm distinct from statistical natural language processing. For instance, the term neural machine translation (NMT) emphasizes the fact that deep learning-based approaches to machine translation directly learn sequence-to-sequence transformations, obviating the need for intermediate steps such as word alignment and language modeling that were used in statistical machine translation (SMT).

Linear regression

regression coefficient • regression • multiple linear regression
In Canada, the Environmental Effects Monitoring Program uses statistical analyses on fish and benthic surveys to measure the effects of pulp mill or metal mine effluent on the aquatic ecosystem. Linear regression also plays an important role in artificial intelligence, particularly in machine learning.
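A minimal least-squares fit in NumPy, with made-up data roughly following y = 2x + 1:

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

    A = np.column_stack([x, np.ones_like(x)])   # design matrix [x, 1]
    (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(slope, intercept)                     # close to 2 and 1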

Computer vision

vision • image classification • image recognition
This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. Several varieties of the recognition problem are described in the literature. Currently, the best algorithms for such tasks are based on convolutional neural networks.

Data

statistical data • scientific data • datum
Machine learning. Open data. Scientific data archiving. Statistics. Secondary data.

Generalized linear model

generalized linear models • link function • GLM
In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value. Generalized linear models were formulated by John Nelder and Robert Wedderburn as a way of unifying various other statistical models, including linear regression, logistic regression and Poisson regression.
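In symbols (the standard GLM formulation), the mean of the response is tied to the linear predictor through a link function g:

    E(Y) = \mu = g^{-1}(X\beta),

where the identity link recovers linear regression, the logit link gives logistic regression, and the log link gives Poisson regression.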