Neural coding

Dictionary learning (DL) is a representation learning method that aims to find both a sparse representation of the input data (also known as sparse coding) in the form of a linear combination of basic elements, and those basic elements themselves. Matching pursuit maps a vector or measurement onto a basis to yield a sparse approximation; however, the basis must be known in advance for matching pursuit to produce a sparse representation. Several basis-construction algorithms have been proposed, one of which is dictionary learning, which makes it possible to build an overcomplete basis and thereby allow a sparse representation.
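As a minimal sketch of the sparse-approximation step described above, the following is a greedy matching pursuit over a fixed dictionary; the function name and the unit-norm-atoms assumption are illustrative, not from the original text.

```python
import numpy as np

def matching_pursuit(x, D, n_nonzero=3):
    """Greedy matching pursuit: approximate x as a sparse linear
    combination of the columns (atoms) of dictionary D.
    Assumes atoms have unit norm."""
    residual = np.asarray(x, dtype=float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        scores = D.T @ residual
        k = np.argmax(np.abs(scores))
        coef[k] += scores[k]
        residual -= scores[k] * D[:, k]
    return coef, residual
```

Dictionary learning would additionally optimize the columns of `D` themselves, alternating between sparse coding steps like this one and dictionary updates.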

Dependent and independent variables

In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned the role of target variable (or, in some tools, of label attribute), while an independent variable may be assigned the role of a regular variable. Known values for the target variable are provided for the training data set and test data set, but must be predicted for other data. The target variable is used in supervised learning algorithms but not in unsupervised learning. In mathematical modeling, the dependent variable is studied to see if, and how much, it varies as the independent variables vary.
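The target/regular-variable distinction can be illustrated with a small least-squares fit; the variable names and the linear toy model below are assumptions for illustration only.

```python
import numpy as np

# Independent (regular) variable and dependent (target) variable.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0  # known target values, as in a training set

# Ordinary least squares: how much y varies as x varies.
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
```

A supervised learner would use the fitted `slope` and `intercept` to predict the target for new values of `x` where it is unknown.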

Social network

The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational.


The statistic generated to measure association is the odds ratio (OR): the ratio of the odds of exposure in the cases (A/C) to the odds of exposure in the controls (B/D), i.e. OR = (A/C)/(B/D) = AD/BC. If the OR is significantly greater than 1, the conclusion is "those with the disease are more likely to have been exposed," whereas if it is close to 1 the exposure and the disease are not likely associated. If the OR is far less than 1, this suggests that the exposure is a protective factor against the disease. Case-control studies are usually faster and more cost-effective than cohort studies, but are sensitive to bias (such as recall bias and selection bias).
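The OR formula above reduces to one line of arithmetic; the function name and cell labels are illustrative, following the A/B/C/D convention in the text.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    OR = (a/c) / (b/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)
```

For example, 20 exposed cases, 10 exposed controls, 5 unexposed cases, and 15 unexposed controls give OR = (20·15)/(10·5) = 6, i.e. exposure is strongly associated with the disease.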

Medical diagnosis

Diagnostic and Statistical Manual of Mental Disorders.
Doctor-patient relationship.
Etiology (medicine).
International Statistical Classification of Diseases (ICD).
Medical classification.
Merck Manual of Diagnosis and Therapy.
Misdiagnosis and medical error.
Nosology.
Nursing diagnosis.
Pathogenesis.
Pathology.
Prediction.
Preimplantation genetic diagnosis.
Prognosis.
Sign (medicine).
Symptom.
List of diagnostic classification and rating scales used in psychiatry.
List of diseases.
List of disorders.
List of medical symptoms.


For example: as cultures change and the political environment shifts, societies may criminalise or decriminalise certain behaviours, which directly affects statistical crime rates, influences the allocation of resources for the enforcement of laws, and (re-)shapes general public opinion. Similarly, changes in the collection and/or calculation of data on crime may affect public perceptions of the extent of any given "crime problem". All such adjustments to crime statistics, allied with the experience of people in their everyday lives, shape attitudes on the extent to which the State should use law or social engineering to enforce or encourage any particular social norm.


Recent work in machine learning has examined the complexity of the data as it affects the performance of supervised classification algorithms. Ho and Basu present a set of complexity measures for binary classification problems, broadly covering overlaps in feature values, separability of the classes, and the geometry of the data. Instance hardness, by contrast, is a bottom-up approach that first seeks to identify instances that are likely to be misclassified (or, in other words, which instances are the most complex). The characteristics of the instances that are likely to be misclassified are then measured based on the output from a set of hardness measures.
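One simple hardness measure of the kind the text describes is k-Disagreeing Neighbors (kDN): the fraction of an instance's nearest neighbors that carry a different label. The sketch below is an illustrative brute-force version; the function name and the Euclidean-distance choice are assumptions.

```python
import numpy as np

def kdn_hardness(X, y, k=3):
    """k-Disagreeing Neighbors: for each instance, the fraction of its
    k nearest neighbors with a different label. Higher values mark
    instances that are more likely to be misclassified."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # Pairwise Euclidean distances (brute force, fine for small data).
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)        # exclude the point itself
    nn = np.argsort(dists, axis=1)[:, :k]  # k nearest-neighbor indices
    return (y[nn] != y[:, None]).mean(axis=1)
```

Instances deep inside their own class get hardness near 0, while instances surrounded by the other class score near 1.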



Optical character recognition

Intelligent character recognition (ICR) – also targets handwritten printscript or cursive text, one glyph or character at a time, usually involving machine learning.
Intelligent word recognition (IWR) – also targets handwritten printscript or cursive text, one word at a time. This is especially useful for languages where glyphs are not separated in cursive script.
De-skew – if the document was not aligned properly when scanned, it may need to be tilted a few degrees clockwise or counterclockwise in order to make lines of text perfectly horizontal or vertical.
Despeckle – remove positive and negative spots, and smooth edges.
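The de-skew step above amounts to estimating the tilt angle and rotating by its negative. The toy sketch below works on (x, y) pixel coordinates of a single text line rather than a full image; the function name and the least-squares angle estimate are assumptions for illustration.

```python
import numpy as np

def deskew_points(points):
    """Estimate the skew angle of a 'line of text' given as (x, y)
    coordinates, then rotate the points so the line is horizontal."""
    pts = np.asarray(points, dtype=float)
    # Least-squares slope of y against x gives the skew angle.
    slope = np.polyfit(pts[:, 0], pts[:, 1], 1)[0]
    theta = -np.arctan(slope)              # rotate by the opposite angle
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return pts @ rot.T
```

Real OCR pipelines estimate the angle from the whole page (e.g. via projection profiles or the Hough transform) and rotate the image itself, but the geometry is the same.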

Data compression

There is a close connection between machine learning and compression: a system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by using arithmetic coding on the output distribution) while an optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence."
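The predictor-to-compressor direction of this equivalence can be made concrete: an arithmetic coder spends about -log2 p bits on a symbol the model predicted with probability p. The sketch below computes that ideal code length under a simple adaptive (Laplace-smoothed) count model; the function name and the particular model are illustrative assumptions, not the text's claim.

```python
import math

def code_length_bits(seq, alphabet):
    """Ideal arithmetic-coding length of a sequence under an adaptive
    predictor (Laplace-smoothed symbol counts). Better prediction of
    each symbol given the history means fewer bits."""
    counts = {s: 1 for s in alphabet}    # Laplace smoothing
    bits = 0.0
    for sym in seq:
        total = sum(counts.values())
        p = counts[sym] / total          # predicted probability
        bits += -math.log2(p)            # arithmetic coding pays -log2 p
        counts[sym] += 1                 # update the model on the history
    return bits
```

A highly predictable sequence like "aaaa" costs fewer bits than "abab" under the same model, which is exactly why compression performance can serve as a proxy for predictive ability.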

Signal processing

Statistical signal processing – analyzing and extracting information from signals and noise based on their stochastic properties.
Linear time-invariant system theory and transform theory.
System identification and classification.
Calculus.
Vector spaces and linear algebra.
Functional analysis.
Probability and stochastic processes.
Detection theory.
Estimation theory.
Optimization.
Numerical methods.
Time series.
Data mining – for statistical analysis of relations between large quantities of variables (in this context representing many physical signals), to extract previously unknown interesting patterns.
Audio filter.
Dynamic range compression, companding, limiting, and noise gating.

Quantification (science)

Both disciplines gather data – economics by empirical observation and psychology by experimentation – and both use statistical techniques such as regression analysis to draw conclusions from it. In some instances a seemingly intangible property may be quantified by asking subjects to rate something on a scale—for example, a happiness scale or a quality of life scale—or by the construction of a scale by the researcher, as with the index of economic freedom.

Total operating characteristic

The Total Operating Characteristic (TOC) is a statistical method to compare a Boolean variable versus a rank variable. TOC can measure the ability of an index variable to diagnose either presence or absence of a characteristic. The diagnosis of presence or absence depends on whether the value of the index is above a threshold. TOC considers multiple possible thresholds. Each threshold generates a two-by-two contingency table, which contains four entries: hits, misses, false alarms, and correct rejections. The Receiver Operating Characteristic (ROC) also characterizes diagnostic ability, although ROC reveals less information than the TOC.
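The per-threshold bookkeeping described above is straightforward to sketch; the function name and the "index above threshold means presence" convention follow the text, while the argument order is an assumption.

```python
def contingency(index_values, presence, threshold):
    """Two-by-two contingency table for one threshold: an index value
    above the threshold diagnoses presence. Returns
    (hits, misses, false_alarms, correct_rejections)."""
    hits = misses = false_alarms = correct_rejections = 0
    for value, present in zip(index_values, presence):
        diagnosed = value > threshold
        if present and diagnosed:
            hits += 1
        elif present:
            misses += 1
        elif diagnosed:
            false_alarms += 1
        else:
            correct_rejections += 1
    return hits, misses, false_alarms, correct_rejections
```

TOC evaluates this table at every candidate threshold; unlike ROC, it displays all four entries at each threshold rather than only the hit and false-alarm rates.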

Scientific method

Another important human bias that plays a role is a preference for new, surprising statements (see appeal to novelty), which can result in a search for evidence that the new claim is true. Poorly attested beliefs can be held and acted upon via a less rigorous heuristic. Goldhaber and Nieto published in 2010 the observation that in theoretical structures where "many closely neighboring subjects are described by connecting theoretical concepts then the theoretical structure .. becomes increasingly hard to overturn". When a narrative is constructed, its elements become easier to believe.


For example, most insurance policies in the English language today have been carefully drafted in plain English; the industry learned the hard way that many courts will not enforce policies against insureds when the judges themselves cannot understand what the policies are saying. Typically, courts construe ambiguities in insurance policies against the insurance company and in favor of coverage under the policy. Many institutional insurance purchasers buy insurance through an insurance broker.

Bayesian probability

While frequentist statistics remains strong (as seen by the fact that most undergraduate teaching is still based on it), Bayesian methods are widely accepted and used, e.g., in the field of machine learning. The use of Bayesian probabilities as the basis of Bayesian inference has been supported by several arguments, such as the Cox axioms, the Dutch book argument, arguments based on decision theory, and de Finetti's theorem. Richard T. Cox showed that Bayesian updating follows from several axioms, including two functional equations and a hypothesis of differentiability.
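Bayesian updating itself is a one-line computation: the posterior is proportional to prior times likelihood, normalized over the hypotheses. The sketch below uses exact rational arithmetic; the function name and the coin-bias example in the test are illustrative assumptions.

```python
from fractions import Fraction

def bayes_update(prior, likelihoods):
    """Bayesian updating: posterior ∝ prior × likelihood, normalized.
    `prior` maps each hypothesis to its probability; `likelihoods`
    maps each hypothesis to P(data | hypothesis)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}
```

For instance, with equal priors on a fair coin (P(heads) = 1/2) and a biased coin (P(heads) = 3/4), observing one head shifts belief to 2/5 fair versus 3/5 biased.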

Big data

Big data often poses the same challenges as small data; adding more data does not solve problems of bias, but may emphasize other problems. In particular data sources such as Twitter are not representative of the overall population, and results drawn from such sources may then lead to wrong conclusions. Google Translate—which is based on big data statistical analysis of text—does a good job at translating web pages. However, results from specialized domains may be dramatically skewed.


Language learning normally occurs most intensively during human childhood. Most of the thousands of human languages use patterns of sound or gesture as symbols which enable communication with others. Languages tend to share certain properties, although there are exceptions. There is no defined line between a language and a dialect. Constructed languages such as Esperanto, programming languages, and various mathematical formalisms are not necessarily restricted to the properties shared by human languages. As previously mentioned, language can be characterized as symbolic.