algorithms, computer algorithm, algorithm design
List of important publications in theoretical computer science – Algorithms. Theory of computation. Computability theory. Computational complexity theory.


computers, computer system, digital computer
Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. Artificial-intelligence-based products generally fall into two major categories: rule-based systems and pattern recognition systems. Rule-based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern-based systems use data about a problem to generate conclusions. Examples of pattern-based systems include voice recognition, font recognition, translation, and the emerging field of online marketing. As the use of computers has spread throughout society, an increasing number of careers involve them.

Data mining

data-mining, datamining, data mine
It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of a computer decision support system, including artificial intelligence (e.g., machine learning) and business intelligence. The book Data mining: Practical machine learning tools and techniques with Java (which covers mostly machine-learning material) was originally to be named just Practical machine learning, and the term data mining was added only for marketing reasons.

Time series

time series analysis, time-series, time-series analysis
To some extent the different problems (regression, classification, fitness approximation) have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems. In statistics, prediction is a part of statistical inference. One particular approach to such inference is known as predictive inference, but the prediction can be undertaken within any of the several approaches to statistical inference. Indeed, one description of statistics is that it provides a means of transferring knowledge about a sample of a population to the whole population, and to other related populations, which is not necessarily the same as prediction over time.

Neural network

neural networks, networks, artificial neural networks
For example, it is possible to create a semantic profile of a user's interests emerging from pictures trained for object recognition. Theoretical and computational neuroscience is the field concerned with the theoretical analysis and computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behaviour, the field is closely related to cognitive and behavioural modeling. The aim of the field is to create models of biological neural systems in order to understand how biological systems work.

Artificial intelligence

AI, artificially intelligent, A.I.
In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree is perhaps the most widely used machine learning algorithm.
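The training-and-classification loop described above can be sketched with a one-level decision tree (a "decision stump") on hypothetical toy data; real decision-tree learners split recursively on many features, but the principle of learning a rule from labeled observations and applying it to new ones is the same.

```python
# Minimal supervised classifier: a decision stump on a single feature.
# The data set (observations plus class labels) is invented for illustration.

def train_stump(observations, labels):
    """Pick the threshold that best separates the two classes on the training data."""
    best = None
    for t in sorted(set(observations)):
        # Candidate rule: predict class 1 for values >= t, class 0 otherwise.
        correct = sum((x >= t) == bool(y) for x, y in zip(observations, labels))
        if best is None or correct > best[1]:
            best = (t, correct)
    return best[0]

# Toy data set: feature values with their class labels.
xs = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
ys = [0, 0, 0, 1, 1, 1]

threshold = train_stump(xs, ys)
classify = lambda x: int(x >= threshold)
print(threshold, classify(2.5), classify(8.5))  # → 8.0 0 1
```

A new observation is thus classified purely on the basis of previous experience, exactly as the paragraph describes.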

Information theory

information theorist, information-theoretic, information
Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German second world war Enigma ciphers. Much of the mathematics behind information theory with events of different probabilities were developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory.
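The "events of different probabilities" mentioned above are measured by Shannon entropy, H = -Σ p·log₂(p); a short sketch (toy probabilities chosen for illustration) shows how uneven probabilities lower the information content:

```python
import math

def shannon_entropy(probs):
    """Entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per toss; a heavily biased coin carries less,
# because its outcomes are more predictable.
fair = shannon_entropy([0.5, 0.5])
biased = shannon_entropy([0.9, 0.1])
print(fair, biased)  # → 1.0 and roughly 0.469
```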


bioinformatic, bioinformatician, genome browser

Artificial neural network

artificial neural networks, neural networks, neural network
Sometimes a bias term is added to the total weighted sum of inputs to serve as a threshold to shift the activation function. The propagation function computes the input p_j(t) to the neuron j from the outputs o_i(t) of predecessor neurons and typically has the form p_j(t) = \sum_i o_i(t) w_{ij}. When a bias value is added, the form changes to p_j(t) = \sum_i o_i(t) w_{ij} + w_{0j}, where w_{0j} is a bias. The learning rule is a rule or an algorithm which modifies the parameters of the neural network, in order for a given input to the network to produce a favored output. This learning process typically amounts to modifying the weights and thresholds of the variables within the network.
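The propagation function and the bias term can be sketched directly; the outputs and weights below are invented toy values, and the logistic activation is one common choice among many:

```python
import math

def propagate(outputs, weights, bias):
    """p_j(t) = sum_i o_i(t) * w_ij + w_0j: weighted sum of predecessor outputs plus bias."""
    return sum(o * w for o, w in zip(outputs, weights)) + bias

def activate(p):
    """Logistic activation; the bias effectively shifts where this function 'switches on'."""
    return 1.0 / (1.0 + math.exp(-p))

o = [0.5, 1.0, 0.25]          # outputs o_i(t) of predecessor neurons (toy values)
w = [0.4, -0.2, 0.8]          # connection weights w_ij (toy values)
p = propagate(o, w, bias=0.1) # 0.2 - 0.2 + 0.2 + 0.1 = 0.3
print(p, activate(p))
```

A learning rule would then adjust the entries of w (and the bias) so that a given input produces the favored output.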


statistical data, scientific data, datum
Machine learning. Open data. Scientific data archiving. Statistics. Secondary data.

Statistical model

model, probabilistic model, statistical modeling
For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible. In mathematical terms, a statistical model is usually thought of as a pair (S, \mathcal{P}), where S is the set of possible observations, i.e. the sample space, and \mathcal{P} is a set of probability distributions on S. The intuition behind this definition is as follows. It is assumed that there is a "true" probability distribution induced by the process that generates the observed data. We choose \mathcal{P} to represent a set (of distributions) which contains a distribution that adequately approximates the true distribution.
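The pair (S, \mathcal{P}) can be made concrete with a toy sketch, assuming (for illustration only) a Bernoulli family as the set of candidate distributions:

```python
# Toy statistical model (S, P): S is the sample space, P a parametric
# family of distributions on S. Here: Bernoulli distributions on {0, 1}.

S = {0, 1}  # sample space: the set of possible observations

def bernoulli(theta):
    """One member of P: the probability mass function with parameter theta."""
    return {1: theta, 0: 1.0 - theta}

# P is the set {bernoulli(theta) : 0 <= theta <= 1}. The modeling hope is
# that P contains a distribution close to the "true" one generating the data.
pmf = bernoulli(0.3)
print(pmf[1], pmf[0])  # → 0.3 0.7
```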


online, the Internet, web
For distance education, help with homework and other assignments, self-guided learning, whiling away spare time, or just looking up more detail on an interesting fact, it has never been easier for people to access educational information at any level from anywhere. The Internet in general and the World Wide Web in particular are important enablers of both formal and informal education.


biological, biologist, biological sciences
For example, what is learned about the physiology of yeast cells can also apply to human cells. The field of animal physiology extends the tools and methods of human physiology to non-human species. Plant physiology borrows techniques from both research fields. Physiology is the study of how, for example, the nervous, immune, endocrine, respiratory, and circulatory systems function and interact. The study of these systems is shared with such medically oriented disciplines as neurology and immunology. Evolutionary research is concerned with the origin and descent of species, and their change over time.

Computer vision

vision, image classification, image recognition
This decade also marked the first time statistical learning techniques were used in practice to recognize faces in images (see Eigenface). Toward the end of the 1990s, a significant change came about with the increased interaction between the fields of computer graphics and computer vision. This included image-based rendering, image morphing, view interpolation, panoramic image stitching and early light-field rendering. Recent work has seen the resurgence of feature-based methods, used in conjunction with machine learning techniques and complex optimization frameworks.

Data analysis

data analytics, analysis, data analyst
Orange – A visual programming tool featuring interactive data visualization and methods for statistical data analysis, data mining, and machine learning. Pandas – Python library for data analysis. PAW – FORTRAN/C data analysis framework developed at CERN. R – a programming language and software environment for statistical computing and graphics. ROOT – C++ data analysis framework developed at CERN. SciPy – Python library for data analysis. Kaggle competition held by Kaggle. LTPP data analysis contest held by FHWA and ASCE. Actuarial science. Analytics. Big data. Business intelligence. Censoring (statistics). Computational physics. Data acquisition. Data blending. Data governance.

Expected value

This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. P(X \in \mathcal{A}) = E[1_{\mathcal{A}}(X)], where 1_{\mathcal{A}} is the indicator function of the set \mathcal{A}. In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values x_i and corresponding probabilities p_i. Now consider a weightless rod on which are placed weights, at locations x_i along the rod and having masses p_i (whose sum is one).
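The Monte Carlo idea above, estimating a probability as the expectation of an indicator function, can be sketched as follows; the uniform distribution and the interval [0.2, 0.5] are arbitrary choices for illustration:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def indicator(x, a, b):
    """1_A(x) for the set A = [a, b]."""
    return 1.0 if a <= x <= b else 0.0

# Estimate P(X in [0.2, 0.5]) for X uniform on [0, 1] by approximating
# E[1_A(X)] with a sample mean of indicator values.
n = 100_000
estimate = sum(indicator(random.random(), 0.2, 0.5) for _ in range(n)) / n
print(estimate)  # close to the exact probability 0.3
```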

Text mining

text analytics, text-mining, text
Gender bias, readability, content similarity, reader preferences, and even mood have been analyzed based on text mining methods over millions of documents. The analysis of readability, gender bias and topic bias was demonstrated in Flaounas et al. showing how different topics have different gender biases and levels of readability; the possibility to detect mood patterns in a vast population by analyzing Twitter content was demonstrated as well. Text mining computer programs are available from many commercial and open source companies and sources.

Branches of science

scientific discipline, field of science, scientific field
The formal sciences are the branches of science that are concerned with formal systems, such as logic, mathematics, theoretical computer science, information theory, systems theory, decision theory, statistics, and theoretical linguistics. Unlike other sciences, the formal sciences are not concerned with the validity of theories based on observations in the real world (empirical knowledge), but rather with the properties of formal systems based on definitions and rules.

Cross-validation (statistics)

cross-validation, cross validation, leave-one-out cross-validation
If such a cross-validated model is selected from a k-fold set, human confirmation bias will be at work and determine that such a model has been validated. This is why traditional cross-validation needs to be supplemented with controls for human bias and confounded model specification like swap sampling and prospective studies. Boosting (machine learning). Bootstrap aggregating (bagging). Bootstrapping (statistics). Model selection. Resampling (statistics). Stability (learning theory). Validity (statistics).
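The k-fold scheme underlying the discussion above can be sketched in a few lines; each index serves exactly once in a held-out set, which is the property human bias can quietly undermine when models are hand-picked across folds:

```python
def k_fold_indices(n, k):
    """Partition indices 0..n-1 into k folds; each fold is held out once."""
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for held_out in folds:
        train = [j for f in folds if f is not held_out for j in f]
        splits.append((sorted(train), sorted(held_out)))
    return splits

splits = k_fold_indices(10, 5)   # 5 train/test splits over 10 observations
train, test = splits[0]
print(len(splits), len(train), len(test))  # → 5 8 2
```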

Glossary of artificial intelligence

Computational neuroscience – (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematical models, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system. Computational number theory – also known as algorithmic number theory, it is the study of algorithms for performing number theoretic computations. Computational problem – In theoretical computer science, a computational problem is a mathematical object representing a collection of questions that computers might be able to solve.

List of distributed computing projects

distributed computing project, many distributed computing applications, many other projects
This is a list of distributed computing and grid computing projects. For each project, donors volunteer computing time from personal computers to a specific cause. The donated computing power comes typically from CPUs and GPUs, but can also come from home video game systems. Each project seeks to solve a problem which is difficult or infeasible to tackle using other methods.

Glossary of computer science

Data science is a "concept to unify statistics, data analysis, machine learning and their related methods" in order to "understand and analyze actual phenomena" with data. It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science. Data structure – a data organization, management, and storage format that enables efficient access and modification. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data.
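The three ingredients of that definition, values, relationships among them, and applicable operations, can be illustrated with a minimal stack, a classic example of a data structure:

```python
class Stack:
    """A stack: values stored in insertion order, with push/pop as the operations."""

    def __init__(self):
        self._items = []  # the data values; their ordering is the relationship

    def push(self, value):
        self._items.append(value)

    def pop(self):
        return self._items.pop()  # last in, first out

    def __len__(self):
        return len(self._items)

s = Stack()
s.push(1)
s.push(2)
print(s.pop(), s.pop(), len(s))  # → 2 1 0
```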

Constantinos Daskalakis

Daskalakis works on computation theory and its interface with game theory, economics, probability theory, statistics and machine learning. He has resolved long-standing open problems about the computational complexity of the Nash equilibrium, the mathematical structure and computational complexity of multi-item auctions, and the behavior of machine-learning methods such as the expectation–maximization algorithm. He has obtained computationally and statistically efficient methods for statistical hypothesis testing and learning in high-dimensional settings, as well as results characterizing the structure and concentration properties of high-dimensional distributions.


Theoretical computer science includes computability theory, computational complexity theory, and information theory. Computability theory examines the limitations of various theoretical models of the computer, including the most well-known model – the Turing machine. Complexity theory is the study of tractability by computer; some problems, although theoretically solvable by computer, are so expensive in terms of time or space that solving them is likely to remain practically unfeasible, even with the rapid advancement of computer hardware. A famous problem is the "P = NP?" problem, one of the Millennium Prize Problems.


decision making, decisions, decision
Optimism bias is a tendency to overestimate the likelihood of positive events occurring in the future and underestimate the likelihood of negative life events. Such biased expectations are generated and maintained in the face of counter-evidence through a tendency to discount undesirable information. An optimism bias can alter risk perception and decision-making in many domains, ranging from finance to health.