The combination of text mining and machine learning algorithms has received growing attention in recent years, with textual content from the Internet used as input to predict price changes in stocks and other financial markets.
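As a minimal sketch of this idea, the toy example below (the headlines, labels, and word-counting scheme are all invented for illustration) represents news headlines as bags of words and uses smoothed word-frequency counts to classify the likely price direction:

```python
from collections import Counter

# Toy labelled headlines (invented for illustration): 1 = price up, 0 = price down.
TRAIN = [
    ("profits surge on record earnings", 1),
    ("strong growth beats forecasts", 1),
    ("shares plunge after weak earnings", 0),
    ("losses widen as demand falls", 0),
]

def train_counts(examples):
    """Count word occurrences per class (a crude bag-of-words model)."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    """Score each class by multiplying add-one-smoothed word frequencies; higher wins."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        score = 1.0
        for word in text.split():
            score *= (c[word] + 1) / (total + 1)
        scores[label] = score
    return max(scores, key=scores.get)

counts = train_counts(TRAIN)
print(predict(counts, "earnings surge"))   # → 1 (leans towards the "up" class)
```

Real systems replace the toy counts with large news or social-media corpora and far richer models, but the pipeline shape (text → features → classifier → price-direction signal) is the same.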

Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information.
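One of the measures of information mentioned above, Shannon entropy, can be computed directly from a probability distribution. A short sketch:

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum(p * log2(p)), in bits; zero-probability terms are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit of information per toss,
# while a biased coin carries less (its outcomes are more predictable).
print(entropy([0.5, 0.5]))   # → 1.0
print(entropy([0.9, 0.1]))   # → about 0.469
```

This quantity underlies both source coding (a source cannot be losslessly compressed below its entropy rate) and channel coding (a channel's capacity bounds reliable transmission).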

Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems. The field is broadly defined and includes foundations in computer science, applied mathematics, animation, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, ecology, evolution, anatomy, neuroscience, and visualization.

Inductive bias is a concept in machine learning. In machine learning, one seeks to develop algorithms that learn to anticipate a particular output. To accomplish this, the learning algorithm is given training examples that demonstrate the expected relationship between inputs and outputs, and the learner is then tested on new examples. Without further assumptions, this problem cannot be solved exactly, since unseen situations may not be predictable. The inductive bias of a learning algorithm is the set of assumptions the learner uses to predict outputs for inputs it has not encountered. This bias may steer the learner towards the correct solution, towards an incorrect one, or be correct only some of the time. A classical example of an inductive bias is Occam's razor, which assumes that the simplest hypothesis consistent with the training data is the best.
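The Occam's razor example can be made concrete with a toy sketch (the data and both hypothesis classes below are invented for illustration): a memorising lookup table and a simple linear rule both fit the training data perfectly, but only the simpler hypothesis generalises to unseen inputs.

```python
# Training data consistent with the rule f(x) = 2x (invented for illustration).
train = [(1, 2), (2, 4), (3, 6)]

# Hypothesis A: memorise the training pairs (no bias towards any structure).
lookup = dict(train)
def memoriser(x):
    return lookup.get(x)  # undefined (None) on unseen inputs

# Hypothesis B: the simplest consistent rule, a line through the origin.
slope = sum(y for _, y in train) / sum(x for x, _ in train)
def linear(x):
    return slope * x

# Both hypotheses agree on every training example...
assert all(memoriser(x) == y and linear(x) == y for x, y in train)
# ...but only the simpler one makes a prediction for a new input.
print(memoriser(5), linear(5))   # → None 10.0
```

Preferring the linear rule over the lookup table is itself an inductive bias: the training data alone cannot distinguish the two hypotheses, so the choice encodes an assumption about which kind of solution is more likely to be correct.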