Info-metrics
Info-metrics is an interdisciplinary approach to scientific modeling, inference and efficient information processing. It is the science of modeling, reasoning, and drawing inferences under conditions of noisy and limited information. From the point of view of the sciences, this framework is at the intersection of information theory, statistical methods of inference, applied mathematics, computer science, econometrics, complexity theory, decision analysis, modeling, and the philosophy of science.
Info-metrics provides a constrained optimization framework to tackle underdetermined or ill-posed problems – problems where there is not sufficient information for finding a unique solution. Such problems are very common across all sciences: available information is incomplete, limited, noisy, and uncertain. Info-metrics is useful for modeling, information processing, theory building, and inference problems across the scientific spectrum. The info-metrics framework can also be used to test hypotheses about competing theories or causal mechanisms.
Info-metrics evolved from the classical maximum entropy formalism, which is based on the work of Shannon. Early contributions were mostly in the natural and mathematical/statistical sciences. Since the mid-1980s, and especially in the mid-1990s, the maximum entropy approach was generalized and extended to handle a larger class of problems in the social and behavioral sciences, especially for complex problems and data. The term info-metrics was coined in 2009 by Amos Golan, shortly before the interdisciplinary Info-Metrics Institute was inaugurated.
Consider a random variable X that can result in one of K distinct outcomes. The probability of outcome k is p_k for k = 1, 2, …, K. Thus, P = (p_1, …, p_K) is a K-dimensional probability distribution defined such that p_k ≥ 0 for each k and ∑_k p_k = 1. Define the informational content of a single outcome to be h(p_k) = −log(p_k) (e.g., Shannon). Observing an outcome at the tails of the distribution (a rare event) provides much more information than observing another, more probable, outcome. The entropy H(P) is the expected information content of an outcome of the random variable X whose probability distribution is P:

H(P) = ∑_k p_k h(p_k) = −∑_k p_k log(p_k) = E[h(p_k)].

Here, 0 log(0) ≡ 0 when p_k = 0, and E is the expectation operator.
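The entropy definition above is straightforward to compute; the following minimal sketch (the function name and choice of log base 2, i.e., bits, are ours, not the article's) applies the 0 log 0 = 0 convention by simply skipping zero-probability outcomes:

```python
import math

def shannon_entropy(p, base=2):
    """Expected information content: -sum_k p_k * log(p_k).

    Terms with p_k = 0 are skipped, implementing the 0 * log(0) = 0
    convention. With base=2 the result is measured in bits.
    """
    return -sum(pk * math.log(pk, base) for pk in p if pk > 0)

# The uniform distribution over 4 outcomes has entropy log2(4) = 2 bits
# (the maximum possible for K = 4); a degenerate distribution has 0 bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2 bits, up to rounding
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # 0 bits
```

Skewing the distribution away from uniform always lowers the entropy, matching the intuition that a predictable variable carries less information per observation.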
Consider the problem of modeling and inferring the unobserved probability distribution of some K-dimensional discrete random variable given just the mean (expected value) of that variable. We also know that the probabilities are nonnegative and normalized (i.e., sum up to exactly 1). For all K > 2 the problem is underdetermined. Within the info-metrics framework, the solution is to maximize the entropy of the random variable subject to the two constraints: mean and normalization. This yields the usual maximum entropy solution. The solutions to that problem can be extended and generalized in several ways. First, one can use another entropy instead of Shannon's entropy. Second, the same approach can be used for continuous random variables, for all types of conditional models (e.g., regression, inequality, and nonlinear models), and for many constraints. Third, priors can be incorporated within that framework. Fourth, the same framework can be extended to accommodate greater uncertainty: uncertainty about the observed values and/or uncertainty about the model itself. Finally, the same basic framework can be used to develop new models/theories, validate these models using all available information, and test statistical hypotheses about the model.
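For Shannon entropy, the constrained optimization just described can be sketched as a Lagrangian whose stationarity condition yields the familiar exponential (Gibbs) form; the notation below follows the standard maximum-entropy derivation (x_k denotes the value of outcome k and y the observed mean) rather than symbols defined in the article:

```latex
\max_{p_1,\dots,p_K}\; -\sum_{k=1}^{K} p_k \log p_k
\quad \text{s.t.} \quad \sum_{k} p_k x_k = y, \qquad \sum_{k} p_k = 1 .

% Lagrangian with multipliers \lambda (mean) and \mu (normalization):
\mathcal{L} = -\sum_k p_k \log p_k
              - \lambda \Big( \sum_k p_k x_k - y \Big)
              - \mu \Big( \sum_k p_k - 1 \Big) .

% Setting \partial \mathcal{L} / \partial p_k = 0 gives the exponential solution
p_k = \frac{e^{-\lambda x_k}}{\Omega(\lambda)},
\qquad \Omega(\lambda) = \sum_{k} e^{-\lambda x_k},

% where \lambda is chosen so that the mean constraint \sum_k p_k x_k = y holds.
```

When λ = 0 the mean constraint is non-binding and the solution reduces to the uniform distribution, the maximum-entropy distribution under normalization alone.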
Inference based on information resulting from repeated independent experiments.
The following example is attributed to Boltzmann and was further popularized by Jaynes. Consider a six-sided die, where tossing the die is the event and the distinct outcomes are the numbers 1 through 6 on the upper face of the die. The experiment is the independent repetition of tosses of the same die. Suppose one only observes the empirical mean value, y, of N tosses, and seeks to infer the probabilities that each face will show up in the next toss of the die. It is also known that the sum of the probabilities must be 1. Maximizing the entropy (using log base 2) subject to these two constraints (mean and normalization) yields the most uninformed solution – the distribution that is maximally noncommittal given only the observed mean.
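The die problem can be solved numerically by exploiting the exponential form of the maximum-entropy solution, p_k ∝ exp(−λk), and tuning the Lagrange multiplier λ until the implied mean matches the observed one. The sketch below is our own illustration, not code from the article; the function name and the example mean y = 4.5 (a classic choice in Jaynes's treatment of a "loaded" die) are assumptions:

```python
import math

def maxent_die(y, faces=range(1, 7), tol=1e-12):
    """Maximum-entropy probabilities for a die with observed mean y.

    The max-entropy solution has the form p_k = exp(-lam*k) / Z; the
    implied mean is strictly decreasing in lam, so we can find lam by
    bisection until the constraint sum_k k*p_k = y is satisfied.
    """
    faces = list(faces)

    def mean_for(lam):
        w = [math.exp(-lam * k) for k in faces]
        return sum(k * wk for k, wk in zip(faces, w)) / sum(w)

    lo, hi = -50.0, 50.0  # brackets any mean strictly between 1 and 6
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) > y:
            lo = mid  # mean too high -> need larger lam
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * k) for k in faces]
    z = sum(w)
    return [wk / z for wk in w]

# y = 3.5 (a fair die's mean) recovers the uniform distribution;
# y = 4.5 tilts the probabilities monotonically toward the high faces.
p = maxent_die(4.5)
```

The bisection converges because the mean is a monotone function of λ; for y = 3.5 the multiplier is zero and all six probabilities come out equal, consistent with the uniform solution being the maximally noncommittal answer when the mean carries no extra information.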