Since econometrics doesn’t content itself with only making optimal predictions, but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions; the most important of these are additivity and linearity. Important, simply because if they do not hold, your model is invalid and descriptively incorrect. It’s like calling your house a bicycle. No matter […]
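The excerpt breaks off here, but the point is easy to make concrete. Below is a minimal sketch (the quadratic data-generating process and all numbers are my own illustration): impose a linear model on data that are not linear, and the estimated ‘effect’ describes nothing real.

```python
import numpy as np

rng = np.random.default_rng(0)

# True data-generating process is non-linear: y = x**2 + noise.
x = rng.uniform(-3, 3, 1_000)
y = x**2 + rng.normal(0, 1, 1_000)

# Impose the linear model y = a + b*x anyway (ordinary least squares).
b, a = np.polyfit(x, y, deg=1)

print(f"estimated linear 'effect' of x: {b:+.3f}")  # ~0: the model sees nothing
print(f"actual change in E[y] from x=1 to x=2: {2**2 - 1**2}")  # 3
```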
The rigid focus on statistical significance encourages researchers to choose data and methods that yield statistical significance for some desired (or simply publishable) result, or that yield statistical non-significance for an undesired result, such as potential side effects of drugs — thereby invalidating conclusions … Again, we are not advocating a ban on P values, […]
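A minimal simulation sketch of why this invalidates conclusions (the subgroup setup and all numbers are my own illustration, not from the quoted comment): if researchers are free to test many subgroups and report whichever turns out ‘significant’, pure noise delivers significance far more often than the nominal 5 per cent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_subgroups, n = 5_000, 10, 50

hits = 0
for _ in range(n_studies):
    for _ in range(n_subgroups):
        # Pure noise: the 'treatment' does nothing in any subgroup.
        treated = rng.normal(0, 1, n)
        control = rng.normal(0, 1, n)
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            hits += 1
            break  # stop and report the first 'significant' subgroup

print(f"null studies reporting p < 0.05: {hits / n_studies:.2f}")
# roughly 1 - 0.95**10, about 0.40, a far cry from the nominal 0.05
```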
What strikes me repeatedly when examining the results of randomised experiments is how closely they resemble theoretical models. Both share a fundamental limitation: they are constructed under artificial conditions and struggle with the trade-off between internal and external validity. The greater the control and artificiality, the higher the internal validity — but the lower the […]
Ergodicity often hides behind a veil of mathematical complexity, yet at its core it offers a profoundly simple and insightful lens through which to view probability and time. It challenges us to distinguish between two types of averages: the ensemble average and the time average. To grasp this distinction, let […]
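The excerpt is cut off, but the two averages it names can be exhibited in a few lines. A minimal sketch using the standard multiplicative coin-toss illustration (my choice of example, not necessarily the one the post goes on to develop):

```python
import numpy as np

rng = np.random.default_rng(42)

# Multiplicative gamble: each round your wealth is multiplied by 1.5
# (heads) or by 0.6 (tails), with equal probability.
up, down = 1.5, 0.6
rounds, players = 100, 50_000

# Ensemble average: average over many players at a fixed time.
print("expected growth per round:", 0.5 * up + 0.5 * down)  # 1.05 > 1

# Time average: what one player experiences along a single trajectory,
# i.e. the geometric mean of the multipliers.
print("typical growth per round:", (up * down) ** 0.5)      # ~0.95 < 1

factors = rng.choice([up, down], size=(players, rounds))
wealth = factors.prod(axis=1)  # terminal wealth, starting from 1

print("median terminal wealth:", round(float(np.median(wealth)), 4))  # ~0.005
print("share ending below their starting wealth:", (wealth < 1).mean())  # ~0.86
```

The process is non-ergodic: the ensemble average grows at 5 per cent a round while almost every individual trajectory decays, so the two averages give opposite answers to the same question.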
In 1958, with the publication of the twenty-fifth volume of Econometrica, Trygve Haavelmo assessed the role of econometrics in advancing economics. While he praised its ‘repair work’ and ‘clearing-up work,’ he also found reason for despair: We have found certain general principles which would seem to make good sense. Essentially, these principles are based on […]
With the above cautions in mind, we may view each statistical analysis as a thought experiment in a fictional “small world” or “toy example” sharply restricted by its simplifying assumptions. The questions that motivated the study must be translated properly into this fictional world; statistical methods then answer the questions via mathematical deductions from the […]
As has been long and widely emphasized in various terms … frequentism and Bayesianism are incomplete both as learning theories and as philosophies of statistics, in the pragmatic sense that each alone is insufficient for all sound applications. Notably, causal justifications are the foundation for classical frequentism, which demands that all model constraints be deduced […]
Imagine you are a Bayesian turkey. You hold a nonzero prior belief in the hypothesis (H): People are nice vegetarians who would never eat a turkey. Like the sun rising every morning, each day you survive and are not eaten constitutes new evidence (e) confirming this hypothesis. You dutifully update your […]
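To see the mechanics of the update, here is a minimal sketch with made-up likelihoods (the excerpt specifies none, so the prior and probabilities below are purely illustrative):

```python
# H: "people are nice vegetarians who would never eat a turkey"
# e: "I survived another day"
prior = 0.5                # illustrative prior, not from the post
p_e_given_H = 1.00         # nice vegetarians never eat you
p_e_given_not_H = 0.99     # even turkey-eaters spare you most days

posterior = prior
for day in range(364):
    # Bayes' rule: P(H | e) = P(e | H) P(H) / P(e)
    joint_H = p_e_given_H * posterior
    joint_not_H = p_e_given_not_H * (1 - posterior)
    posterior = joint_H / (joint_H + joint_not_H)

print(f"P(H) after 364 peaceful days: {posterior:.3f}")  # ~0.975
# On day 365 the turkey is eaten anyway: the posterior peaked
# exactly when the hypothesis was about to fail.
```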
There are three fundamental differences between statistical and causal assumptions. First, statistical assumptions, even untested, are testable in principle, given a sufficiently large sample and sufficiently fine measurements. Causal assumptions, in contrast, cannot be verified even in principle, unless one resorts to experimental control … Second, statistical assumptions can be expressed in the familiar language of […]
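The first difference can be made concrete with a small simulation (my construction, not from the quoted passage): two opposite causal stories that imply the identical observational distribution, so nothing computed from the data alone can separate them; only experimental control over X would.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Model A: X causes Y.
x_a = rng.normal(0, 1, n)
y_a = 0.5 * x_a + rng.normal(0, np.sqrt(0.75), n)

# Model B: X does NOT cause Y; a hidden confounder U drives both.
u = rng.normal(0, 1, n)
x_b = np.sqrt(0.5) * u + rng.normal(0, np.sqrt(0.5), n)
y_b = np.sqrt(0.5) * u + rng.normal(0, np.sqrt(0.5), n)

# Both models imply the same observational distribution:
# bivariate normal with Var(X) = Var(Y) = 1 and Corr(X, Y) = 0.5.
for name, x, y in [("X -> Y    ", x_a, y_a), ("confounded", x_b, y_b)]:
    print(name, round(float(x.var()), 2), round(float(y.var()), 2),
          round(float(np.corrcoef(x, y)[0, 1]), 2))
# No test computed from the (x, y) samples can tell the two apart;
# only intervening (setting X by hand) would reveal the difference.
```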
An ongoing concern is that excessive focus on formal modeling and statistics can lead to neglect of practical issues and to overconfidence in formal results … Analysis interpretation depends on contextual judgments about how reality is to be mapped onto the model, and how the formal analysis results are to be mapped back into reality. […]