Pearl asserts, while some RCM (Rubin Causal Models) theorists deny, that so-called “non-manipulable” variables can be causes (Pearl 2019; Holland 1986, 2008). Race and gender, which arguably cannot be experimentally manipulated, are key examples of such variables … My response is that although advocates of the frameworks adopt conflicting positions regarding certain variables, these positions […]
Researchers adhering to missing-data analysis invariably invoke an ad hoc assumption called “conditional ignorability,” often decorated as “ignorable treatment assignment mechanism,” which is far from being “well understood” by those who make it, let alone those who need to judge its plausibility. For readers versed in graphical modeling, “conditional ignorability” is none other than the […]
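To see what the assumption buys when it actually holds, here is a minimal simulated sketch (all numbers invented for illustration): treatment assignment depends on a confounder Z, so it is ignorable only conditional on Z, and the back-door adjustment recovers the true effect where the naive contrast does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Confounder Z affects both treatment assignment and the outcome.
z = rng.binomial(1, 0.5, n)
# Treatment is more likely when Z = 1, so assignment is ignorable only given Z.
x = rng.binomial(1, 0.2 + 0.6 * z)
# True causal effect of x on y is 1.0; Z adds 2.0 on top.
y = 1.0 * x + 2.0 * z + rng.normal(0, 1, n)

# Naive comparison mixes the effect of Z into that of X.
naive = y[x == 1].mean() - y[x == 0].mean()

# Back-door adjustment: within-stratum contrasts, weighted by P(Z = v).
adjusted = sum(
    (y[(x == 1) & (z == v)].mean() - y[(x == 0) & (z == v)].mean())
    * (z == v).mean()
    for v in (0, 1)
)

print(round(naive, 2))     # roughly 2.2: biased upward by confounding
print(round(adjusted, 2))  # close to the true effect, 1.0
```

The point is not that the adjustment is hard to compute, but that its validity rests entirely on the untestable claim that Z closes all back-door paths.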
You see it all the time in studies. “We controlled for…” And then the list starts … The more things you can control for, the stronger your study is — or, at least, the stronger your study seems. Controls give the feeling of specificity, of precision. But sometimes, you can control for too much. Sometimes […]
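To make the “controlling for too much” worry concrete, here is a minimal simulation (all numbers invented): X has no effect on Y at all, but “controlling for” a common effect of both, a collider, manufactures a spurious association.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# X has NO causal effect on Y here; both cause the collider C.
x = rng.normal(size=n)
y = rng.normal(size=n)          # independent of x by construction
c = x + y + rng.normal(size=n)  # collider: a common effect of x and y

def ols_slope(outcome, *regressors):
    """Coefficient on the first regressor in an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(outcome)), *regressors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1]

print(round(ols_slope(y, x), 2))     # ~0.0: no effect, correctly estimated
print(round(ols_slope(y, x, c), 2))  # ~ -0.5: a purely spurious "effect"
```

Adding the extra control does not make the study stronger; it opens a non-causal path that was closed before.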
Statistical reasoning certainly seems paradoxical to most people. Take, for example, Simpson’s paradox. From a theoretical perspective, it shows the important point that causality can never be reduced to a question of statistics or probabilities unless you are — miraculously — able to keep constant all other factors that influence the probability of the outcome studied. […]
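The paradox is easy to reproduce with the well-known kidney-stone recovery figures often used to illustrate it:

```python
# Classic kidney-stone illustration: (recovered, treated) per treatment,
# split by stone severity.
mild   = {"A": (81, 87),   "B": (234, 270)}
severe = {"A": (192, 263), "B": (55, 80)}

def rate(recovered, treated):
    return recovered / treated

# Within EACH severity group, treatment A has the higher recovery rate...
for name, grp in (("mild", mild), ("severe", severe)):
    print(name, {t: round(rate(*grp[t]), 3) for t in grp})

# ...but pooled over groups, B looks better, because A was given mostly
# to severe cases: severity confounds the aggregate comparison.
pooled = {t: (mild[t][0] + severe[t][0], mild[t][1] + severe[t][1])
          for t in ("A", "B")}
print("pooled", {t: round(rate(*pooled[t]), 3) for t in pooled})
```

Which comparison answers the causal question depends on the causal structure, not on the numbers themselves, which is exactly why statistics alone cannot settle it.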
There is one point, to which in practice I attach a great importance, you do not allude to. In many of these statistical researches, in order to get enough observations they have to be scattered over a lengthy period of time; and for a lengthy period of time it very seldom remains true that the […]
One of the reasons Guido Imbens and Joshua Angrist won the 2021 ‘Nobel prize’ in economics is their LATE (local average treatment effect) approach, used especially in instrumental-variables estimation of causal effects. Another prominent ‘Nobel prize’ winner in economics — Angus Deaton — is not overly impressed: Without explicit prior consideration of the effect of the instrument choice on […]
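What LATE does and does not identify can be seen in a small simulation (all parameters invented): with heterogeneous effects, the Wald/IV ratio recovers the effect for compliers only, not the population-average effect, which is precisely Deaton’s worry about letting the instrument pick the question.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# Instrument: e.g. a randomized encouragement to take the treatment.
z = rng.binomial(1, 0.5, n)

# Compliance types: 20% always-takers, 20% never-takers, 60% compliers.
u = rng.uniform(size=n)
always = u < 0.2
never = u > 0.8
complier = ~always & ~never

# Treatment status responds to the instrument only for compliers.
d = np.where(always, 1, np.where(never, 0, z))

# Heterogeneous effects: compliers gain 2.0, non-compliers would gain 5.0.
effect = np.where(complier, 2.0, 5.0)
y = effect * d + rng.normal(0, 1, n)

# Wald / IV estimand: reduced form divided by first stage.
wald = (y[z == 1].mean() - y[z == 0].mean()) / (
    d[z == 1].mean() - d[z == 0].mean())

print(round(wald, 2))  # ~2.0: the complier effect, not the population ATE
```

A different instrument would pick out a different complier population, and hence a different “effect,” even in the same data-generating process.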
Evidently, however, the potential for the strictly natural natural experimental approach, which relies exclusively on natural events as instruments, is constrained by the small number of random events provided by nature and by the fact that most outcomes of interest are the result of many factors associated with preferences, technologies, and markets. And the prospect […]
Making appropriate extrapolations from (ideal, natural, or quasi-) experiments to different settings, populations, or target systems is not easy. “It works there” is no evidence for “it will work here.” The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods used when analyzing […]
We who like to imagine ourselves responsible for the public’s knowledge of society despise description and indeed despise the methods that are generally used for quantitative description. Our social indicators are simply disaggregated variables, ready for input to causal analysis. The notions of complex combinatoric description, of typologies based on multiple variables — these fill […]
David A. Freedman’s Statistical Models: Theory and Practice (2009) is a marvellous book. It should be mandatory reading for every serious social scientist — including economists and econometricians — who doesn’t want to succumb to ad hoc assumptions and unsupported statistical conclusions! In the social and behavioral sciences, far-reaching claims are often made for the superiority […]