Philosophers On a Physics Experiment that “Suggests There’s No Such Thing As Objective Reality”

Published by Anonymous (not verified) on Fri, 22/03/2019 - 12:15am

Earlier this month, MIT Technology Review published an article entitled “A quantum experiment suggests there’s no such thing as objective reality.” It was one of several publications to excitedly report on a recent experiment conducted by Massimiliano Proietti (Heriot-Watt University) and others.

[Josef Albers, “Yellow Composition”]

The provocative headline drew a lot of attention to the article and the experiment. Given how outlandish it sounded, I—like most people, largely ignorant of cutting-edge physics—thought that the experiment was either earth-shatteringly amazing or that the claims made about it were bunk. Either way, it sounded like the perfect candidate for an intervention from philosophers and philosophy-knowledgeable physicists. This post, the latest entry in the occasional “Philosophers On” series, is the result.

While I am going to leave most of the explanation of the background physics, experiments, and findings to the guest authors, it might be useful to note how the MIT Technology Review article described what happened. It first notes that Proietti’s experiment is based on a thought experiment devised by physicist Eugene Wigner called “Wigner’s Friend.” It continues:

Last year… physicists noticed that recent advances in quantum technologies have made it possible to reproduce the Wigner’s Friend test in a real experiment. In other words, it ought to be possible to create different realities and compare them in the lab to find out whether they can be reconciled. And today, Massimiliano Proietti at Heriot-Watt University in Edinburgh and a few colleagues say they have performed this experiment for the first time: they have created different realities and compared them. Their conclusion is that Wigner was correct—these realities can be made irreconcilable so that it is impossible to agree on objective facts about an experiment.

You can check out the whole article here.

And now let me introduce our guest authors. They are: Sean Carroll (Research Professor of Physics at Caltech), Karen Crowther (Postdoctoral Researcher in Philosophy at the University of Geneva), Dustin Lazarovici (Postdoctoral Fellow in Philosophy, Université de Lausanne), Tim Maudlin (Professor of Philosophy at New York University), and Wayne Myrvold (Professor of Philosophy at Western University).

I am very grateful to them for the time and effort they put into crafting contributions for this post that are informative, fascinating, and, importantly, accessible to non-experts. Thank you, authors!

Thanks also to Michael Dickson (University of South Carolina) and David Wallace (University of Southern California) for some preliminary feedback about this topic.

You can scroll down to the posts or click on the titles in the following list. (Note: while I normally put the contributions in alphabetical order, I am deviating slightly from that and putting Dr. Crowther’s first, as she included a helpful diagram that is relevant to all of the posts).

What the Experiment Actually Did and What Is Learned from It
by Karen Crowther

Quantum mechanics (QM) is supposed to be a universal theory: its domain of applicability is not restricted to the world at very small length scales. In other words, the theory is meant to describe elephants as well as electrons. While we do not, of course, need to use quantum theory to describe elephants, there are ever larger and more complex laboratory systems (i.e., tabletop experiments) being built that do display quantum behaviour. There are various proposals for why, in practice, we do not need to use quantum theory to describe the world at the length- and time-scales that are familiar to us as human beings. Chief among these is decoherence—the idea that the interference effects that would otherwise reveal our ‘quantum-ness’ get suppressed when a system interacts with other systems around it (‘the environment’). Thus, demonstrating the quantum behaviour of a laboratory system requires the system to be isolated (to a great degree) from outside influences.

Decoherence, however, does not help when it comes to a more problematic disconnect between the quantum-mechanical description of the world and our experience of it. Whenever we take a measurement of a system to determine the value of some property it possesses (e.g., position, charge, spin, mass, polarisation, etc.), we find the system to have a definite value of this property. Yet, before the measurement, QM says that the system does not possess a determinate value of this property, but rather exists in a superposition of different states with different values of this property.
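
In standard notation (a textbook illustration, not from the article), a photon’s polarisation state before measurement and the Born-rule probabilities look like this:

```latex
\[
|\psi\rangle = \alpha\,|H\rangle + \beta\,|V\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1,
\qquad \Pr(H) = |\alpha|^2, \quad \Pr(V) = |\beta|^2.
\]
```

On measurement we always find one definite value, H or V; the superposition itself is never directly observed.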

Reconciling these two pictures is known as the measurement problem, and solving it has spawned the development of various interpretations of QM that seek to explain what’s going on. These interpretations are constrained by the violation of a mathematical relation known as the Bell inequality, which makes it particularly difficult to retain the belief that the system possesses a definite state (i.e., one with particular values of its measurable properties) before being observed—as some interpretations known as hidden variable interpretations seek to do.
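
For readers who want to see the arithmetic behind a Bell-type violation, here is a minimal numerical sketch using the CHSH form of the inequality (an illustration only: the actual experiment used six photons and a different Bell-type inequality). Any local hidden variable model obeys |S| ≤ 2, whereas quantum mechanics reaches 2√2.

```python
import numpy as np

# Pauli operators
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def obs(theta):
    """Observable measured along angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# A maximally entangled two-qubit (singlet) state
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

def E(a, b):
    """Quantum expectation value of the joint measurement obs(a) (x) obs(b)."""
    M = np.kron(obs(a), obs(b))
    return float(np.real(psi.conj() @ M @ psi))

# Standard CHSH measurement-angle choices
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))  # ≈ 2.828, i.e. 2*sqrt(2) > 2
```

Observing a value above 2 rules out the conjunction of locality and pre-existing measurement values; this is the sense in which such experiments “constrain” hidden variable interpretations.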

Wigner’s thought-experiment was an attempt to show that conscious observers cannot themselves exist in superpositions because it would lead to situations where a person has an experience of the world that conflicts with the experiences of others: two people would record inconsistent facts about one and the same system (Wigner, 1967). In other words, reality would be observer-dependent.

In the thought-experiment, Wigner has a friend in an isolated lab who measures the polarisation of a photon, and finds it to have a definite value—this is the friend’s ‘fact’. Wigner, however, is outside of the lab, and does not know the outcome of his friend’s measurement. Instead, Wigner uses QM to describe his friend’s entire lab as a quantum system and finds it to be in one giant superposition of the different possible polarisations of the photon, as well as the different possible outcomes of his friend’s measurement—this superposition is Wigner’s ‘fact’. The two ‘facts’ are inconsistent. (See Figure 1).

Figure 1. Wigner’s thought-experiment.
Wigner’s friend, F, measures the polarisation S of a photon, obtaining outcome z, which is a definite state, so F records one of two definite states for ψS. Wigner, W, who is outside the lab, instead regards the lab as one big quantum system, L (orange box). Wigner argued that, having no access to z, he would assign a superposition state ΨL to the system (i.e., ΨL is not the same as ψS — one is a definite state, the other a superposition). Deutsch (1985) argued that Wigner could even perform a carefully designed measurement, w, to test this state-assignment. Figure and description from Frauchiger & Renner (2018).
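
In symbols (schematic only, following the notation of Figure 1), the two incompatible state assignments are:

```latex
\[
\text{Friend } F:\quad |\psi_S\rangle = |H\rangle \quad \text{(say, a definite outcome } z\text{)},
\]
\[
\text{Wigner } W:\quad |\Psi_L\rangle = \tfrac{1}{\sqrt{2}}\Bigl(\,|H\rangle_S\,|\text{``saw }H\text{''}\rangle_F \;+\; |V\rangle_S\,|\text{``saw }V\text{''}\rangle_F\,\Bigr).
\]
```

The friend’s state is definite; Wigner’s is a superposition over the friend’s possible records. These are the two inconsistent ‘facts’.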

Different interpretations of QM have different ways of dealing with this scenario. For example, the relational interpretation of QM would embrace the inconsistency of the two ‘facts’, maintaining that facts are observer-dependent. On the other hand, the many worlds interpretation would deny the inconsistency, saying that the universe has branched into multiple universes, and in any one universe, observers will record consistent facts about the state of a given system.

Wigner’s own interpretation was that the scenario described by his thought-experiment was physically impossible: he argued that the conscious experience of his friend as having recorded a definite measurement-outcome would mean that after her measurement, it would not be correct for Wigner on the outside of the lab to describe the system as being in a superposition. This interpretation means believing that a “being with a consciousness must have a different role in quantum mechanics than the inanimate measuring device”, and hence that there must be “a violation of physical laws where consciousness plays a role” (Wigner, 1967, p. 181).

Yet, the laboratory experiment of Proietti et al. (2019) claims to have concretely realised Wigner’s thought-experiment. In this ‘real life’ experiment, the friend, isolated in her lab, measures the polarisation of a photon and records the outcome of her measurement; Wigner, outside of the lab, can then choose to either measure his friend’s record of her measurement-outcome (to attest to the ‘fact’ established by his friend), or to jointly measure both the friend’s record as well as the polarisation of the original photon (to establish his own ‘fact’).

In this ‘real life’ experiment, however, Wigner and his friend are not conscious observers, but pieces of machinery: they are measuring-and-recording devices. Proietti et al. (2019) argue that these devices can act as observers, defining an observer as any physical system that can extract information about another system (by means of an interaction) and can store that information in a physical memory. On this definition, computers and other devices can act as observers, just as humans can.

Now, what the experiment actually did was to use QM to calculate the probabilities of each of the possible measurement outcomes, and then compare these to the probabilities calculated from the experimental data obtained (1794 six-photon coincidence events, using 64 settings, over a total of 360 hours). The experimenters did this in order to test the violation of a Bell-type inequality, and the experiment was indeed successful in confirming its violation. Thus, the significance of the experiment in this sense was to further confirm the violation of Bell-type inequalities by quantum systems (even relatively large, complex ones) and to place stricter constraints on particular hidden variable interpretations of QM. But there are already many other experiments that have confirmed the violation of Bell-type inequalities by quantum systems (although under different conditions, and subject to different ‘loopholes’ and sources of error). And, there are already many other experiments that have confirmed that QM is not restricted in its domain to very small systems.

So, what is the philosophical interest in this particular experiment? The question is what this experiment demonstrates about QM that was not already known from the thought-experiment plus previous experimental results. Plausibly, what it shows is that a scenario analogous to the one imagined by Wigner is in fact physically possible, and in it the observers do record conflicting facts. Thus, the philosophical significance of the experiment is to make Wigner’s own interpretation of his thought-experiment look increasingly implausible: it is difficult to imagine that this experiment would not have been successful if the devices had conscious experiences.

But, on the other hand, the fact remains that these devices are not conscious, and so Wigner could stand resolute in his interpretation. If anything, he could point out that—in the same way that an observation of a non-black, non-raven provides a negligible sliver of confirmation for the claim that ‘all ravens are black’—the success of the experiment even provides inductive support in favour of his interpretation: the ‘observers’ in this experiment are able to record conflicting facts only because they do not experience these facts.

Reality Remains Intact
by Sean Carroll

Of course there is not a new experiment that suggests there’s no such thing as objective reality. That would be silly. (What would we be experimenting on?)

There is a long tradition in science journalism—and one must admit that the scientists themselves are fully culpable in keeping the tradition alive—of reporting on experiments that (1) verify exactly the predictions of quantum mechanics as they have been understood for decades, and (2) are nevertheless used to claim that a wholesale reimagining of our view of reality is called for. This weird situation comes about because neither journalists nor professional physicists have been taught, nor have they thought deeply about, the foundations of quantum mechanics. We therefore get situations like the present one, where an intrinsically interesting and impressive example of experimental virtuosity is saddled with a woefully misleading sales pitch.

My own preferred version of quantum mechanics is the Everett, or Many-Worlds formulation. It is a thoroughly realist theory, and is completely compatible with the experimental results obtained here. Thus, we have a proof by construction that this result cannot possibly imply that there is no objective reality. I am fairly confident that other realist approaches—hidden-variables models such as Bohmian mechanics, or dynamical-collapse models such as GRW theory—can offer equally satisfactory ways of interpreting this result without sacrificing objective reality, but I’m not confident in my ability to give such an account myself, so I’ll stick to the Everettian story.

Many-Worlds is a simple theory: there are wave functions, and they evolve smoothly according to the Schrödinger equation. Wave functions generally describe superpositions of what we think of as possible measurement outcomes, such as “horizontal” or “vertical” polarizations of a photon. The traditional “collapse of the wave function,” where an observer sees a unique measurement outcome, is replaced by decoherence and branching. That is, once a quantum superposition becomes entangled with a macroscopic system, that entanglement spreads to the environment (effectively irreversibly). If the measurement apparatus included a physical pointer indicating different possible results, that pointer cannot help but interact differently with the photons suffusing the room it’s in, depending on where it’s pointing. The pointer is now entangled with its environment.

That’s decoherence, and it implies that the two parts of the superposition now describe separate, non-interacting worlds, each of which includes observers who see some definite measurement outcome. The separate worlds aren’t put in by hand; they were always there in the space of all possible wave functions, and Schrödinger’s equation naturally brings them to life. If you believe a photon can be in a superposition, it’s not much of a conceptual leap to believe that the universe can be.
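
Schematically (a standard illustration, not specific to this experiment), decoherence is just unitary entanglement with the environment:

```latex
\[
\bigl(\alpha\,|H\rangle + \beta\,|V\rangle\bigr)\otimes|E_0\rangle
\;\longrightarrow\;
\alpha\,|H\rangle\,|E_H\rangle \;+\; \beta\,|V\rangle\,|E_V\rangle,
\qquad \langle E_H | E_V \rangle \approx 0.
\]
```

Because the environment states are effectively orthogonal, the two terms can no longer interfere; they behave as the separate, non-interacting branches described above.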

The experiment under question here is a version of Wigner’s Friend. The idea is to illustrate the possibility that observers in a quantum world can obtain measurement results, or “facts,” that are seemingly inconsistent with each other. One person, the “friend,” observes the polarization of a photon and obtains a result. But from the perspective of Wigner, both the photon and the friend appear to be in a superposition, and no measurement outcome has been obtained. How can we reconcile the truth of both perspectives while maintaining a belief in objective reality?

It’s pretty easy, from a Many-Worlds perspective. All we have to do is ask whether the original quantum superposition became entangled with the external environment, leading to decoherence and branching of the wave function. If it did, there are multiple copies of both Wigner and his friend. If it did not, it’s not really accurate to say that a measurement has taken place.

In the experiment being discussed, branching did not occur. Rather than having an actual human friend who observes the photon polarization—which would inevitably lead to decoherence and branching, because humans are gigantic macroscopic objects who can’t help but interact with the environment around them—the “observer” in this case is just a single photon. For an Everettian, this means that there is still just one branch of the wave function all along. The idea that “the observer sees a definite outcome” is replaced by “one photon becomes entangled with another photon,” which is a perfectly reversible process. Reality, which to an Everettian is isomorphic to a wave function, remains perfectly intact.

Recent years have seen an astonishing increase in the precision and cleverness of experiments probing heretofore unobserved quantum phenomena. These experiments have both illustrated the counterintuitive nature of the quantum world, and begun to blaze a trail to a new generation of quantum technologies, from computers to cryptography. What they have not done is to call into question the existence of an objective reality. Such a reality may or may not exist (I think it does), but experiments that return results compatible with the standard predictions of quantum mechanics cannot possibly overturn it.

Keep Calm, Quantum Mechanics has not Rejected Objective Reality
by Dustin Lazarovici

A group of physicists claims to have found experimental evidence that there are no objective facts observed in quantum experiments. For some reason, they have still chosen to share the observations from their quantum experiment with the outside world.

There is a lot wrong with the paper, so let me focus on the most critical points. First of all: what the experiment actually tested has little to do with the existence or non-existence of objective facts. It rather shows that the outcomes of different possible “Wigner’s friend-type” measurements cannot be predetermined, independent of what measurements are actually performed. This should come as no surprise to anyone familiar with quantum foundations, as similar results have been established many times before (by various so-called “no hidden variables theorems”). In particular, it doesn’t mean that measurement outcomes, once obtained, are not objective. It rather reminds us that a measurement is not a purely passive perception but an active interaction that “brings about” a particular outcome and can affect the state of the measured system in the process.

Even from a logical point of view, the argument in the paper doesn’t hold water. Proietti et al. test a version of the Bell inequality whose violation, in different settings, has already been confirmed by various other experiments. They claim (but never prove) that their inequality follows from three assumptions: locality (simply put: distant simultaneous measurements cannot affect each other), “free choice” (simply put: the experimentalists can freely choose what they measure), and “observer-independent facts” (whatever this means). Now, the original Bell inequality is derived from only the first two assumptions, locality and free choice. Hence, it’s already well-established that at least one of these assumptions is violated by quantum phenomena. (Indeed, the extended Wigner’s friend experiment does involve nonlocality; it is carried out on entangled systems, and the measurement of Alice’s friend can instantaneously affect the outcome obtained by Bob’s friend and/or vice versa.) So how could the violation of the Bell inequality in the extended Wigner’s friend scenario challenge the assumption of “observer-independent facts”? Well, it can’t, and it doesn’t. Not any more than the experimental falsification of an inequality derived from the assumptions that 2+2=5 and that observer-independent facts exist would.

On a more general note, the entire Wigner’s friend craze is a bit silly. In effect, Wigner’s friend is little more than a rendition of the famous Schrödinger cat paradox, and any precise quantum theory that solves the Schrödinger cat paradox (also known as the “measurement problem” of quantum mechanics) has no difficulties providing a precise and objective description of “extended Wigner’s friend experiments”. My colleague Mario Hubert and I have discussed this in detail for the example of Bohmian mechanics, a quantum theory that grounds the predictions of standard quantum mechanics in an ontology of point particles and precise mathematical equations. In particular, in Bohmian mechanics, the state of a system is not described by the wave function alone: the system also has a definite particle configuration, even if its wave function is in a superposition. This provides a clear and simple solution to both Schrödinger’s cat and the Wigner’s friend “paradox.”

To their credit, the authors are more or less acknowledging this in their discussion, writing:

[O]ne way to accommodate our result is by proclaiming that “facts of the world” can only be established by a privileged observer—e.g., one that would have access to the “global wavefunction” in the many worlds interpretation or Bohmian mechanics.

But Bohmian mechanics and Many-Worlds theories have nothing to do with “privileged observers.” The whole point of these theories is to provide an objective description of the quantum world in which observers have no distinguished role in the first place but are treated just like any other physical system (that’s why John Bell called them “quantum theories without observer”). In doing so, both Bohmian mechanics and the Many-Worlds theory use, of course, an objective wave function that describes the experiment in its entirety. If the authors assume, instead, that wave functions describing the state of quantum systems are subjective, defined relative to different observers (and mind you, some of the “observers” in this case are just photons!), it is not at all surprising that they end up with inconsistent or observer-dependent facts. They should just not suggest that their experiment provides corroboration for this bizarre and ultimately solipsistic view.

In my opinion, the paper does indeed raise some important questions, though they are mostly sociological ones. For instance: Why does physics tend to get exposure and attention merely for making outlandish claims, regardless of their scientific substance? And why do even many experts tend to abandon rational and critical standards when it comes to quantum mechanics? Why, in other words, have we gotten so used to quantum physics being crazy that even the most outlandish claims come with a presupposition of plausibility and relevance?

As a matter of fact, quantum mechanics can be as clear and rational as any respectable theory that came before it. You just have to do it right.

If There Is No Objective Physical World Then There Is No Subject Matter For Physics
by Tim Maudlin

The MIT Technology Review article that occasions this discussion has the rather astounding title “A quantum experiment suggests there’s no such thing as objective reality”. One could be rightly puzzled about how any experiment could suggest any such thing, since the existence of “objective reality” seems to be a pre-condition for the existence of experiments in the first place.

The abstract is perhaps slightly more promising: “Physicists have long suspected that quantum mechanics allows two observers to experience different, conflicting realities. Now they’ve performed the first experiment that proves it.” After all, familiar optical illusions permit different observers to “experience different, conflicting realities” in the sense of conflicting apparent realities. Of course, in such a case at least one of the “perceived realities” is indeed illusory, since they cannot both be veridical and also conflicting on pain of violating the Law of Non-Contradiction.

But further perusal of the article dashes any hope of anything comprehensible in this way. The experiments in question are done on a system composed of only six photons. Obviously the photons do not experience anything at all, much less conflicting realities. What in the world is going on?

In short, the way that this experiment is described—in terms of its significance—is complete nonsense. Physicists have become accustomed to spouting nonsense when quantum mechanics is the subject of discussion, which often takes the form of mind-blowing assertions about the loss of “classical reality” or even “classical logic”. The reason we know that all of this is nonsense right off the bat is that the experimental predictions of standard quantum mechanics can be accounted for—in several different ways—by theories that postulate an objective, unique physical reality governed by definite laws and using only classical logic and mathematics. So when the sorts of claims made in the title and abstract of the article are made, one knows immediately that they are unjustified hype.

But surely some sort of interesting experiment was done! Yes, indeed. The experiment is of the same general sort as has been done for the last half-century, beginning with John Clauser and Alain Aspect, and continued by many other experimentalists including Anton Zeilinger. All of these are usually, and accurately, described as tests of violations of Bell’s Inequality, the epochal discovery of John Stewart Bell. What Bell showed is that certain correlations between the outcomes of distant experiments cannot be predicted or explained by any theory that satisfies a certain precise locality condition—a condition one would expect any fundamentally Relativistic theory to obey. The fact that quantum theory predicts violations of Bell’s Inequality has been called quantum non-locality, and the increasingly precise and exacting experiments done over the past half-century have all confirmed the quantum predictions, as does this experiment.

All of this is even spelled out in the article itself: “But there are other assumptions too. One is that observers have the freedom to make whatever observations they want. And another is that the choices one observer makes do not influence the choices other observers make—an assumption that physicists call locality.” That is, in order to account for the outcome of this experiment, one has to deny that physical reality is local in Bell’s sense. (This gloss on the locality condition is not accurate, but leave that aside.) That is something we have known for 50 years.

What about “objective reality” and “Wigner’s friend” and what-not? Well, the non-local theories that we have—pilot wave theories such as Bohm’s theory, objective collapse theories such as the Ghirardi-Rimini-Weber theory, and the Many Worlds theory of Hugh Everett—all postulate a single objective reality. In the proper sense of “conflicting”, none of them allow for observers to observe “conflicting realities” (although in the Many Worlds theory observers have experimental access only to a small part of the objective reality). And of course, all of these theories are non-local, as Bell requires.

Now suppose that, for some obscure reason, one were dead-set against accepting Bell’s theoretical work and all of the experiments that have been done. Suppose, in other words, one were dead-set on maintaining that the physical world is local in the face of all the experimental evidence that it isn’t. How might that be done?

It seems rather desperate but I suppose one might go so far as denying the very existence of any objective physical reality at all. Or, as I sometimes put it, “Nothing really exists, but thank God it is local”. But as should be obvious, this accomplishes nothing. If there is no objective physical world then there is no subject matter for physics, and no resources to account for the outcomes of experiments.

There are many good books that correctly and clearly exposit the situation, including David Albert’s Quantum Mechanics and Experience, Travis Norsen’s Foundations of Quantum Mechanics, Peter Lewis’s Quantum Ontology, Jean Bricmont’s Understanding Quantum Mechanics and Quantum Sense and Nonsense, and (coincidentally) my own Philosophy of Physics: Quantum Theory, which happens to go on sale on March 19.

Objective reality is safe and sound. We can all sleep well.

Quantum Theory Confirmed Again
by Wayne Myrvold

Headline news! Stop the presses! A group of experimenters did an experiment, and the results came out exactly the way that our best physical theory of such things says they should, just as everyone expected. Quantum Theory Confirmed Again.

That’s what actually happened, though you’d never know it from the clickbait headline: A quantum experiment suggests there’s no such thing as objective reality [1].

The experiment [2] was inspired by a recent paper by Časlav Brukner, entitled “A No-Go Theorem for Observer-Independent Facts” [3]. The abstract of the paper reporting on the experiment proclaims, “This result lends considerable strength to interpretations of quantum theory already set in an observer-dependent framework and demands for revision of those which are not.”

Here’s a nice fact about claims of this sort: when you see one, you can be sure, without even going through the details of the argument, that any conclusion to the effect that the predictions of quantum mechanics are incompatible with an objective, observer-independent reality, is simply and plainly false. That is because we have a theory that yields all of the predictions of standard quantum mechanics and coherently describes a single, observer-independent world. This is the theory that was presented already in 1927 by Louis de Broglie, and was rediscovered in 1952 by David Bohm, and is either called the de Broglie-Bohm pilot wave theory, or Bohmian mechanics, depending on who you’re talking to. You can be confident that, if you went through the details of any real or imagined experiment, then you would find that the de Broglie-Bohm theory gives a consistent, observer-independent, one-world account of what happens in the experiment, an account that is in complete accord with standard quantum mechanics with regard to predictions of experimental outcomes.

There are other theories, known as dynamical collapse theories, that also yield accounts of a single, observer-independent reality. These theories yield virtually the same predictions as standard quantum mechanics for all experiments that are currently feasible, but differ from the predictions of quantum mechanics for some experiments involving macroscopic objects.

Much of the confusion surrounding quantum mechanics, which leads smart people to say foolish things, stems from the fact that, in the usual textbook presentations, we are not presented with a coherent physical theory. Typical textbook presentations incorporate something that is called the “collapse postulate.” This postulate tells you that, at the end of an experiment, you dispense with the usual rule for evolving quantum states, and replace the quantum state by one corresponding to the actual outcome of the experiment (which, typically, could not have been predicted from the quantum state).
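
In textbook form (the standard projection postulate), the rule is:

```latex
\[
|\psi\rangle \;\xrightarrow{\ \text{outcome } k\ }\; \frac{P_k\,|\psi\rangle}{\lVert P_k\,|\psi\rangle\rVert},
\qquad \Pr(k) = \lVert P_k\,|\psi\rangle\rVert^{2},
\]
```

where \(P_k\) projects onto the eigenspace associated with outcome \(k\). Nothing in the postulate itself says when this replacement, rather than the usual Schrödinger evolution, is to be applied; that is the vagueness at issue below.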

If we want to apply the collapse postulate, we need guidance as to when to apply it, and when to use the usual quantum dynamics. Standard textbooks are invariably vague on this. In practice, this vagueness tends not to matter much. But a thought-experiment devised by Eugene Wigner [4] imagines a situation in which it does matter. Brukner’s thought-experiment is a combination of Wigner’s thought-experiment and tests of Bell inequalities.

Brukner’s version of the thought-experiment involves a pair of hermetically sealed labs, each containing an observer playing the role of Wigner’s friend, and an observer outside each of these labs. Each outside observer has a choice of experiments to do. One choice of experiment amounts to asking the friend what result was obtained, the other, to the sort of experiment Wigner is imagined to do. Brukner considers a situation in which an assumption of locality would entail the existence of pre-existing values for the results of both experiments, which are merely revealed if the experiment is done. His thought-experiment involves an entangled state of the labs for which this is in conflict with the quantum-mechanical statistical predictions. But we already know that any theory that reproduces the probabilistic predictions of quantum mechanics is going to have to reject any locality assumption that leads to Brukner’s conclusion; this is Bell’s theorem (see [5]). Moreover, in spite of the title of his paper, “A No-Go Theorem for Observer-Independent Facts,” Brukner explicitly mentions both of the ways that we’ve discussed—the de Broglie-Bohm theory, and collapse theories—for there to be observer-independent facts.

If we have a theory that tells us whether the quantum state collapses, and, if so, when it does, then that theory can be applied both to the Wigner-Brukner thought-experiment and to the actual experiment of Proietti et al. The de Broglie-Bohm theory will predict the same thing as standard quantum mechanics for both. Collapse theories predict the result of the Proietti et al. experiment, but predict a departure from the predictions of any no-collapse theory for the full-blown Wigner-Brukner thought-experiment, if it could be realized.

There’s nothing new here, and nothing that prompts revision of any existing theory of quantum phenomena set in an observer-independent framework.


[1] “A quantum experiment suggests there’s no such thing as objective reality,” MIT Technology Review, March 12, 2019.
[2] Proietti, Massimiliano, et al., “Experimental rejection of observer-independence in the quantum world.” arXiv:1902.05080v1 [quant-ph].
[3] Brukner, Časlav, “A No-Go Theorem for Observer-Independent Facts,” Entropy 2018, 20(5), 350.
[4] Wigner, Eugene, “Remarks on the mind-body question,” in The Scientist Speculates, I. J. Good (ed.). London: Heinemann, 1961: 284–302.
[5] Myrvold, Wayne, Marco Genovese, and Abner Shimony, “Bell’s Theorem.” The Stanford Encyclopedia of Philosophy (Spring 2019 Edition), Edward N. Zalta (ed.).

Discussion welcome.

The post Philosophers On a Physics Experiment that “Suggests There’s No Such Thing As Objective Reality” appeared first on Daily Nous.

Ibn Khaldun on the Political and Epistemic Dangers of Astrology

Published by Anonymous (not verified) on Wed, 20/03/2019 - 11:40pm in



Astrologers think that astrology...enables them to know the things that are going to be in the world of the elements, before they are created. The positions of the spheres and the stars (are) thus (taken to) indicate every single kind of future event, both universal and individual.
The ancient (astrologers) were of the opinion that the knowledge of astral powers and influence is acquired through experience. It (thus) is something that all (human) lives combined would not be able to achieve, because experience is obtained through numerous repetitions which make the obtainment of (empirical) knowledge or conjectures possible. Astral revolutions may be very long. Greatly extended periods of time are required for their repetition. Even (all) the lives in the world (combined) would be too short for (observing) them.
Ptolemy and his followers were of the opinion that the stars are able to indicate (the future) as the natural result of a temper they produce in the elemental existing things...
It makes the weakness of the achievements of astrology clear. Knowledge of, or conjectures about, things that come into being can only result from knowledge of all their causes, that is, agent, recipient, form, and end, as has been explained in the proper place....Furthermore, the astral powers are not the sole agents. There are other powers that act together with (the astral powers) upon the material element (involved), such as the generative power of father and species contained in the sperm, the powers of the particular quality distinguishing each variety within the species, and other things. When the astral powers reach perfection and are known, they (still) are only one among many causes that go into the making of a thing that comes into being...

Such is the situation (even) if one's knowledge of the astral powers is accurate and without defect. Now, that is difficult. The ability to calculate the courses of the stars is required in order to know their positions. Moreover, it is not proven that every star has its own particular power....All this speaks against the assumption that it is possible to predict things that will happen in the world of the elements with the help of astrology.
Furthermore, it is wrong to assume that the stars exercise an influence on (the world) below them. It has been proven deductively in the chapter on the Oneness of God, as one has seen, that there is no agent but God....The divine power (would seem to) tie the two together, as it does with all created things, (both) high and low, especially since the religious law attributes all happenings to the power of God and does not want to have anything to do with anything else....
Prophecy also denies the importance and influence of the stars....Thus, the worthlessness of astrology from the point of view of the religious law, as well as the weakness of its achievements from the rational point of view, are evident. In addition, astrology does harm to human civilization. It hurts the faith of the common people when an astrological judgment occasionally happens to come true in some unexplainable and unverifiable manner. Ignorant people are taken in by that and suppose that all the other (astrological) judgments must be true, which is not the case. Thus, they are led to attribute things to some (being) other than their Creator.
Further, astrology often produces the expectation that signs of crisis will appear in a dynasty. This encourages the enemies and rivals of the dynasty to attack (it) and revolt (against it). We have (personally) observed much of the sort. It is, therefore, necessary that astrology be forbidden to all civilized people, because it may cause harm to religion and dynasty.--Ibn Khaldun, Muqaddimah, Book 6, Para 29, Translated by Franz Rosenthal.

A few days ago, I claimed that Ibn Khaldun was flattering Timur in his meeting with Timur. One (further) reason to think that he is flattering is that in the meeting he treats Timur's rise as astrologically foretold, yet in the Muqaddimah he is very critical of astrology. He is clearly assuming that, while he is famous for the (massive) book, the folk eager to consult him about his expertise will not have really read it carefully. In the chapter (partially quoted), Ibn Khaldun offers five main reasons to reject astrology:

  1. Empirically, the cycles of the heavens are too long for proper induction over them, such that their patterns could be mapped onto the complexity of human affairs.
  2. So in virtue of this we lack the kind of causal knowledge of all the relevant mechanisms that would make accurate prediction possible.
  3. There is no source in revelation that gives us access to the relevant data and causes. 
  4. He calls attention to religious prohibition against astrology in some authoritative hadiths. 
  5. Astrology is politically dangerous because it encourages crisis thinking.

The arguments underwriting Ibn Khaldun's first two reasons are clearly indebted to Al-Ghazali's views about empirical science (recall). The third one (no revelation) blocks a kind of move available to somebody inspired by Al-Ghazali, who thinks that, in the context of discovery, would-be scientists require guided intuition of structure (or special properties). And that guidance, or inspiration, just is prophecy. Ibn Khaldun can grant the existence of prophecy (he does so in the Muqaddimah); all he needs for present purposes is that prophecy/guided intuition of structure is not sufficient to ground the kind of knowledge presupposed in astrology. (I omitted the details of his argument in the block quote--so you gotta trust me on this one.)

As an aside, I am fairly confident that Al-Ghazali is in the background here, because in this extended discussion Ibn Khaldun relies explicitly on a kind of occasionalist view of human affairs: there is no agent but God. Ibn Khaldun explicitly notes (in the quoted passage) that philosophy (deductive reason) and religious law (sharia) agree on this point. While others have defended occasionalism, among those with a philosophical sensibility it is very much associated with Al-Ghazali.

The political argument against astrology is two-fold: first, it undermines civic religion -- here understood, in part, as a social practice like history conducive to political flourishing (recall) -- because when it gets lucky, it creates a new source of authority in the eyes of the people. For Ibn Khaldun thinks that royal authority/sovereignty is founded on group cohesion, but this always presupposes a willing self-subordination to authority. This self-subordination is grounded in the (tacit) good opinion of the authority. (Of course, it's also shaped by it.) We may say that (recall) astrology risks becoming an influential political idol.

Second, astrology generates what we may call crisis thinking. People are encouraged to be on the look-out for extrinsic portents and signs that spell trouble. (I use 'extrinsic' to distinguish this from ordinary prudence and precautionary principles, which Ibn Khaldun praises.) As Socrates (Timaeus 40D) puts it, alarming portents of the things which shall come to pass hereafter become, Ibn Khaldun argues, coordination mechanisms and rallying cries for dissatisfaction and would-be revolts. And so they become an inherent source of instability. This is especially so because the purported meanings of signs are inherently malleable.

We may see here Ibn Khaldun challenging the art of statecraft as handed down by the ancients (Greek, Persian, Chinese, etc.) more generally. To sum up and simplify, the various competing schools taught the intellectual elites either to take control of divination and astrology or to accept them and make them into an instrument of statecraft. Ibn Khaldun argues that this is grounded on a mistake that undermines stability. Because the smart and educated benefit from the market for astrologers, this helps explain (recall) why Ibn Khaldun is so critical of the role of the learned in political life.



Philosopher Finnur Dellsén Wins Nils Klim Prize

Published by Anonymous (not verified) on Thu, 14/03/2019 - 11:59pm in

Finnur Dellsén, associate professor of philosophy at the University of Iceland and part-time associate professor at the Inland Norway University of Applied Sciences, is the 2019 winner of the Nils Klim Prize.

The Nils Klim Prize is 500,000 Norwegian kroner (approximately $58,000). It is “awarded annually to a younger Nordic researcher who has made an outstanding contribution to research in the arts and humanities, social science, law or theology.” The award is sponsored by the Norwegian government and administered by the University of Bergen.

The prize committee praises Dellsén as an “outstanding and original philosopher” working at “the cutting edge of his field” who “writes about highly complex philosophical matters in a clear and concise style.” Last year he was awarded the Lauener Foundation Prize for Up-and-Coming Philosophers.

Dellsén works in philosophy of science and epistemology. The prize committee writes:

Dellsén pursues a cluster of questions in relation to the epistemic status of scientific realism: the problem of accounting for explanatory rivalry in the context of inferences to the best explanation; the distinction between accepting a scientific theory and believing it; and whether all scientific explanations are causal explanations. In addition, Dellsén shows how, from a realist perspective, the acceptance of a theory depends on social and historical factors, such as the structure of scientific communities, scientific dogmatism and the time that has passed since a theory first was accepted. In his recent works, Dellsén pursues the question of what constitutes scientific progress, arguing that it should be conceived in terms of the advancement of scientific understanding, instead of increased knowledge. Dellsén convincingly claims that the distinction between knowledge and understanding lies in the fact that understanding, as opposed to knowledge, requires neither belief nor justification. Through this line of argument, Dellsén is likely to move the primary focus of epistemology towards the conditions of understanding and the nature of acceptance of scientific explanations.

You can learn more about Dellsén’s work at his website, and more about his winning of the prize here.

(Thanks to Ole Hjortland for bringing this news to my attention.)

©Kristinn Ingvarsson

The post Philosopher Finnur Dellsén Wins Nils Klim Prize appeared first on Daily Nous.

How Science Can Get the Philosophy It Needs

Published by Anonymous (not verified) on Thu, 07/03/2019 - 4:32am in

A recent essay in the Proceedings of the National Academy of Sciences (PNAS) by an interdisciplinary group of scholars argues that philosophy has had “an important and productive impact on science” and provides recommendations for how to facilitate cooperation between philosophers and scientists.

The authors of “Why Science Needs Philosophy” are Lucie Laplane (CNRS), Paolo Mantovani (Roehampton), Ralph Adolphs (Caltech), Hasok Chang (Cambridge), Alberto Mantovani (Humanitas), Margaret McFall-Ngai (Hawai’i), Carlo Rovelli (Aix-Marseille), Elliott Sober (Wisconsin), and Thomas Pradeu (CNRS, University of Bordeaux, Sorbonne).

They provide specific examples of major contributions philosophy has made to scientific work in areas such as stem cell research, cancer treatment, immunology, the microbiome, cognitive science, and others. These examples, they write, lead them to “see philosophy and science as located on a continuum”:

Philosophy and science share the tools of logic, conceptual analysis, and rigorous argumentation. Yet philosophers can operate these tools with degrees of thoroughness, freedom, and theoretical abstraction that practicing researchers often cannot afford in their daily activities. Philosophers with the relevant scientific knowledge can then contribute significantly to the advancement of science at all levels of the scientific enterprise from theory to experiment as the above examples show.

They note, though, that scientists often do not see the value in philosophical work. This may be owed, in part, to lack of familiarity and exposure. What can be done to bring philosophers and scientists together?

They issue the following recommendations:

  1. Make more room for philosophy in scientific conferences. This is a very simple mechanism for researchers to assess the potential usefulness of philosophers’ insights for their own research. Reciprocally, more researchers could participate in philosophy conferences, expanding on the efforts of organizations such as the International Society for the History, Philosophy, and Social Studies of Biology; the Philosophy of Science Association; and the Society for Philosophy of Science in Practice.

  2. Host philosophers in scientific labs and departments. This is a powerful way (already explored by some of the authors and others) for philosophers to learn science and provide more appropriate and well-grounded analyses, and for researchers to benefit from philosophical inputs and acclimatize to philosophy more generally. This might be the most efficient way to help philosophy have a rapid and concrete impact on science.

  3. Co-supervise PhD students. The co-supervision of PhD students by a researcher and a philosopher is an excellent opportunity to make possible the cross-feeding of the two fields. It facilitates the production of dissertations that are both experimentally rich and conceptually rigorous, and in the process, it trains the next generation of philosopher-scientists.

  4. Create curricula balanced in science and philosophy that foster a genuine dialogue between them. Some such curricula already exist in some countries, but expanding them should be a high priority. They can provide students in science with a perspective that better empowers them for the conceptual challenges of modern science and provide philosophers with a solid basis for the scientific knowledge that will maximize their impact on science. Science curricula might include a class in the history of science and in the philosophy of science. Philosophy curricula might include a science module.

  5. Read science and philosophy. Reading science is indispensable for the practice of philosophy of science, but reading philosophy can also constitute a great source of inspiration for researchers as illustrated by some of the examples above. For example, journal clubs where both science and philosophy contributions are discussed constitute an efficient way to integrate philosophy and science.

  6. Open new sections devoted to philosophical and conceptual issues in science journals. This strategy would be an appropriate and compelling way to suggest that the philosophical and conceptual work is continuous with the experimental work, in so far as it is inspired by it, and can inspire it in return. It would also make philosophical reflections about a particular scientific domain much more visible to the relevant scientific community than when they are published in philosophy journals, which are rarely read by scientists.

Thoughts on these recommendations, information about current examples of them, and suggestions for other ways of encouraging “a renaissance in the integration of science and philosophy” are welcome.

Jiyong Lee, “Segmentation Series 10”

The post How Science Can Get the Philosophy It Needs appeared first on Daily Nous.

Israel, the Nazis and the Condemnation of Racial Intermarriage

Published by Anonymous (not verified) on Thu, 28/02/2019 - 3:28am in

A little while ago I wrote a piece about how the Raelians’ original design for their embassy in Jerusalem was becoming increasingly accurate as a symbol of the increasingly fascistic nature of the Israeli state and its persecution of the Palestinians. The Raelians are a new religious movement, a sect that believes its leader and founder, Rael, real name Claude Vorilhon, was contacted and given a message for humanity by space aliens. These extraterrestrials, according to Rael, are the Elohim, one of the names for the Lord in the book of Genesis in the Bible. According to Rael, these aliens are due to return to Earth, where they will bring about a new age of peace and prosperity. Under their guidance, only certified geniuses will be allowed to rule, and all the menial work will be done by a specially genetically engineered slave race. The Holy City was chosen as the site of their embassy because that’s where Rael and his followers expect the Elohim to land and establish their centre of power on Earth. The society’s belief in ‘geniocracy’ – rule by the intelligent – has left it open to accusations of fascism. An accusation that probably wasn’t helped when they chose this as the design for their embassy in Jerusalem.

Yes, you’re seeing this correctly: it is a swastika in a Magen Dawid, a Star of David. And no, I don’t know why they chose this design. I suspect it’s because Rael, like a number of other new religious movements and occult sects since the 19th century, may have been impressed by and drawn on eastern spirituality. In Hinduism and Buddhism, the swastika is a symbol of good. It also used to be like that over here before the rise of the Nazis. I think there’s even a town of Swastika in Canada, or there was.

Obviously, this didn’t go down at all well with the Israelis, who were justifiably and understandably outraged. The Raelians were forced to change their design, which is now a nice swirly galaxy in the Star of David instead.

But the symbol nevertheless suits the Israeli state, as it becomes more racist and Fascistic. And that Fascism has become blatant with Netanyahu’s new choice of coalition partners. A few days ago, the dedicated Jewish anti-Fascist and anti-Zionist, Tony Greenstein, blogged about how Netanyahu had made the Otzma Yehudit merge with another far right party, Jewish Home, which represents the settlers, so that they could join his wretched Likud in a governing coalition. Otzma Yehudit’s name means ‘Jewish Power’ in Hebrew, and they are a Jewish Nazi party. It’s led by Michael Ben Ari, who takes his ideology from Meir Kahane’s wretched Kach, which was outlawed as a terrorist group. Kahane and his followers demanded the following:

– Revocation of non-Jewish citizenship.
– Expulsion of non-Jews from Jerusalem and eventually Israel.
– The eventual imposition of slavery on Arabs and other non-Jews.
– Prohibition of contact between Jews and Arabs, including sexual relations.
– Segregated beaches.
– Prohibition of non-Jews living in Jewish neighborhoods.
– Forced dissolution of all intermarriages.

In 1988 Kach was banned by the Israeli Supreme Court when it looked like gaining four to eight seats in the Knesset.

Greenstein notes that not only did this come straight out of the Nazis’ vile Nuremberg Laws, but it also did little more than codify existing Israeli legislation.


Since 1948 successive Israeli governments have tried to forbid intermarriage between Arabs and Israelis. Mixed marriages are not recognised by Orthodox Judaism, the religion of the Israeli state. Which is one of the factors contributing to the outrage a little while ago when a couple of Israeli celebrities, who were respectively Jewish and Palestinian, got married, with the Jewish partner converting to Islam. Greenstein has also revealed on his blog that a number of municipalities in Israel are so keen to stop relationships between Jews and Arabs, that they are running courses in conjunction with the local police and religious organisations to discourage Jewish women from going out with Palestinian men.

The Nazis were also concerned to prevent intermarriage between Germans and those of what they considered to be inferior races, such as Poles and other Slavs. They were most fervently against gentile and Jewish Germans intermarrying. And the Jews also weren’t alone in being forced to wear identifying marks, in their case the Star of David. The Nazis developed a system of badges for the prisoners in the concentration camp, which identified the offence for which they were incarcerated. Gay men notoriously wore a pink triangle. The Gypsies, I think, were forced to wear a brown one. Red triangles were worn by socialists, Communists, Anarchists, other political dissidents and Freemasons.

There were also identification badges for ‘Jewish race defilers’. Men had to wear this

While women were identified by this badge

Clearly this represents the Nazis’ criminalisation of racial intermixing and the shaming of those whose only crime was that they were Jewish and married to, or in a relationship with, a non-Jewish German. I also wonder if it was also foisted on non-Germans, who were incarcerated because of their marriage to a Jew.

But the Israelis are also attempting to discourage intermarriage between Jewish and non-Jewish citizens, and if Otzma Yehudit get their way, such liaisons will be criminalised. In which case I wonder if those convicted of such crimes will also have to wear similar vile symbols. 

Australian court's historic rejection of coal mine highlights the impact of climate change

Published by Anonymous (not verified) on Wed, 27/02/2019 - 2:01pm in


Rio Tinto's Mount Thorley mine, Hunter Valley 2014 – Courtesy Lock the Gate Alliance Flickr account (CC BY 2.0)

In a groundbreaking decision, a court has used the impact on climate change as one of the reasons for rejecting a new coal mine in Australia. The proposed Rocky Hill mine is near Gloucester in New South Wales’ Hunter Valley. Chief Justice Brian Preston of NSW's Land and Environment Court said that the open-cut mine “would be in the wrong place at the wrong time”.

He concluded that:

The construction and operation of the mine, and the transportation and combustion of the coal from the mine, will result in the emission of greenhouse gases, which will contribute to climate change.

Members of the local community activist group Groundswell Gloucester were ecstatic. Environmental lawyer Elaine Johnson shared their joy on Twitter:

Lawyers, lobbyists, academics, environmentalists and economists have responded online about the implications of the historic judgment. Environmental activist John Englart framed a global message via Twitter:

Environmental law academic Justine Bell-James explored the future for climate-based litigation at The Conversation, concluding:

It is hard to predict whether his decision will indeed have wider ramifications. Certainly the tide is turning internationally – coal use is declining, many nations have set ambitious climate goals under the Paris Agreement, and high-level overseas courts are making bold decisions in climate cases.

Economist John Quiggin also looked at some of the longer term implications of the decision:

[…] miners will sooner or later face demands for compensation for the damage caused by climate change.

The strongest case will be against mines that have commenced operation after the need to leave remaining reserves in the ground was already clear. Anyone considering investing in, lending to or insuring such mines should be prepared for more decisions like Rocky Hill.

International professional services firm, Herbert Smith Freehills, offers to ‘help you realise opportunities while managing risk’. It has this advice for current and potential clients:

Proponents seeking consent for new projects, or modifications of existing projects, with ‘material’ greenhouse gas emissions across all industries in NSW should carefully assess climate change impacts, particularly if the proposal is not ‘carbon neutral’.

On the same day, leading Australian law firm Corrs Chambers Westgarth reinforced this view:

Future proponents will need to seriously consider the decision, as will banks and others who would traditionally invest in or support coal and other fossil fuel-dependent industries.

It is possible that the increasing recognition of causative links between fossil fuel developments and climate change could pave the way for future compensation claims of the kind now being seen in the United States.

Nevertheless, there are online climate skeptics challenging the science, with allies in the mainstream media such as Rupert Murdoch’s publications:

Meanwhile Swiss-based coal mining multinational Glencore has stated that it will cap its worldwide coal production for environmental reasons. Glencore is Australia’s largest coal producer.

The rejection of the mine may still be overturned by a legal appeal or government legislation. However, the risks of climate litigation for international greenhouse gas emitters are unlikely to go away in the near future.

The NSF and the Rise of Value-Free Philosophies of Science (guest post by Joel Katzav & Krist Vaesen)

Published by Anonymous (not verified) on Tue, 19/02/2019 - 12:47am in

Why were social, moral and political issues relatively neglected in philosophy of science during the 20th Century? Joel Katzav (Queensland) and Krist Vaesen (Eindhoven) continue their investigation of the institutional and sociological influences on the history and development of analytic philosophy in the following guest post.*

[Ad Reinhardt, Untitled (Red and Gray)]

The National Science Foundation and the Rise of Value-Free Philosophies of Science
by Joel Katzav & Krist Vaesen

In a series of papers that appeared in 2017 and 2018, we have shown that an important part of the explanation for the rise of analytic philosophy in the twentieth century was the takeover by analytic philosophers of generalist, British and American philosophy journals and the subsequent use of these journals in order to marginalize non-analytic approaches to philosophy (see here, here and, for an overview, here (published) or here (preprint)). In our most recent paper, we extend this work on the emergence of analytic philosophy in two ways. We show that at least one important funding body was also used to marginalize non-analytic philosophy and we examine how such marginalization affected the development of one specialization in philosophy, namely the philosophy of science (see here).

G.E. Moore by and large excluded philosophical psychology and work sympathetic to Neo-Hegelian idealism from Mind roughly in 1926, not long after he became the journal’s editor. In roughly 1948, analytic editors took over The Philosophical Review and turned it from a journal that was open to diverse approaches to philosophy into one that was basically only open to analytic philosophy. A similar, though slightly more complex story took place in The Journal of Philosophy just over a decade later. In these three cases, the primary form of philosophy excluded from publication was distinguished from analytic philosophy of the time by being speculative, that is, very roughly, by tending to make substantive claims about the world that are epistemically independent of established belief, including commonsense and science. Marginalized speculative work included Neo-Hegelian idealism, classical pragmatism, speculative phenomenology and existentialism, process philosophy and approaches to philosophy that grew out of these more familiar ones. The story of marginalization also occurred at other journals (e.g., at The Philosophical Quarterly in the late 1950s and at Philosophy and Phenomenological Research in 1980) and was supplemented by the creation of analytic-only journals (e.g., Analysis, Philosophical Studies and Noûs). If one looks at which journals were affected by these sectarian practices, one will find a very familiar set of journals; it is roughly the set of the journals that are the most prestigious journals in philosophy today.

In our more recent paper, we provide evidence for thinking that a similar kind of marginalization occurred at the level of one of Anglo-American philosophy’s sub-fields, namely the philosophy of science.

The post-WWII years saw the growth of U.S. government funding of science, including the growth of financial support for the sciences by the National Science Foundation (NSF). The first half of the 1950s saw political pressure, partly associated with McCarthyism, that meant there was hesitation to extend NSF funding to the social sciences. In the second half of the 1950s, however, these sciences acquired their own NSF program. The program included the sub-program “History and Philosophy of Science” (HPS). Philosophy of science, too, was in the money.

However, decisions about allocating NSF funding for philosophy of science were placed in the hands of logical empiricists. These philosophers, in their function as NSF advisors, allocated virtually all HPS money during the period 1958-1963 to value-free philosophies of science; similar preference for value-free philosophy of science is likely to have continued throughout the 1960s. Moreover, this occurred at a time when there were still many philosophers writing philosophy of science that did deal with social, moral and political concerns (value-laden approaches).

The allocation of NSF funds, together with the contemporaneous exclusion of value-laden approaches from the pages of Philosophy of Science (see here) and of pragmatism from The Journal of Philosophy (see here), contributed considerably to philosophy of science’s withdrawal from social concerns. Interestingly, some of the figures involved in marginalizing non-analytic work at The Philosophical Review and The Journal of Philosophy, namely Max Black and Sydney Morgenbesser, were also involved in what occurred at the NSF.

Our work on the HPS sub-program only allows us to say that value-laden philosophy of science was not funded by the HPS, a claim that is weaker than the claim that it only funded logical empiricist or analytic philosophy of science, though the identity of HPS advisors does suggest that this was also the case. It does seem that philosophy of science was following the same pattern found elsewhere in British and American philosophy.

That said, the pattern of marginalization in the philosophy of science had its own distinct characteristics. For example, NSF funding did not go to philosophy of science that had an historical dimension. While this fits what we find in, say, Mind at the time, it does not fit what we find in The Philosophical Review, which did publish a substantial amount of historical work in the 1950s. At least part of the reason for the difference is plausibly that, on the one hand, the logical empiricists at the NSF, like the then editor of Mind, Gilbert Ryle, had little sympathy for history-informed philosophy while, on the other hand, one of those who was an editor at The Philosophical Review when it decided it would no longer publish speculative philosophy, namely Gregory Vlastos, was an historian of philosophy. A deeper explanation for the difference is that what drove the sectarian practices of analytic philosophy was the opposition to speculative philosophy. Beyond this, there was some room for the opinions of influential individuals to decide for themselves what kind of philosophy would be tolerated. Philosophy of science, as it happens, was under the strong influence of individuals with a particularly narrow view of their specialization.

The post The NSF and the Rise of Value-Free Philosophies of Science (guest post by Joel Katzav & Krist Vaesen) appeared first on Daily Nous.

John McDonnell Outrages Tories with Comments about Churchill’s Villainy

John McDonnell kicked up a storm of controversy this week when, in an interview with the Politico website on Wednesday, he described Winston Churchill as a villain. McDonnell was answering a series of quick-fire questions, and the one about Churchill was ‘Winston Churchill. Hero or villain?’ McDonnell replied ‘Tonypandy – villain’. This referred to the Tonypandy riots of 1910, when striking miners were shot down by the army after clashing with the police. According to the I’s article on the controversy on page 23 of Wednesday’s edition, Churchill initially refused requests to send in the troops, instead sending a squad of metropolitan police. Troops were also sent in to stand in reserve in Cardiff and Swindon. Following further rioting, Churchill sent in the 18th Hussars. He later denied it, but it was widely believed that he had given orders to use live rounds. There’s still very strong bitterness amongst Welsh working people about the massacre. The I quoted Louise Miskell, a historian at Swansea University, who said that ‘He is seen as an enemy of the miners’.

Boris Johnson, who has written a biography of Churchill, was naturally outraged, declaring ‘Winston Churchill saved this country and the whole of Europe from a barbaric fascist and racist tyranny, and our debt to him is incalculable’. He also said that McDonnell should be ashamed of his remarks and withdraw them forthwith.

McDonnell, speaking on ITV news, said that although he didn't want to upset people, he would give the same answer again to that question if he was being honest, and said that he welcomed it if it prompted a more rounded debate about Churchill's role. He said that Churchill was undoubtedly a hero during the Second World War, but that this was not necessarily the case in other areas of his life. He said 'Tonypandy was a disgrace: sending the troops in, killing a miner, trying to break a strike, and other incidents in his history as well.'

The I then gave a brief list of various heroic and villainous incidents. These were

* Saving Britain from the Nazis and helping to lead the Allies to victory during the Second World War.

* Introducing the Trade Boards Bill of 1909, which established the first minimum wages system for various trades across the UK.

* Making the famous speech about an Iron Curtain coming down across Europe in 1946.

* According to his biographer, John Charmley, Churchill believed in a racial hierarchy and eugenics, and that at the top of this were White Protestant Christians.

* Saying that it was ‘alarming and nauseating’ seeing Gandhi ‘striding half-naked up the steps of the vice-regal palace.’ He also said ‘I hate Indians. They are a beastly people with a beastly religion’.

* Three million people died in the Bengal famine of 1943, during which Churchill refused to send food supplies.

It’s in the context of the Bengal famine that Churchill made his vile remarks about Indians. The Bengalis starved because their grain had been sequestered as back-up supplies to feed British troops. In the end they weren’t needed, according to one video I’ve seen on YouTube. Churchill also said that the famine was their fault for having too many children.

He also supported the brief British invasion of Russia to overthrow the Communist Revolution, and the use of gas on Russian troops. Just as he also wanted to use gas to knock out, but not kill, Iraqi troops in Mesopotamia when they revolted in the 1920s against British rule.

He also said that ‘Keep Britain White’ was a good slogan for the Tories to go into the 1951 general election.

It’s clearly true that Churchill’s determined opposition to the Nazis did help lead to a free Europe and the defeat of Nazi Germany. But according to the historian of British Fascism, Martin Pugh, he did not do so out of opposition to Fascism per se. He was afraid that Nazi Germany posed a threat to British interests in the North Sea. The Conservative journo, Peter Hitchens, is very critical of Churchill and Britain’s entry into the Second World War. He rightly points out that Churchill wasn’t interested in saving the Jews, but that we went in because of the treaties we had signed with Poland and France. As for defeating Nazism, historians have for a long time credited the Soviet Red Army with breaking the back of the Wehrmacht. In one of Spike Milligan’s war memoirs, he jokes that if Churchill hadn’t sent the troops in, then the Iron Curtain would have begun around Bexhill in Kent. Churchill also went on a diplomatic visit to Mussolini’s Italy after the Duce seized power, though privately he remarked that the man was ‘a perfect swine’ after the Italian dictator declared that his Blackshirts were ‘the equivalent of your Black and Tans’. For many people, that’s an accurate comparison, given how brutal and barbaric the Black and Tans were. And as an authoritarian, Churchill also got on very well with, and liked, General Franco. George Orwell also didn’t take Churchill seriously as the defender of democracy. In the run-up to the outbreak of war, he remarked that strange things were occurring, one of which was ‘Winston Churchill running around pretending to be a democrat’.

Now I don’t share Hitchens’ view that we shouldn’t have gone into the Second World War. The Nazis were determined to exterminate not just Jews, Gypsies and the disabled, but also a large part of the Slavic peoples of eastern Europe. One Roman Catholic site I found had an article on Roman Catholic and Christian martyrs under the Nazis. This began with the Nazis’ attempts to destroy the Polish people, and particularly its intellectuals, including the Polish Roman Catholic Church. It quoted Hitler as saying that war with Poland would be a war of extermination. Hitler in his Table Talk also talks about exterminating the Czechs, saying that ‘It’s them or us.’ Churchill may have gone into the War entirely for reasons of British imperial security, but his action nevertheless saved millions of lives right across Europe. It overthrew a regime that, in Churchill’s words, threatened to send the continent back into a new Dark Age, ‘lit only by the fire of perverted science’.

Having said that, it does not mean he was not a monster in other areas. The General Strike was a terrible defeat for the British working class, but if Churchill had been involved it would almost certainly have been met with further butchery on his part. Again, according to Pugh, Churchill was all set to send the army in, saying that they were ready to do their duty if called on by the civil authority. The Tory prime minister, Stanley Baldwin, was all too aware of what would happen, and when another minister or civil servant suggested finding him a position in the Post Office or the department looking after the radio, he enthusiastically agreed, because it would keep Churchill out of trouble.

As for the Bengal famine, I think that still haunts Indian nationalists today. I was looking at the comments on Al-Jazeera’s video on YouTube about the UN finding severe poverty in Britain a few months ago. There was a comment left by someone with an Indian name, who was entirely unsympathetic and said he looked forward to our country being decimated by starvation. My guess is that this vicious racist was partly inspired in his hatred of Britain by the famine, as well as other aspects of our rule of his country.

I think McDonnell’s remarks, taken as a whole, are quite right. McDonnell credited him with his inspiring leadership during the War, but justifiably called him a villain because of the Tonypandy massacre. And eyewitnesses to the rioting said that the miners really were desperate. They were starving and in rags. And Churchill should not be above criticism and his other crimes and vile statements and attitudes disregarded in order to create a sanitized idol of Tory perfection, as Johnson and the other Tories would like.

‘I’ Newspaper on Labour’s Plans to Liberate University Regulator from Market Forces

Today’s I for Saturday, 16th February 2019 has an article by Florence Snead on page 4 reporting Labour’s plans to overhaul the universities regulator, and remove the free market ideology currently underpinning its approach to higher education in the UK. The piece, entitled ‘Universities should not be left to the mercy of market forces’, runs

Labour has unveiled how it would overhaul the higher education system as it claimed the system’s new regulator was “not fit for purpose”.

The shadow Education Secretary Angela Rayner will criticize the Office for Students – established by the Government in 2018 – in a speech today at the annual University and Colleges Union conference.

She will say the regulator represents a system “where market logic is imposed on public goods” and where “forces of competition run rampant at the expense of students, staff and communities.”

Labour said it wants the regulator to report on diversity in university staff and student bodies and to take action to make universities “genuinely representative of the communities they serve”.

Staff should also be represented on the regulator’s board to ensure their views are heard, it added.

The party said it would also ban vice chancellors sitting on their own remuneration committees.

Ms Rayner is also expected to address the issue of universities being on the brink of bankruptcy, as previously revealed by I.

“Students would be left with immense uncertainty about their futures and entire communities would lose one of their major academic, economic and social institutions.”

Universities minister Chris Skidmore responded: “Universities know they can’t trust Corbyn as his plans would crash the economy, mean less investment in our higher education, compromising its world class quality”.

Actually, if anything’s trashed our world class education system, it’s been the Thatcherite programme of privatization and free market ideology. Scientific research at UK universities has been hampered ever since Thatcher decided that university science departments should go into partnership with business. Which has meant that universities can no longer engage in blue sky research, or not so much as they could previously, and are shackled to producing products for private firms, rather than expanding the boundaries of knowledge for its own sake. Plus some of the other problems that occur when scientific discoveries become the property of private, profit driven industries.

Then there’s the whole problem of the introduction of tuition fees. This should not have been done. I was doing my Ph.D. at Bristol when Mandelson and Blair decided to do this, and its immediate result was the scaling down of certain departments and the shedding of teaching staff. Those hardest hit were the departments that required more funding because of the use of special equipment. This included my own department, Archaeology, where students necessarily go on digs, surveys and field expeditions. This meant the department had to have transport to take its staff and students to wherever they were excavating, and to provide digging equipment, although many students had their own trowels. It also needed, and trained students in the use of, specialist equipment like the geophysical magnetometers used to detect structures beneath the soil through the measurement of tiny changes in the strength of the Earth’s magnetic field. And it needed labs to clean up and analyse the finds: the type of soil in which they were found, the material out of which the finds were made, and the chemical composition of various substances, like food residue in pots, so you can tell what people were eating and drinking, as well as the forensic examination of human and animal remains.

I’ve no doubt that this situation was made worse when Cameron and Clegg decided to raise tuition fees to their present exorbitant level. Which has meant that students are now saddled with massive debt, which may make it difficult for some ever to afford to buy their own homes. Student debt was already an issue just after I left college, when the Tories decided to end student grants. After the introduction of tuition fees it has become an even more critical issue.

Then there’s the whole issue of proper pay and conditions for university lecturers. This is nowhere near as high as it should be. A friend of mine in the ’90s was one of the Student Union officers at our old college/uni. He told me one day just what some of the highly skilled and educated lecturers were earning. And it was low. Many of them were on part-time work, and I think the pay for some of them was at average wage level or below. And that was then. I’ve no idea what it’s like now. I’ve come across reports of a similar crisis at American universities and colleges, where the pay for the managers has skyrocketed while that of teaching staff has fallen catastrophically. And this is all part of the general pattern throughout industry as a whole, where senior management has enjoyed massively bloated pay rises and bonuses, while staff have been laid off and forced on to short term or zero hours contracts and low pay.

All this has been done in the name of ‘market forces’ and the logic of privatization.

I am not remotely surprised that British higher education is in crisis, and that an increasing number of colleges and universities are facing bankruptcy. This was always on the cards, especially as the population surge that inspired many colleges and polytechnics to seek university status, in the belief that there would be enough student numbers to support them, is now over. Market logic would now dictate that, as the universities are failing, they should be allowed to collapse. Which would deprive students and their communities of their services.

The structure of British higher education needs to be reformed. The entire Thatcherite ethic of privatization, free markets, and tuition fees needs to be scrapped. Like everything else Thatcher and her ideological children ever created, it is a bloated, expensive and exploitative failure. My only criticism about Corbyn’s and Rayner’s plans for the unis isn’t that they’re too radical, but that they’re too timid.