
The Gravitational Field as Cause

April 3, 2014

Channeling, to some degree, Bertrand Russell (1912−13), Jonathan Schaffer (2008) insisted that there is no room for causation in proper scientific practice: science requires only natural laws and unfolding history (one physical event after another). He remarked:

…causation disappears from sophisticated physics. What one finds instead are differential equations (mathematical formulae expressing laws of temporal evolution). These equations make no mention of causation. Of course, scientists may continue to speak in causal terms when popularizing their results, but the results themselves—the serious business of science—are generated independently.[1]

Considerations such as those in the quoted pericope above quite naturally breed an argument for causal reductionism, the view that obtaining causal relations are not part of the world’s fundamental structure, and that as a consequence causal facts reduce to (are nothing above and beyond) a species of non-causal facts. For Schaffer would add that if sound scientific practice can proceed without causation, making use instead solely of natural nomicity and history, then causal reductionism is true. Thus, causal reductionism is true.

I find Schaffer’s justification for the claim that praiseworthy scientific practice does without causation to be problematic. I would argue that with respect to extremely empirically successful theories arrived at on the basis of sound scientific practice, the notion of causation shows up primitively and indispensably in the respective interpretations of the underlying formalism of those theories. In the present post, I have space only to explore one such scientific theory, viz., the general theory of relativity (GTR).

Read more…

Where Are We in the Multiverse?

March 17, 2014

There are two avenues from modern physics to the belief that the universe we see around us is not all there is, but is instead one of infinitely many like it. The first is inflationary cosmology; the second is quantum mechanics.  Though very different, these two multiverse models share two features: first, they both posit objective physical probabilities that tell us how likely we are to be in some portion of the multiverse rather than telling us how likely the multiverse is to be some way or another; and second, they both have a problem with prediction and confirmation.  I’ll discuss the relationship between self-locating probability and confirmation in these theories.

Read more…

Causation of Everything

January 28, 2014

We’ve had causation come up a few times on the blog before (particularly in Mike’s discussion of miracles). For this post, I want to raise some questions about what to say when causation gets really big—when we start talking about the state of everything at one moment in time causing the state of everything at the next—and whether such talk is sensible. Such questions are particularly relevant in the context of cosmology.

Usually when we make causal claims or explanations, we’re talking about local causation: a particular event (or event-type, or state of affairs) that occurs in a relatively small finite region of space-time causing an event in another. We might speak of the high-pitched tone causing the glass to break, or CO2 emissions causing increased polar ice melting. While we might have some difficulty in identifying very diverse and diffuse causes and effects, we presume that the causes and effects are still local.

But what about system-wide causal claims? Assuming that a set of billiard balls, for example, constitutes an effectively closed system, could we claim that the entire configuration of billiard balls at one time causes the entire configuration at the next moment in time? Or could we claim that the state of the entire universe at one time causes the state of the universe at a later time? (Note that I’ll be leaving ‘a moment in time’ as a loose and vague notion—whatever account of causation we use will have to be compatible with general relativity, but I won’t go into how that might be done here.)

In ‘On the Notion of Cause’ (1912−13), Russell famously argued that we should jettison the notion of causation altogether. His main concern was that, given the global laws we have in fundamental physics, nothing less than the entire state of the system at a given time would be enough (given the laws) to necessitate any event at the next. So insofar as we think causation requires nomic necessitation, we would need to consider the entire state of the system as a cause. And he took this to be a reductio of the position: if causes were also supposed to be general types of events, of the kind science could investigate, there could be no cause-based science. Here’s Russell:

In order to be sure of the expected effect, we must know that there is nothing in the environment to interfere with it. But this means that the supposed cause is not, by itself, adequate to insure the effect. And as soon as we include the environment, the probability of repetition is diminished, until at last, when the whole environment is included, the probability of repetition becomes almost nil. (Russell 1912−13, pp. 7−8)

Either the causes would be so extensive and detailed as to be unique, and not the subject of scientific investigation, or they would not necessitate their effects. Science has to study repeatable events, and system-wide states would never be repeatable in the way required.

So, in Russell’s work we actually have an argument in favour of system-wide causation: it allows us to take the causal relation to be necessitation by fundamental laws. But we also have an argument against system-wide causation: it isn’t about the kind of repeatable events that science is concerned with. The argument in favour of system-wide causation seems clear enough, but what are we to make of the argument against? It seems that cosmology is precisely a field that is interested in non-repeatable events. Perhaps cosmology does not describe events at a sufficiently fine-grained level to explain how many actual events are nomically necessitated, but what about large-scale phenomena? Surely cosmology aims to account for those?

However, some later developments that followed Russell’s work didn’t advocate for system-wide causation, but dropped the requirement that causes had to nomically necessitate their effects. Instead, we were to explain what was going on in causation using counterfactuals and notions of intervention. Claiming a causes b is roughly to claim that had we intervened on a in a suitably surgical way, this would have been a way of influencing b. The interventions themselves were characterised as causal processes: processes that disrupt some of the causal chains already present in the system, while leaving others intact, and so allow us to test what causal chains there are. The main expositors of this approach are Woodward (2003) and Pearl (2000). These interventionist approaches don’t attempt to reduce causation to something else, but instead offer an elucidation of various causal notions in terms of other ones.
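
To make the idea of a surgical intervention concrete, here is a minimal sketch in Python (my own toy illustration, not Pearl’s or Woodward’s machinery; every name in it is hypothetical): a three-variable model a → b → c in which setting b by fiat severs the mechanism running from a into b while leaving the mechanism from b to c intact.

    import random

    def sample(do_b=None):
        """One draw from a toy structural model a -> b -> c.
        Passing do_b surgically sets b, cutting b's incoming arrow."""
        a = random.gauss(0.0, 1.0)                          # exogenous cause
        b = 2.0 * a + random.gauss(0.0, 0.1) if do_b is None else do_b
        c = 3.0 * b + random.gauss(0.0, 0.1)                # depends only on b
        return a, b, c

    # Observationally a and b covary; under do(b = 1.0) that dependence is
    # destroyed, but the b -> c chain survives: the mean of c sits near 3.0.
    intervened = [sample(do_b=1.0) for _ in range(10000)]
    print(sum(c for _, _, c in intervened) / len(intervened))  # ~ 3.0

The point of the sketch is just the one above: an intervention is itself a causal process that disrupts some chains and spares others, and that is what lets us test what causal chains there are.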

But under the interventionist approach, it’s hard to see how we can talk about system-wide causation. Interventions were envisaged as processes originating from outside the system under study. How is this approach to work when there is nothing outside the system? Here is Pearl on the issue:

…scientists rarely consider the entirety of the universe as an object of investigation. In most cases the scientist carves a piece from the universe and proclaims that piece in – namely, the focus of investigation. The rest of the universe is then considered out or background and is summarized by what we call boundary conditions. This choice of ins and outs creates asymmetry in the way we look at things, and it is this asymmetry that permits us to talk about “outside intervention” and hence about causality and cause-effect directionality. (Pearl 2000, p. 350).

But

If you wish to include the entire universe in the model, causality disappears because interventions disappear – the manipulator and the manipulated lose their distinction. (Pearl 2000, pp. 349−50)

We might claim this as a feature of the interventionist approach: it makes clear how the causal structure of the world is tied to the particular limited perspective we take on it as experimenters or interveners, dividing the world up, and is not something that can be understood apart from these divisions. Are we then content to cease dealing with causal notions in sciences like cosmology?

There are a few remaining options on the table that I’ll note briefly. One is to start with the interventionist framework, but then extend our causal notions so that they can be system-wide. We might, for example, attempt to reduce the interventionist counterfactuals to law-based ones that can then be applied to whole systems—I take Albert (2000) and Loewer (2007) to follow this route. Or we might keep causation as a primitive relation that holds between local events, and build up system-wide causes out of those. Another quite different option is to go pluralist about the notion of causation: perhaps we were wrong to think that a single notion applied to all contexts. We should keep nomic necessitation as what counts for cosmology, and intervention as what counts for other contexts.

Whatever option we take here, the case of cosmology seems a useful testing ground for accounts of causation and their commitments regarding global causes.

References:

Albert, David Z. 2000. Time and Chance. Cambridge, Mass.: Harvard University Press.

Loewer, Barry. 2007. Counterfactuals and the Second Law. In Causation, Physics, and the Constitution of Reality, ed. Huw Price and Richard Corry, 293−326. Oxford: Oxford University Press.

Pearl, Judea. 2000. Causality. New York: Cambridge University Press.

Russell, Bertrand. 1912−13. On the Notion of Cause. Proceedings of the Aristotelian Society, New Series 13: 1−26.

Woodward, James. 2003. Making Things Happen: A Theory of Causal Explanation. Oxford: Oxford University Press.

Quantum Fluctuations as Seeds of Large Scale Structure

December 16, 2013
By Ward Struyve, Rutgers.

On very large scales (over hundreds of megaparsecs) the universe appears to be homogeneous. This fact formed one of the original motivations for inflation theory. According to inflation theory, the very early universe went through a phase of accelerated expansion that stretched a tiny portion of space to the size of our entire observable universe, smoothing initial inhomogeneities by stretching them over unobservable distances. On smaller scales, the universe is far from homogeneous; one can identify all kinds of structures, such as stars, galaxies, clusters of galaxies, etc. According to inflation theory, these structures originate from quantum vacuum fluctuations. The usual story is that during the inflationary period these quantum fluctuations grew to become classical inhomogeneities of the matter density. These inhomogeneities then gave rise to structures through gravitational clumping. The primordial quantum fluctuations are also the source of the temperature fluctuations of the microwave background radiation.

This explanation of the origin of structures is regarded as part of the success story of inflation theory. One aspect of the explanation is, however, problematic: namely, how the quantum fluctuations became classical. The quantum fluctuations are described by a state (a wave function) which is homogeneous, and the Schrödinger dynamics does not spoil this homogeneity. So how can we end up with classical fluctuations which are no longer homogeneous? According to standard quantum theory this could only happen through wave function collapse. Such a collapse is supposed to happen upon measurement. But the notion of measurement is rather vague, and therefore it is ambiguous when exactly collapses happen. This is the notorious measurement problem. The problem is especially severe in the present context: in the early universe there are no measurement devices or observers which could cause a collapse. Moreover, structures such as measurement devices or observers (which are obviously inhomogeneous) are themselves supposed to be generated from the quantum fluctuations. In order to deal with the measurement problem and with the quantum-to-classical transition, we need to consider an alternative to standard quantum theory that is free of this problem. A number of such alternatives exist: Bohmian mechanics, many worlds, and collapse theories.
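
The worry can be put in one line of quantum mechanics (a standard textbook sketch in my notation, not the post’s): if the Hamiltonian commutes with spatial translations and the initial state is translation invariant, unitary evolution alone can never break that invariance.

    % With translation operator T_a satisfying [H, T_a] = 0, and a
    % homogeneous initial state T_a |psi_0> = |psi_0>, the evolved state obeys
    \[
      \hat{T}_a \, e^{-i\hat{H}t/\hbar} \lvert \psi_0 \rangle
      = e^{-i\hat{H}t/\hbar} \, \hat{T}_a \lvert \psi_0 \rangle
      = e^{-i\hat{H}t/\hbar} \lvert \psi_0 \rangle ,
    \]
    % so the state stays homogeneous for all t; something beyond the
    % Schrodinger dynamics must enter to produce the inhomogeneities.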
Read more…

Statistical Mechanics and Unificationist Explanation

November 30, 2013

Today I want to write about the way in which statistical mechanical explanations fit into more general accounts of explanation. In particular, I’m going to make some bold (and fairly weakly substantiated) claims about how statistical mechanical explanations fit with the, nowadays relatively unpopular, unificationist view of explanation.

We can think of there being a couple of major families of approaches to explanation. The first I’m going to call dependence-based explanation. The idea here is that there is some underlying dependence structure, and to explain an event is to show what it depends upon. Causal explanation, where to explain something is to give information about its causal history, is an example of this type of explanation. The other family is unificationist explanation. On this approach, to explain something is to show how it fits into the general patterns of the world.

Read more…

Bohmian Mechanics: FAQ

November 29, 2013

Wondering how Bohmian mechanics handles the two-slit experiment, how the Bohmians understand the uncertainty principle, or what Bohmians do to finesse no-hidden-variables theorems? This video series can answer all your questions about the oldest heterodox interpretation of the quantum world. Thanks to Shelly Goldstein for the pointer.

Could Miracles Happen?

November 14, 2013

Another great article in Aeon magazine this week is about why no one should believe in miracles, by Lawrence Shapiro. Shapiro starts from a tasty stock of Hume’s argument against miracles, adds a dash of Bayesian epistemology, and rounds things off with a nice discussion of the base-rate fallacy—surely worth a read. But after reading it, I wondered why we don’t use this much simpler argument against supernatural intervention:

THE A PRIORI ARGUMENT:

  1. Miracles violate the laws of nature.
  2. The laws of nature are exceptionless—that is, they are (expressed by) true universal generalizations.
  3. Conclusion: There are no miracles.

The argument is valid, and both of its premises have a claim not merely to truth, but to conceptual truth. The first premise is a characterization of what makes God’s miraculous action supernatural: miracles contravene or override the natural laws which govern the world. The second premise is guaranteed by most views about the laws of nature, but anyway here’s a quick argument for it: the laws of nature are nomically necessary, and necessity implies truth. So the laws are true. Unless something has gone wrong, we don’t merely have inductive reasons to doubt that miracles have happened (as Hume and Shapiro claim) but an a priori reason: the very idea is conceptually incoherent. But of course this argument is too quick: though we may have good reason to doubt that miracles have happened, that reason is not conceptual incoherence. What went wrong?
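
The validity of the argument, at least, is easy to check mechanically; here is a minimal sketch in Lean (the predicate names are my hypothetical labels, not anything from the post):

    -- Hypothetical predicates over events; the two premises as hypotheses.
    variable {Event : Type} (Miracle ViolatesLaw : Event → Prop)

    -- P1: miracles violate the laws; P2: the laws are exceptionless.
    theorem no_miracles
        (p1 : ∀ e, Miracle e → ViolatesLaw e)
        (p2 : ∀ e, ¬ ViolatesLaw e) :
        ∀ e, ¬ Miracle e :=
      fun e hm => p2 e (p1 e hm)

So if something has gone wrong, it must be with one of the premises, which is exactly the question above.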

Read more…
