Today I want to write about the way in which statistical mechanical explanations fit into more general accounts of explanation. In particular, I’m going to make some bold (and fairly weakly substantiated) claims about how statistical mechanical explanations fit with the, nowadays relatively unpopular, unificationist view of explanation.
We can think of there being two major families of approaches to explanation. The first I’m going to call dependence-based explanation. The idea here is that there is some underlying dependence structure, and to explain an event is to show what it depends upon. Causal explanation, where to explain something is to give information about its causal history, is an example of this type of explanation. The other family is unificationist explanation. On this approach, to explain something is to show how it fits into the general patterns of the world.
Importantly, on the dependence approach, when we explain an event we appeal to facts that are prior to the event. This doesn’t mean the facts are temporally prior to the event (though they probably are) but that they are prior in terms of dependence. So, for example, on a causal approach to explanation where causation is a metaphysically robust relation in the world, the facts which explain an event are metaphysically prior to the event. On the unificationist view the facts that explain do not have to be prior in this way.
I want to suggest that we should take distinctively statistical mechanical explanations to be instances of unificationist explanation, or at least that they have an important unificationist component.
Consider the generic claim: ice cubes melt in warm water. This is the type of claim statistical mechanics is designed to explain. Why is it that, looking forwards through time, we see ice cubes melting in warm water, but not spontaneously forming in warm water? Let’s assume we know the fundamental (and let’s assume deterministic) physical laws. And, let’s imagine, we know the truth of the Past Hypothesis (PH), the claim that the universe started in a very low-entropy state. The central consideration is that the fundamental laws and the Past Hypothesis seem to be too sparse a base from which to explain the melting of ice cubes. We know that there are many initial conditions consistent with the Past Hypothesis where the fundamental laws would lead to ice cubes not melting in warm water. Without a reason to ignore these conditions we cannot explain why ice cubes melt.
One reason for ignoring these conditions is to note that there are very few of them in comparison to the conditions that lead to ice melting. However, this claim makes assumptions about the relevant measure; on some measures the set of conditions that lead to ice cubes growing in warm water is bigger than the set of conditions that lead to the ice cube melting. So again, the laws and the PH are not enough, we need some added material that allows us to privilege a measure (or at least to rule out the misbehaving measures).
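The point about measure-dependence can be made concrete with a toy calculation. This is not real physics: the ten "initial conditions" and both weightings below are invented purely to show that which of two sets counts as "bigger" depends on the measure chosen.

```python
# Toy illustration: the same two sets of "initial conditions" can
# compare differently in size depending on the measure used.

# Hypothetical state space: conditions 0-9. Suppose conditions 0-8
# lead to the ice cube melting, and condition 9 leads to it growing.
melting = set(range(9))
growing = {9}

def measure(weights, conditions):
    """Total weight a measure assigns to a set of conditions."""
    return sum(weights[c] for c in conditions)

# A uniform measure: every condition weighted equally.
uniform = {c: 0.1 for c in range(10)}

# A "misbehaving" measure that piles almost all weight on condition 9.
skewed = {c: 0.01 for c in range(9)}
skewed[9] = 0.91

# Under the uniform measure the melting set dominates...
print(measure(uniform, melting) > measure(uniform, growing))  # True
# ...but under the skewed measure the growing set dominates.
print(measure(skewed, melting) > measure(skewed, growing))    # False
```

So "most initial conditions lead to melting" is only true relative to a privileged measure, which is exactly why the laws and the PH alone do not settle the matter.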
Where does this material come from? Here are three natural options: (1) It comes from the facts about the actual initial conditions. (2) It comes from added ontology, for example, just directly privileging a certain measure. (3) It comes from facts about the (non-initial) events of the actual world, e.g. facts about the times it takes for certain ice cubes to melt. Let’s take these in turn.
(1) The first option is to add to the fundamental laws and the PH the facts about the precise initial condition of the universe. That (given our assumption that the laws are deterministic) is clearly enough to entail that actual world ice cubes melt in warm water. The problem is that to do this is effectively to stop explaining in a distinctively statistical mechanical way. We would just be explaining a fact using the fundamental laws and the initial conditions. There is no statistical aspect here.
(2) The second option is to add ontology. For example, to directly add to your ontology a privileged measure. This would allow us to give a statistical explanation of ice melting. The problem is that adding extra ontology in this way seems unattractive and ad hoc. Perhaps arguments can be given that this addition is not so ad hoc, but prima facie it would be better if we have a different option.
(3) The last option is to add facts about the non-initial events of the actual world. We can interpret versions of the best system account of laws as doing this: we use facts about the occurrent facts of the world to privilege a measure (the measure is part of the best way of systemising those facts). Also, Michael Strevens’ version of the typicality account can be interpreted in this way. He rejects the idea that we need to add a measure over initial conditions to the laws but does add facts about frequencies of conditions of various subsystems, for example, the frequency of actual ice cubes that are in a certain microstate. Such accounts encourage us to explain phenomena like ice cubes melting in terms of the laws and the patterns of the non-initial facts of the world. Particular facts are being explained (partially) in terms of more general facts about patterns (and it is not the case that these more general facts are prior to the ones being explained). For example, when we explain the melting of the ice cube by citing its high probability according to the measure privileged by the best system, we are showing how the melting fits into the general patterns we see in the world, since if the melting did not fit into such a pattern it would not have a high probability. The probability could even be thought of as a measure of how well a certain event fits into the patterns of the world.
If we take this last option it seems like, notwithstanding the unpopularity of the unificationist account, there is an important unificationist element in statistical mechanical explanation. And plausibly this last option is the best one.
Another great article in Aeon magazine this week is about why no one should believe in miracles, by Lawrence Shapiro. Shapiro makes a tasty stock of Hume’s argument against miracles, adds a dash of Bayesian epistemology, and rounds things off with a nice discussion of the base-rate fallacy; it is surely worth a read. But after reading it, I wondered why we don’t use this much simpler argument against supernatural intervention:
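The Bayesian, base-rate point can be illustrated with a toy calculation. The numbers below are invented for illustration and are not taken from Shapiro: the idea is just that when the prior probability of a miracle is tiny, even highly reliable testimony leaves the posterior tiny.

```python
# Toy Bayesian calculation illustrating the base-rate point
# (all numbers are hypothetical, chosen for illustration only).

def posterior(prior, p_report_if_true, p_report_if_false):
    """P(miracle | testimony) via Bayes' theorem."""
    numerator = p_report_if_true * prior
    return numerator / (numerator + p_report_if_false * (1 - prior))

# A 99%-reliable witness reporting an event with a one-in-a-billion prior:
p = posterior(prior=1e-9, p_report_if_true=0.99, p_report_if_false=0.01)
print(f"{p:.2e}")  # on the order of 1e-7: the base rate still dominates
```

The testimony raises the probability by about two orders of magnitude, but the result remains minuscule, which is the base-rate fallacy in action: ignoring the prior makes the witness's reliability look decisive when it isn't.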
THE A PRIORI ARGUMENT:
- Miracles violate the laws of nature.
- The laws of nature are exceptionless—that is, they are (expressed by) true universal generalizations.
- Conclusion: There are no miracles.
The argument is valid, and both of its premises have a claim not merely to truth, but to conceptual truth. The first premise is a characterization of what makes God’s miraculous action supernatural: miracles contravene or override the natural laws which govern the world. The second premise is guaranteed by most views about the laws of nature, but anyway here’s a quick argument for it: the laws of nature are nomically necessary, and necessity implies truth. So the laws are true. Unless something has gone wrong, we don’t merely have inductive reasons to doubt that miracles have happened (as Hume and Shapiro claim) but an a priori reason: the very idea is conceptually incoherent. But of course this argument is too quick: though we may have good reason to doubt that miracles have happened, that reason is not conceptual incoherence. What went wrong?
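The validity claim can be checked mechanically. Here is a sketch in Lean, where `Event`, `Miracle`, and `ViolatesLaws` are hypothetical predicates introduced only to formalize the two premises; the conclusion follows by a one-line proof.

```lean
-- Hypothetical types and predicates, introduced for illustration.
variable (Event : Type) (Miracle ViolatesLaws : Event → Prop)

-- Premise 1: miracles violate the laws of nature.
-- Premise 2: the laws are exceptionless, so nothing violates them.
-- Conclusion: there are no miracles.
theorem no_miracles
    (p1 : ∀ e, Miracle e → ViolatesLaws e)
    (p2 : ∀ e, ¬ ViolatesLaws e) :
    ∀ e, ¬ Miracle e :=
  fun e hm => p2 e (p1 e hm)
```

So if the argument fails, it must be because one of the premises, despite appearances, is not a conceptual truth.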
The Cosmology Group’s summer school at the University of California, Santa Cruz opened yesterday with two talks by David Albert on the foundations of thermodynamics and statistical mechanics. The talks are available on YouTube under the Cosmology Group’s channel, Phil Cosmogroup. You can also find them on the summer school’s webpage. We’ll keep posting regularly as the school progresses, so keep an eye on our page, and if you have any questions about the talks, post them in the comments here!
Dr. Albert’s discussion focusses on the temporal asymmetries of thermodynamics; his first lecture lays the groundwork for understanding the second law of thermodynamics, outlining three formulations of it and arguing that they are deeply connected to apparently unrelated arrows of time. His second lecture works through Boltzmann’s arguments motivating the second law from statistical mechanics, and introduces the reversibility objections to these arguments.
For a complete schedule of the talks at the UCSC summer institute, look here. We’ll be posting links there as the talks go up on YouTube.