Statistical Mechanics and Unificationist Explanation
Today I want to write about how statistical mechanical explanations fit into more general accounts of explanation. In particular, I’m going to make some bold (and fairly weakly substantiated) claims about how statistical mechanical explanations fit with the nowadays relatively unpopular unificationist view of explanation.
We can think of there being a couple of major families of approaches to explanation. The first I’m going to call dependence-based explanation. The idea here is that there is some underlying dependence structure, and to explain an event is to show what it depends upon. Causal explanation, where to explain something is to give information about its causal history, is an example of this type of explanation. The other family is unificationist explanation. On this approach, to explain something is to show how it fits into the general patterns of the world.
Importantly, on the dependence approach, when we explain an event we appeal to facts that are prior to the event. This doesn’t mean the facts are temporally prior to the event (though they probably are) but they are prior in terms of dependence. So, for example, on a causal approach to explanation where causation is a metaphysically robust relation in the world, the facts which explain an event are metaphysically prior to the event. On the unificationist view the facts that explain do not have to be prior in this way.
I want to suggest that we should take distinctively statistical mechanical explanations to be instances of unificationist explanation, or at least that they have an important unificationist component.
Consider the generic claim: ice cubes melt in warm water. This is the type of claim statistical mechanics is designed to explain. Why is it that, looking forwards through time, we see ice cubes melting in warm water, but not spontaneously forming in warm water? Let’s assume we know the fundamental physical laws (and that they are deterministic). And let’s imagine we know the truth of the Past Hypothesis, the claim that the universe started in a very low-entropy state. The central consideration is that the fundamental laws and the Past Hypothesis seem to be too sparse a base from which to explain the melting of ice cubes. We know that there are many initial conditions consistent with the Past Hypothesis where the fundamental laws would lead to ice cubes not melting in warm water. Without a reason to ignore these conditions we cannot explain why ice cubes melt.
One reason for ignoring these conditions is to note that they form a very small set in comparison to the conditions that lead to ice melting. However, this claim presupposes a choice of measure; on some measures the set of conditions that lead to ice cubes growing in warm water is bigger than the set of conditions that lead to the ice cube melting. So again, the laws and the PH are not enough: we need some added material that allows us to privilege a measure (or at least to rule out the misbehaving measures).
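To make the measure-dependence vivid, here is a toy numerical sketch. Everything in it is an illustrative invention (the interval of "initial conditions", the "melting" set, and both densities are made up): the point is just that the very same set of conditions can get measure close to 1 under one probability measure and a small measure under a rival one, so "most initial conditions lead to melting" is not measure-neutral.

```python
import math

# Toy model: "initial conditions" are points x in [0, 1].
# Stipulate (purely for illustration) that conditions with x < 0.999
# lead to the ice cube melting, and the rest lead to it growing.
def melts(x):
    return x < 0.999

# Discretise [0, 1] at midpoints for simple numerical integration.
N = 100_000
dx = 1.0 / N
xs = [(i + 0.5) * dx for i in range(N)]

def uniform_density(x):
    # The "natural"-looking Lebesgue-style measure.
    return 1.0

def skewed_density(x):
    # A rival measure concentrated near x = 1, i.e. on the
    # anti-thermodynamic ("ice grows") conditions.
    return 2000.0 * math.exp(-2000.0 * (1.0 - x))

def measure_of_melting_set(density):
    # Normalise the density to a probability measure, then
    # measure the set of melting-inducing conditions.
    total = sum(density(x) for x in xs) * dx
    melting = sum(density(x) for x in xs if melts(x)) * dx
    return melting / total

m_uniform = measure_of_melting_set(uniform_density)
m_skewed = measure_of_melting_set(skewed_density)
print(f"melting set, uniform measure: {m_uniform:.3f}")  # close to 0.999
print(f"melting set, skewed measure:  {m_skewed:.3f}")   # well below 0.5
```

Under the uniform measure the melting set dominates overwhelmingly; under the skewed measure the growing set dominates instead. Nothing in the laws themselves (here, the stipulated dynamics encoded by `melts`) settles which measure is the right one to use.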
Where does this material come from? Here are three natural options: (1) It comes from the facts about the actual initial conditions. (2) It comes from added ontology, for example, just directly privileging a certain measure. (3) It comes from facts about the (non-initial) events of the actual world, e.g. facts about the times it takes for certain ice cubes to melt. Let’s take these in turn.
(1) The first option is to add to the fundamental laws and the PH the facts about the precise initial condition of the universe. That (given our assumption that the laws are deterministic) is clearly enough to entail that actual-world ice cubes melt in warm water. The problem is that to do this is effectively to stop explaining in a distinctively statistical mechanical way. We would just be explaining a fact using the fundamental laws and the initial conditions. There is no statistical aspect here.
(2) The second option is to add ontology. For example, to directly add to your ontology a privileged measure. This would allow us to give a statistical explanation of ice melting. The problem is that adding extra ontology in this way seems unattractive and ad hoc. Perhaps arguments can be given that this addition is not so ad hoc, but prima facie it would be better if we had a different option.
(3) The last option is to add facts about the non-initial events of the actual world. We can interpret versions of the best system account of laws as doing this: we use the occurrent facts of the world to privilege a measure (the measure is part of the best way of systemising those facts). Also, Michael Strevens’ version of the typicality account can be interpreted in this way. He rejects the idea that we need to add a measure over initial conditions to the laws but does add facts about frequencies of conditions of various subsystems, for example, the frequency of actual ice cubes that are in a certain microstate. Such accounts encourage us to explain phenomena like ice cubes melting in terms of the laws and the patterns of the non-initial facts of the world. Particular facts are being explained (partially) in terms of more general facts about patterns (and it is not the case that these more general facts are prior to the ones being explained). For example, when we explain the melting of the ice cube by citing its high probability according to the measure privileged by the best system, we are showing how the melting fits into the general patterns we see in the world, since if the melting did not fit into such a pattern it would not have a high probability. The probability could even be thought of as a measure of how well a certain event fits into the patterns of the world.
If we take this last option then it seems like, notwithstanding the unpopularity of the unificationist account, there is an important unificationist element in statistical mechanical explanation. And plausibly this last option is the best one.