
Does the Past Hypothesis Need an Explanation?

June 2, 2012

There is a very nice interview with Craig Callender about the philosophy of physics and metaphysics at 3:AM Magazine.

An excerpt from the interview is below. Craig brings up the question of whether the low-entropy initial condition of the universe, the "past hypothesis" (PH), requires explanation, in the sense that a fundamental theory lacking an explanation of the PH would be deeply incomplete. He is skeptical that the PH "cries out" for explanation any more than any apparently fundamental law does, and suggests that it may itself be a fundamental law. I am with Craig, for two reasons. First, the claim (made, for example, by Penrose) that the low-entropy condition is very unlikely and so demands an explanation already assumes that it is not a law. Second, on David Lewis's best-system (BS) account of laws, properly understood, it is plausible that the PH is lawful: that it is a component of the ideal best theory (the true theory that best balances simplicity, informativeness, etc.). I would be interested in what others think about this or the other issues that come up in the interview. Let's see where the conversation goes.

“Philosophers are raised reading Socrates, who tried to be a gadfly to conventional wisdom. Here I’m just trying to be a similar kind of pest to a prevailing opinion in physics. Scores and scores of our best physicists say that one of the great unresolved problems of physics is that it doesn’t explain the initial low entropy state of the universe. I want to express some healthy skepticism about this claim.

At least two thoughts motivate this skepticism. First, suppose we judge the constraint on initial conditions to be lawlike. (I think that there are some powerful arguments for this.) Then all the universes that don’t begin in a low entropy state are, strictly speaking, unphysical and have zero probability. The initial state is then hardly monstrously unlikely (hence demanding explanation), but rather has probability one!

Second, won’t the problem just creep up on you again? The new theory that explains the initial state of the universe will have unexplained explainers in it too. Every theory does. Then presumably one of the great unsolved problems of the new physics will be to explain those unexplained explainers. How deep in unobserved physics do we go before we say enough?”

9 Comments
  1. June 2, 2012 5:09 pm

    It seems like we have already been over this ground before in these discussions. At best, it could be argued that focusing on a supposed low-entropy state of the early universe is a misplaced emphasis. The early universe, with matter and energy in a random equilibrium state, is in a condition of maximum entropy for its relevant conditions of spatial confinement. Only if one assumes that the universe subsequently expands, but the contained matter and energy somehow fail to keep up (in a way that would not be consistent with general relativity), does the entropy of the universe become relatively low, in the sense that it occupies only a fraction of the newly available phase space. See the very clear discussion of all this in Paul Davies' chapter in the collection "From Complexity to Life" (Oxford, 2003).

    A simple analogy is a gas at equilibrium in a closed container with a valve that is suddenly opened, connecting the container to a second, evacuated container. Before the valve is opened, the gas has maximum entropy, but when the valve is opened, the entropy of the gas is suddenly relatively low, and it will subsequently increase to a new maximum as the gas reaches equilibrium in its newly available phase space.
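
    For concreteness, here is the standard ideal-gas arithmetic behind the valve analogy (a textbook result, not anything specific to Davies's chapter): in a free expansion into vacuum the gas does no work and exchanges no heat, so its temperature is unchanged, and for n moles going from volume V_1 to V_2,

    \[ \Delta S = nR \ln\frac{V_2}{V_1}, \]

    which for a doubling of volume gives \Delta S = nR \ln 2 > 0. The microstate of the gas does not change at the instant the valve opens; what changes is the phase space available to it, and that is exactly the sense in which the gas is suddenly at relatively low entropy.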

    This is all a distraction. One should not be asking about physical laws that enforce a supposed initial low-entropy universe. It is the whole combination of initial spatial confinement, subsequent expansion, gravitational clumping, heavy-element nucleosynthesis in stars, the development of complexity on planets, etc., that makes our universe what it is. In ignoring the fine-tuning aspects of all this, and in denying the possibility of multiple universes and "anthropic" selection, we are forcing a conclusion that all of this is the result of laws, when clearly at least some of it might be better classified as the result of initial conditions and chaotic dynamics, not just laws. The concepts of multiple universes that are currently widely discussed, whether of the branching-quantum-state variety or the massive-vacuum-fluctuation sort, may not be attractive, but they do not exhaust the available alternatives. In dismissing these avenues of inquiry, Craig Callender has probably isolated himself from the ultimate answers we seek.

    • June 6, 2012 12:43 pm

      Part of this seems to be just a terminological issue and part requires a careful discussion of fundamental concepts in statistical mechanics/thermodynamics. The part that is terminological is this: the standard Big Bang model ascribes to the early universe a state that has many special features, which include features of uniformity, vanishing Weyl curvature, spatio-temporal expansion, etc. So the main question is whether these features require, or are capable of, themselves being explained, and if so how? The merely terminological dispute is whether it is accurate to characterize this macrostate as “low entropy”. Even if one can argue that this is not the right characterization, the exact same problem remains.

      As to the terminological question itself, I don't find the Davies analogy convincing. An equilibrium state is macroscopically stationary, as is the state of the gas "at equilibrium" before the valve is opened. The opening of the valve is a change in the gross constraints on the system, which allows it to change its state. That change is clearly one in which the entropy of the system increases: it is a paradigm example of an irreversible process. So in the analogy, the entropy of the system does go up, hence the initial state is at low entropy relative to the later state. In the case of the universe as a whole, though, the idea of a "gross constraint" on the system that can be changed, like opening the valve, makes no sense: nothing constrains the system from the outside, since the universe contains all there is. Furthermore, the early macrostate of the universe is not at all stationary, so I can see no justification for calling it "equilibrium". And the expansion of the universe is also a paradigm irreversible process: pace Thomas Gold, if the universe were to recontract (thinking of the volume of the universe as the "gross constraint"), it would not return to the Big Bang state. That's what Penrose's discussion of Weyl curvature is all about. So the whole history of the universe is one of increasing entropy, in an obvious sense, which means that the initial condition is low entropy, in an obvious sense. There are, of course, hard conceptual questions about how to even define entropy for non-equilibrium systems, and even harder questions about the idea of the entropy of the whole universe, where the state/gross-constraint distinction makes no sense. But I see no harm in characterizing the Big Bang state as low entropy.
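
      For readers keeping score, the entropy concept at issue in this exchange is the Boltzmann entropy (a standard definition, not anything special to this discussion): for a microstate lying in macrostate M, with \Gamma_M the phase-space volume of M,

      \[ S_B(M) = k_B \ln \Gamma_M, \]

      so "low entropy" means that the macrostate occupies a tiny fraction of the available phase space, and the ratio of volumes \Gamma_{M'}/\Gamma_M = e^{(S_B(M') - S_B(M))/k_B} is what underwrites talk of later macrostates being overwhelmingly more "likely".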

      • June 6, 2012 6:15 pm

        I'm not sure that we really disagree here. Yes, we assume that the entropy of the universe is non-decreasing, as stated in the second law. And, in the early universe, the necessarily adiabatic expansion from its initial spatial confinement ensures that the entropy increases, even though at any instant the universe remains in a state of maximum disorder, which would constitute equilibrium if conditions were static. The point of my comment was that the initial spatial confinement (and the other features that you list) of the early universe is only a part of what led to our very special current universe, with its obvious examples of highly complex organization, including life. It is not clear to me why the initial low entropy is more of a mystery than the many other examples of fine tuning that have made these results possible. (One can view the noted properties of the early universe as an example of fine tuning of the initial conditions.) The features of our universe could all be the results of very special laws, as Callender appears to suggest, or could just be the various properties of the particular one, among an arbitrarily large number of universes, that we find ourselves in. I suggest that the second of these possibilities is the more likely of the two approaches to lead to an eventual science-based explanation.

        • June 8, 2012 9:23 pm

          Rather than a terminological issue, the discussion revolves around the fundamental concepts of statistical mechanics/thermodynamics used to explain the state of the early universe, and around dynamical outcomes that seem unlikely to have occurred. If the physical universe we inhabit today came into being with the Big Bang, and we conclude that prior to the BB there was nothing, then none of the fundamental concepts (thermodynamics, space, time, entropy, etc.) can be used to assume anything about the universe before the BB; all these physical concepts, as applied to the present universe, presuppose the existence of physical mass. I do believe that prior to the BB there could not have been anything like dimensions, time, space, or entropy. These are concepts used to describe the changes evidenced in a physical mass as a result of an application of energy.

        • June 10, 2012 11:56 am

          The Big Bang, conceived as a space-time singularity, is an extrapolation backwards in time of what are observed to be later conditions of universal expansion. There is no evidence that this extreme earliest condition actually occurred, and no obvious reason to believe that it did. Instead, the universe could have begun in a dense but finite state at some slightly later moment, avoiding the singularity and its associated discontinuities.

  2. June 7, 2012 12:07 pm

    A few random thoughts on topics related to this…

    1. Terminology. Here is a sense in which I think what Tim calls the terminological question may matter. Call the distant past state "low entropy" if you like, but if you can't get the stat mech apparatus going (or at least the nonequilibrium Boltzmannian bit), then one can't think of the universe as a whole as transitioning between successively more and more likely states. Then the claim that that past state is unlikely is just nakedly an intuition that that state is psychologically surprising. (That's fine… often that's the way science goes… but then the claim that we *know* the low-entropy past is one of science's greatest problems, etc., is overstated, to say the least.)

    2. Macroscopic autonomous equations. One question that's interesting to me is whether there are the appropriate sort of macroscopic autonomous equations in cosmology to get entropy increasing. Well, I think that there are, but maybe not for the whole universe. That is, even if we were able to define a nonequilibrium Boltzmann entropy for some early macrostate (a big if), we still wouldn't know whether that entropy was likely to go up or down. But if we knew that there were some macroscopic autonomous equation operative, then, with some other assumptions, we might be confident that it did. I'm thinking here along the lines suggested in this nice paper: http://arxiv.org/abs/cond-mat/0508089

    3. Priors. At a conference I recently ran
    http://philosophyfaculty.ucsd.edu/faculty/ccallender/timeconfindex.html
    Sean Carroll surprised me by saying that all this talk of the low entropy condition (and others) being likely or unlikely is really not about probabilities but rather about setting one's priors. (Sean – correct me if I'm wrong.) I've always thought that something like this must be the case, but I never thought physicists would think so. Hence the surprise… (Why do I think it's the case? One reason: if you think of probabilities as coming from the laws, and you think the low entropy condition is lawlike (because thermodynamic generalizations seem projectible), then probabilities over the laws themselves are metalaws – and what reason would we have for positing these?) But if that's right, then it seems like there is a possible mismatch between the practice and the view. That is, there is a huge amount of very impressive and challenging work on devising measures according to which the flatness, horizon (etc.) problems are problems, i.e., monstrously unlikely. See all the literature on the Gibbons-Hawking measure. Is all that just to motivate a prior?

  3. June 7, 2012 8:08 pm

    I think that we are converging on an important point. A particular probability is only low in the context of a given sample space with assumed probabilities. The initial conditions of the universe in the "standard" cosmology have a fabulously low probability if we assume that, for example, the initial configuration of the universe could have been almost anything at all, as Penrose seems to have done. Similar statements can be made for other "finely tuned" features of our universe. On the other hand, consider the same probabilities conditioned on the premise that life eventually develops (a form of the "anthropic" principle). Then the probabilities will be very different. This is not a cause-and-effect relationship or a law, but merely the result of selecting a subset of the total original sample space.
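
    To make the sample-space point concrete (schematically; the labels are mine, not the commenter's): let C be the special initial configuration and L the proposition that life eventually develops. Bayes' theorem gives

    \[ P(C \mid L) = \frac{P(L \mid C)\, P(C)}{P(L)}, \]

    which can be close to 1 even when the unconditional P(C) is fabulously small, provided C is more or less required for L. That is selection within a sample space, not causation and not a law.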

    If we start with this point of view, then we ask very different questions. Specifically, what is the possible nature of the full sample space? How are the multiple universes generated? What is the set of properties that can vary among them that matters, and what are the ranges of the associated parameters? The variable features can certainly include the initial space-time configuration of the universal material. An example is Penrose's twistor theory, a relativistic scheme in which twistors are fundamental objects that precede any specific space-time configuration. Instead, they are the carriers of quantum entanglement, and "ordinary space-time notions … are to be constructed from them." It would seem that such an approach would allow any number of universes to be formed from a set of fundamental objects through operations that take the form of projections (within groups) and defined initial space-time configurations, all without changing the objects themselves. Some of these universes might then look like our own.

  3. Alison Fernandes
    June 17, 2012 10:48 am

    One thing I wanted to hear a little more about is how the law-status of the PH is supposed to affect its need for explanation. In a previous comment I was sceptical about there being a direct link between an event's having low probability and its requiring explanation. Both high and low probability events require explanation, and the lack of either may point to gaps in our current theories; the difference is that if an event occurs with low probability, the explanans need not imply that its occurrence was probable. So, while I actually agree that the PH doesn't cry out for explanation, I don't see how the fact that the PH is used in determining an event's probability (and so itself has probability 1) means that it needn't itself be explained.

    Now, the argument might be only that, given this probabilistic account of what requires explanation, the past hypothesis does not require explanation. In which case I have no objection. But there seems to be a suggestion that the lawlike status of the PH exempts it from explanation altogether, given any reasonable account of explanation. And I find this curious. Certainly we would not hold this to be true of (non-fundamental) laws in general. And particularly if we adopt a best-systems account of laws, why think that being part of a true theory that best maximizes simplicity, informativeness, etc., exempts something from being explained? (Presumably, if we were working with causal explanation, it would be obvious that the PH was exempt. The controversy suggests there are some other, more general notions of explanation in mind.)

    One response might be that (regardless of what notion of explanation we adopt) explanations will eventually have to appeal to fundamental physical theory, and so be committed to whatever unexplained explainers the theory itself is committed to. So these unexplained explainers are necessarily exempt from explanation. But I'm not yet convinced. Firstly, we might have a situation where different parts of fundamental theory can explain other parts of it (with no obvious means of deciding which is to count as more fundamental). And secondly, we need to look more closely at the connection between fundamental laws and explanation. If we define being a fundamental law as simply being a law that explains other laws and cannot itself be explained, then this doesn't help settle the controversy over the PH; it just re-emerges as a controversy over whether the PH is fundamental. And if we have some other way of defining a fundamental law that makes it reasonably clear that the PH is one (perhaps the BS account), then we need to show how being fundamental, in this sense, implies that it cannot or need not be explained.

    I don't think I'm actually disagreeing with what Craig Callender says, and I agree with his comments that follow: that whether having the PH as an unexplained posit is a defect of current theory will depend on how our theories develop, and is not something that can be decided in advance.

    Craig, I was also curious to hear more about the view on which talk about the likelihood of the PH is really talk about assigning priors. Could you elaborate a little on your reasons for being tempted by it, and on what problems you think it solves?

    • June 19, 2012 6:11 pm

      Alison,

      Yes, I was thinking what you say where you write, “One response might be that…” Two quick replies…

      First, yes, you're right: really this just relocates the debate to whether the PH is a fundamental law. From a systems perspective, I can see arguments for and against this. For: it's strong and simple. Against: really it's not simple when stated in the basic vocabulary. From a non-systems perspective, well, probably being a basic explainer is a symptom of being a fundamental law, even if not constitutive of it. In that case, it would be very hard to peer behind the scenes and disentangle its putative fundamentality from its ultimate-explainer status.

      Second, could it be a fundamental law and yet still be explained by something else? There are views of science that permit this. I just skimmed Jim Weatherall's paper on explanation and geodesic motion, recently placed on the philsci archive. In it he suggests a "puzzleball" picture of science, in which what is derived from what depends on the context. If one had this kind of view, why not? I guess one could also believe in metalaws. But I can't make sense of them. All claimed metalaws seem to me to be either just plain old laws or not laws at all.

      On priors, I guess I think that the priors view doesn't introduce new problems the way genuine probabilities would: not so much that it solves problems as that it doesn't create them. Some bits of theory are psychologically more surprising to some than other bits, and often these surprises lead to scientific progress. But elevating these surprises to real probabilities just introduces trouble. Are these probabilities propensities? Systems-chances? Frequencies? Trouble every place you look…
