
David Lewis’s Humean supervenience thesis (HST) says that the world’s fundamental structure consists of the arrangement of qualitative, intrinsic, categorical, and natural properties of space-time points (or perhaps some other suitable replacement[1]) and the spatio-temporal relations of such points. All derivative structure globally supervenes on such fundamental structure. Furthermore, “Humean supervenience,” writes Lewis, “is named in honor of the greater denier of necessary connections. It is the doctrine that *all there is to the world is a vast mosaic* of *local matters of particular fact*, *just one little thing and then another*.”[2] The HST therefore entails that the fundamental physical state of the world is separable.[3] However, if fundamental physics delivers an end-game fundamental physical theory that is non-separable, then the HST is false. Say that a fundamental physical theory is non-separable when,

… given two regions A and B, a complete specification of the states of A and B separately fails to fix the state of the combined system A + B. That is, there are additional facts—nonlocal facts, if we take A and B to be spatially separated—about the combined system, in addition to the facts about the two individual systems.[4]

Many theorists have argued that the HST is untenable in light of quantum physics.[5] The existence of entangled quantum states is an implication of every interpretation of quantum mechanics.[6] Entangled quantum states do not globally supervene on local matters of particular fact, “[t]hat is, the local properties of each particle separately do not determine the full quantum state and, specifically, do not determine how the evolutions of the particles are linked.”[7] In fact, the non-separability of quantum mechanics was one reason why Einstein believed the theory to be incomplete.[8] There are responses to this objection from quantum mechanics, but I will not pursue that specific debate here. What I *will* argue is that one of the leading theories of quantum gravity is non-separable (and not because of quantum entanglement).

*Canonical Quantization of GTR*

The leading canonical quantum gravity model (CQG) is loop quantum gravity (LQG). Following Rickles (2008), I note that according to the CQG approaches, GTR provides the details about how the metric (which characterizes the geometry of space-time) evolves.[9] Typically on CQGs, space and time come apart, with the former evolving against the background of the latter. Such a separation is obtained by the introduction of an equivalence (the space-time manifold ℳ is assumed to be diffeomorphic to a product) and a foliation:

(1) ℳ ≅ ℝ × *s*

where *s* is a 3D hypersurface that is compact, and where the foliation is:

(2) 𝔍_{t}: *s* → (∑_{t} ⊂ ℳ)

Every hypersurface ∑_{t} amounts to a temporal instant, and the manifold then is an agglomeration of such instants understood as a one-parameter family. In the context of CQGs, there are a number of avenues from such an agglomeration to a *bona fide* manifold. The fact that there are such avenues is generated by the diffeomorphism gauge symmetry of GTR. The diffeomorphism constraint (a vector constraint), the Hamiltonian constraint (a scalar constraint), and various gauge functions on the spatial manifold together generate diffeomorphism gauge transformations.[10] Moreover, these constraints and functions evolve space forward one space-like hypersurface at a time.[11] The entire theory remains generally covariant, and so the laws hold for coordinate systems related by coordinate transformations that are both arbitrary and smooth.[12] CQGs, therefore, understand both the geometry of the manifold and the gravitational field in terms of the evolutions of various fields, which are defined over space-like hypersurfaces ∑_{t} on an assumed foliation.
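The constraint structure just described can be made explicit in the canonical (ADM-style) formulation on which these approaches build. Schematically (conventions and smearings vary across presentations), the total Hamiltonian is a sum of the constraints weighted by the gauge functions:

(3) H[N, N^{a}] = ∫ (N ℋ + N^{a} ℋ_{a}) d³x ≈ 0

where the integral is over *s*, ℋ is the scalar (Hamiltonian) constraint, ℋ_{a} is the vector (diffeomorphism) constraint, the lapse N and shift N^{a} are the gauge functions on the spatial manifold, and the “≈ 0” indicates that the expression vanishes on physical states.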

*Loop Quantum Gravity*

Again, the leading and most popular CQG is loop quantum gravity.[13] Proponents of this approach maintain that GTR can be simplified, and that one can understand the theory in terms of gauge fields.[14] Quantum gauge fields can be understood in terms of loops. By analogy with electrodynamics, we can say that space-time geometry is encoded in electric fields of gravitational gauge fields. The loops appropriately related to such electric fields weave the very tapestry of space itself.[15] According to LQG then, the fundamental objects are networks of various interacting discrete loops.[16] Many proponents of LQG maintain that these fundamental networks are arrangements of spin networks.[17]

Spin networks do an amazing amount of work for LQG. They not only provide one with the means to solve the Wheeler-DeWitt equation (see Jacobson and Smolin (1988)), but arrangements of such networks give rise to both the geometry of space-time (Markopoulou (2004, p. 552)) and a fundamental orthonormal basis for the Hilbert space in LQG’s theory of gravity.[18] Furthermore, the role of spin networks in LQG suggests that LQG is non-separable. The causal structure of space-time is not determined by the categorical and local qualitative properties of space-time points and their spatio-temporal relations, nor by individual loops and the spatial relations in which such loops stand. Let me explain.

On one interpretation of LQG, spin networks are types of causal sets, and so LQG in the quantum cosmology context has some similarities with quantum causal history (QCH) approaches.[19] Thus, LQG implies that the causal structure of the cosmos is determined by partially ordered and locally finite (in terms of both volume and entropy) sets. Such sets are regarded as events which one associates with Hilbert spaces with finitely many dimensions. These pluralities identified as events are “regarded as representing the fundamental building blocks of the universe at the Planck scale.”[20] Notice that these building blocks are pluralities of loops.[21] Individual loops themselves do nothing to determine causal structure. Moreover, some loops are joined in such a way that *they are not susceptible to separation even though they are in no way linked* (*e.g.*, Borromean rings).[22] The spatio-temporal relations of such loops do nothing to determine that self-same structure, for (again) spin networks of loops weave together space-time geometry itself.[23] What is more, even on non-causal-set approaches to LQG, the very dynamics and evolution of quantum gravitational systems involve transitions between spin networks. On orthodox LQG (without causal sets), quantum states are sets “of ‘chunks’, or quanta of space, which are represented by the nodes of the spin network, connected by surfaces, which are represented by the links of the spin networks.”[24] Causal structure is therefore determined by interrelated systems of loops, not individual loops and their spatio-temporal relations.

I conclude that on varying approaches to LQG, the fundamental entities of the theory determine spatio-temporal relations while failing to bear such relations. Such entities also constitute systems, the fundamental parts of which are non-separable. Thus, if LQG is approximately correct with respect to what it has said so far about physics at the Planck scale, the universe is non-separable. LQG suggests that the HST is false.


* Thanks to Dean Rickles and Aron C. Wall for comments on an earlier draft of this post. Any mistakes that remain are mine.

[1] The deliverances of physics determine whether or not the replacement is suitable.

[2] Lewis (1986a, p. ix) emphasis mine. John Hawthorne summarized the HST by stating that derivative facts supervene “on the global distribution of *freely recombinable fundamental properties*.” Hawthorne (2006, p. 245). Hawthorne does not endorse the HST.

[3] Lewis (1986b, p. 14); *cf*. the discussion in Maudlin (2007, p. 120) who characterized the separability of the view as follows, “[t]he complete physical state of the world is determined by (supervenes on) the intrinsic physical state of each space-time point (or each pointlike object) and the spatio-temporal relations between those points.” Maudlin (2007, p. 51).

[4] Wallace (2012, p. 293).

[5] See the discussions in Lewis (1986a, p. xi); Loewer (1996, pp. 103-105); and Maudlin (2007, pp. 61-64).

[6] Schrödinger (1935, p. 555) said that entanglement is “…the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought”.

[7] Loewer (1996, p. 104).

[8] See Einstein (1948); *cf*. Brown who remarked, “…[Einstein’s] opposition to quantum theory was based on the fact that, if considered complete, the theory violates a principle of separability for spatially separated systems.” Brown (2005, p. 187).

[9] I am following Rickles’s discussion of CQGs. See Rickles (2008, pp. 323-327).

[10] Rickles (2008, p. 324); Sahlmann (2012, p. 189).

[11] In the present context, the Hamiltonian is a sum of the aforementioned constraints.

[12] I should add that if one describes space-time as 3+1 dimensional via a fixed split, the theory’s manifest general covariance is broken.

[13] See Rovelli (2011a), (2011b); Smolin (2001, pp. 125-145), (2002), (2004, pp. 501-509).

[14] The insight is Ashtekar’s (1986) who leaned on Sen (1981); *cf*. Smolin (2004, p. 501).

[15] “…the loops of the quantized electric field do not live anywhere in space. Instead, their configuration defines space.” Smolin (2004, p. 503).

[16] Some of these loops are knotted, meaning that “…it is impossible, by smooth motions within ordinary Euclidean 3-space, to deform the loop into an ordinary circle, where it is not permitted to pass stretches of the loop through each other…” Penrose (2005, p. 944).

[17] A spin network is a “graph, whose edges are labeled by integers, that tell us how many elementary quanta of electric flux are running through the edge. This translates into quanta of areas, when the edge pierces a surface” Smolin (2004, p. 504). Such networks constitute eigenstates of operators representing volume and area (Rickles (2005, p. 704)). The idea comes to us from Penrose (1971). Before the use of spin-networks theorists used multi-loop states. See Rovelli (2008, p. 28).

[18] There are theorems which establish each result. See Smolin (2004).

[19] Markopoulou and Smolin (1997) join “the loop representation formulation of the canonical theory [of gravity] to the causal set formulation of the path integral.” (ibid., p. 409). See also Hawkins, Markopoulou, and Sahlmann (2003, p. 3840); Rovelli (2008, p. 35); and Smolin (2005, p. 19).

[20] Hawkins, Markopoulou, and Sahlmann (2003, p. 3840).

[21] In fact, one should understand a spin network state in terms of “a sum of loop states.” Spin network states are quantum states (they are the very eigenstates of observables which help us get at volumes and areas via measurement) understood as pluralities of loop states. Quotations in this note come from Smolin (2005, p. 13).

[22] See Penrose (2005, p. 944).

[23] As Rovelli stated, “[a] spin network state does not have position…a spin network state is not *in* space: it *is* space. It is not localized with respect to something else: something else (matter, particles, other fields) might be localized with respect to it. To ask ‘where is a spin network’ is like asking ‘where is a solution of the Einstein equations.’” (2004, pp. 20-21 emphasis in the original)

[24] Rovelli (2008, p. 38). Even if one wanted to rid LQG of the redundancy in spin networks (due to symmetry) by switching to spin-knots or s-knots (diffeomorphism equivalence classes of spin-networks), my case for the non-separability of LQG would only be strengthened (see Rovelli (2004, pp. 263-264); *cf*. Rickles (2005, p. 710)).


Channeling, to some degree, Bertrand Russell (1912-1913), Jonathan Schaffer (2008) insisted that there is no room for causation in proper scientific practice. Science only requires natural laws and unfolding history (one physical event after the other). He remarked:

…causation disappears from sophisticated physics. What one finds instead are differential equations (mathematical formulae expressing laws of temporal evolution). These equations make no mention of causation.

Of course, scientists may continue to speak in causal terms when popularizing their results, but the results themselves—the serious business of science—are generated independently.[1]

Considerations such as those in the quoted pericope above quite naturally breed an argument for causal reductionism, the view that obtaining causal relations are not a part of the world’s fundamental structure, and that as a consequence causal facts reduce to (are nothing above and beyond) a species of non-causal facts. For Schaffer would add that if sound scientific practice can proceed without causation, making use instead of natural nomicity and history solely, then causal reductionism is true. Thus, causal reductionism *is* true.

I find Schaffer’s justification for the claim that praiseworthy scientific practice does without causation to be problematic. I would argue that with respect to extremely empirically successful theories arrived at on the basis of sound scientific practice, the notion of causation shows up primitively and indispensably in the respective *interpretations* of the underlying formalism of those theories. In the present post, I have space only to explore one such scientific theory, *viz*., the general theory of relativity (GTR).

In GTR, the principle of equivalence (PE) implies that inertial mass just is gravitational mass.[2] What is more, the effects of gravity fade away in coordinatizations (or systems of coordinates) that are locally inertial inside some gravitational field, given PE. The evolutions of systems not appearing in gravitational fields are described by the same equations which depict the evolutions of said systems *in* gravitational fields, provided that the equations are generally covariant (no privileged coordinate system or local Lorentz frame).[3] All of the above indicates that the laws of GTR reduce to those of STR given a flat metric, and that the *effects* of gravity are not sensed by observers experiencing free fall.[4] We also know from Einstein’s PE that gravitation is strongly related to the curvature of spacetime.[5] Einstein’s equation details the relationship, though it says a bit more since it also describes the relationship between the stress-energy tensor (T_{ab}) and the Riemann curvature (R) of space.[6] Famously, the equation also “…relates the spacetime geometry to…matter distribution.”[7]
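For reference, the relationship just mentioned is standardly written as follows (sign and unit conventions vary across presentations):

G_{ab} ≡ R_{ab} − ½R g_{ab} = (8πG/c⁴) T_{ab}

where R_{ab} is the Ricci tensor (built from the Riemann curvature), R the scalar curvature, g_{ab} the metric, and T_{ab} the stress-energy tensor: spacetime geometry on the left, matter distribution on the right.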

The gravitational field represented by the Lorentzian metric (g_{ab}) “…interacts with every other one and thus determines the relative motion of the individual components of every object we want to use as rod or clock. Because of that, it admits a metrical interpretation.”[8] Geodesics of g_{ab} correspond to worldlines of objects. These geodesics are paths of the curved “spacetime metric”.[9] The metric itself can be measured through the use of clocks and rods.[10]

I believe that the best interpretation of Einstein’s equation indicates that the gravitational field *causally influences* both photons and massive bodies (the causal language was already explicit in the excerpt from Rovelli above). In fact, the Lorentzian metric of Einstein’s equation is commonly interpreted as the field which **causally influences** objects, constraining their movement, determining their relative motions, and even coupling with and influencing all other fields. Moreover, its behavior is quite different from the behavior of other fields since (quoting Hawking) “…it shapes the arena in which it acts”.[11] This gravitational field has also been described as a “causal field” by philosophers of science.[12] Furthermore, gravitation, say the physicists, is that “which affects every particle in the same way.”[13] And this understanding of spatial curvature or gravitation as causally interacting and influencing “matter in relativity, via Einstein’s equation, is regarded as…”[14]

The idea that affine structure plays a quasi-causal role in explaining the motions of bodies figures significantly in Einstein’s criticism of Newtonian mechanics and SR and in his subsequent understanding of GR.[15]

…the fact that it [Newtonian absolute space] acted without being acted upon was held up as problematic [by Einstein]; a ‘defect’ not shared by the spacetimes of GR (Einstein, 1922, 61-62).[16]

So I believe GTR gives us a good reason for believing that causation shows up in correct interpretations of the mathematical formalism of highly successful physical theories.

No doubt the causal reductionist will question my interpretation of the formalism. She will ask, “is it not true that massive bodies interact (perhaps causally) with the self-same field?” Is not the famous dictum of John A. Wheeler the claim that “spacetime tells matter how to move; matter tells spacetime how to curve”?[17] Does this not suggest that if there are real obtaining causal relations involved, then the gravitational field causes a material body to behave *x*-ly, while the material body’s behaving *x*-ly causes the field to behave *y*-ly? Does this not breed a circle? Should we not prohibit such causal circles?

I do not find these questions to be very troubling. The gravitational field is present even in the absence of all material bodies, as Harvey Brown noted,

“Once it is recognized that g_{mn} is an autonomous field (or fields) in its own right, nothing in Mach’s philosophy implies that *its dynamical behaviour or very existence* must be determined by the presence of (other kinds of) matter.”[18]

Again, the gravitational field does not owe its very existence to the presence of material bodies (sorry relationalists). Spacetime curvature and geometric structure are prior.[19] I confess to such priority because it’s the very existence of the gravitational field which causally influences the behavior of fields and fundamental particles both massive and massless. I therefore avoid a causal circle since what material bodies influence is the inertial structure of spacetime, not its existence. Oliver Pooley made this clear: the “[i]nertial structure, as encoded in g_{ab}, *is influenced but not determined* by the matter content of spacetime.”[20] So the existence of the gravitational field causally influences particles and fields by constraining their trajectories and determining their relative motions, while the presence of material bodies influences the inertial structure of spacetime.

Carl Hoefer (2009, pp. 703-704) has argued that if GTR implied that there are certain obtaining causal relations, or if its best interpretation requires the use of causal notions, the reductionist should not be worried, for GTR is not itself a fundamental physical theory. GTR’s picture of the world is not the quantum mechanical picture of the world, and the thought is that GTR will have to yield to QM in ways that would rub out any attempt to understand the causal activity of the gravitational field as fundamental physical activity. But I could very well argue that in string theory the graviton plays a causal role. I could also argue that in the context of certain string theories, the interactions of branes are best interpreted as causal interactions involving the exertion of causal influence on one another in a way that is not exactly circular or symmetric. Unfortunately, space constraints prohibit me from trotting out such interpretations, but I do believe there remains a promising response to Hoefer’s worry that can be expressed in a sound bite (not really). Look back to my characterization of Schaffer’s argument for causal reductionism. Notice that one of the premises of the argument states that sound scientific practice implies causal reductionism. That premise does not say that such practice must be peculiar to *fundamental* physical investigation solely. Obviously, sound scientific practice is what physicists leaned on when developing GTR. And GTR is of course an extremely successful physical theory, and that is precisely why any quantum theory of gravity must recover its predictive success. Thus, Hoefer’s complaint should not worry the anti-reductionist about causation.

Causal reductionists will no doubt judge my appeal to GTR to be cheap and shallow. They will insist that the authorities I have invoked are merely describing matters with a particular gloss. Surely we can do without causal talk.

In the absence of both a successful reductive analysis of causation and a correct reductive metaphysical theory of the causal relation, I do not see why we should believe that causal talk in the work of physicists should be understood as redundant and imprecise talk. One cannot dismiss such causal language without providing a worthy proxy or substitute for it. The appropriate substitute arrives at the end of a careful reductive analysis of causal facts and an ontological reduction of the causal relation. The problem is that after a great many years of trying, attempts to reductively analyze and ontologically reduce causation have pretty much universally failed. As two foremost experts on the topic, L.A. Paul and Ned Hall concluded:

After surveying the literature in some depth, we conclude that, as yet, there is no reasonably successful reduction of the causal relation. And correspondingly, there is no reasonably successful conceptual analysis of a philosophical causal concept. No extant approach seems able to incorporate all of our desiderata for the causal relation, nor to capture the wide range of our causal judgments and applications of our causal concept. Barring a fundamental change in approach, the prospects of a relatively simple, elegant and intuitively attractive, unified theory of causation, whether ontological reduction or conceptual analysis, are dim.[21]

I therefore conclude that the direct argument for causal reductionism from sound scientific practice is undercut by a proper understanding of the gravitational field.

___________________________________________________________________

* I would like to thank Tom Banks and Aron C. Wall for their comments on an earlier version of this piece. Any mistakes that remain are mine.

[1] Schaffer (2008, p. 92). See also Hitchcock (2007); and Norton (2007a), (2007b). Bertrand Russell said, “In the motions of mutually gravitating bodies, there is nothing that can be called a cause, and nothing that can be called an effect; there is merely a formula.” Russell (1912-1913, p. 14)

[2] Zee (2013, p. 258). There are strong and weak forms of this principle for which see Brown (2005, pp. 169-172). Ciufolini and Wheeler (1995, p. 13) discuss three versions of the principle.

[3] Weinberg (2008, p. 511).

[4] Misner, Thorne, and Wheeler (1973, pp. 312-313).

[5] Penrose (2005, p. 459).

[6] It, of course, says even more than this.

[7] Wald (1984, p. 68).

[8] Rovelli (1997, p. 194).

[9] Wald (1984, p. 67).

[10] Brown (2005, p. 160).

[11] Hawking (1996, p. 5).

[12] Hoefer (2009, p. 687), though Hoefer does not believe there is bona fide causation in GTR.

[13] Hawking and Ellis (1973, p. 2).

[14] Geroch (1978, p. 180) emphasis mine.

[15] Pooley (2013, p. 541). See Einstein (1916, pp. 112-113).

[16] Pooley (2013, p. 541).

[17] As quoted by Wheeler and Ford (1998, p. 235), quoting John A. Wheeler.

[18] Brown (2005, pp. 159-160) emphasis mine.

[19] And here I’m agreeing (at least in part) with Balashov and Janssen,

“Does the Minkowskian nature of spacetime explain why the forces holding a rod together are Lorentz invariant or the other way around?…Our intuition is that the geometrical structure of space(-time) is the explanans here and the invariance of the forces the explanandum” Balashov and Janssen (2003, p. 340).

[20] Pooley (2013, p. 541).

[21] Paul and Hall (2013, p. 249).



Our first avenue to the multiverse is cosmological: many inflationary models predict that the early inflation of our universe is eternal, continuously spinning off bubble universes in a sea of expansion. This leads to infinitely many distinct universes, each with its own fundamental constants and ratio of dark energy to dark matter and ordinary matter (for more on this see [6]).

The second comes from one interpretation of quantum mechanics. The Everett, or many worlds, interpretation holds that the world is completely characterized by a universal quantum wavefunction which never collapses. After any experiment, the wavefunction—and the world—splits, with a branch corresponding to every possible measurement outcome. So, for example, if I am measuring the spin of an electron, after my measurement there are two descendants of me: one who measured spin up, and one who measured spin down, each living in his own local universe. We should note that this is just one—very controversial—way of understanding quantum mechanics.

To keep things simple we’ll call the totality of all that there is ‘the multiverse’ and smaller, isolated, universe-like regions ‘local universes’. We typically take these two theories as providing us with very different multiverses: the cosmological multiverse is a collection of matter-filled regions (local universes) separated by infinitely expanding space, whereas the many worlds model of quantum mechanics gives us one wavefunction in a superposition of states, each of which corresponds to a local universe. But some cosmologists think these might be linked: we won’t discuss this here, but see [1] and [2] if you’re interested.

These two theories share a problem: they are apparently unfalsifiable. Since they predict that all measurement outcomes occur in some local universe, there are no results which are incompatible with either theory. Even if falsifiability is not the arbiter of scientific worthiness, the problem remains. We gain evidence for a theory by testing its predictions; but since these theories claim that every experimental outcome occurs somewhere, they don’t seem to predict anything about any particular experiment. So it is difficult to see how any experimental result or observation could possibly count as evidence for either.

In the case of multiverse expansion models, the model predicts that every possible ratio of matter to dark energy exists in some universe; it predicts that there is some universe for every way of setting (at least some of) the fundamental constants, and for every distribution of matter (this isn’t universally agreed upon, although consensus is growing; for an overview see [3], and for dissent from the inflationary paradigm see [5]). But we take features of our local universe—such as its vacuum energy or the uniformity of the microwave background radiation—to be evidence for the theory. How is this possible if the theory predicts that there are infinitely many universes without these features?

In the case of Many Worlds Quantum Mechanics, the theory predicts that every experimental outcome occurs on some branch. But we take the results we observe—such as the frequency of spin-up results in a Stern-Gerlach experiment—to be evidence for the theory. How is this possible if we know that the theory predicts infinitely many branches with different frequencies?

Call this the *evidence challenge.* The answer given by both theories is roughly the same: although we know that, for each experiment, every possible result shows up somewhere, we can still have a probability that *we* are in some region of the multiverse. We get a probability that *our* area is like *this* rather than like *that.*

What’s weird about this is that this is not a probability for the multiverse to develop in some way. We know exactly how the multiverse will develop. Instead, it is a probability that we are in some part of it rather than another. It’s essentially self-locating or indexical. (Philosophers call this sort of probability *de se*.) We know what the multiverse is like with certainty; our predictions, and so our evidence, are predictions about where we are instead of predictions about what happens.

Confirmation, on this model, involves two steps: First, we gather information *E* about our local universe. We then assume that we are in a typical part of the multiverse—a region that’s like most. Our evidence *E* confirms the theory if and only if the theory says that *E* holds in most places. To show that a theory can be confirmed, then, we must show that the theory gives us a natural measure which can tell us what *most* universes are like.

But this talk about *most* is a distraction: we know that there are infinitely many of each type of local universe. Coming up with the right measure of *most* is the cosmological measure problem. There is not yet a consensus about what the correct measure is. Without an agreed-upon measure on the table, it’s hard to tell whether the measure in question could give evidence for the theory. Most measures involve finding a preferred ordering of observations, cutting off these observations before the sequence diverges, and then taking the limiting relative frequency. (For a recent overview of the options, see [4].) We then assume that we are equally likely—according to the measure—to be any observer. This assumption is called the *typicality* or *Copernican* assumption.
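The cutoff-then-limit procedure just described can be sketched in a few lines. This is a toy illustration only: the staging, the counts, and the function name are hypothetical simplifications, not any measure actually proposed in the literature.

```python
from fractions import Fraction

def relative_frequency(counts_by_stage, feature, cutoff):
    """Relative frequency of observers with `feature` among all observers
    produced in the first `cutoff` stages of some preferred ordering."""
    total = 0
    with_feature = 0
    for stage in counts_by_stage[:cutoff]:
        total += sum(stage.values())
        with_feature += stage.get(feature, 0)
    return Fraction(with_feature, total)

# Toy multiverse: every stage produces 3 universes like ours for each 1 unlike
# it, so the relative frequency is stable at 3/4 wherever we cut off.
stages = [{"like_ours": 3, "unlike_ours": 1}] * 1000
print(relative_frequency(stages, "like_ours", 10))    # 3/4
print(relative_frequency(stages, "like_ours", 1000))  # 3/4
```

In the realistic case the counts at each stage grow without bound and the answer depends sensitively on the chosen ordering and cutoff, which is exactly why the measure problem is hard.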

Proponents of many worlds quantum mechanics agree that the natural measure over branches is the Born rule—which tells us that the likelihood we’re in some local universe is proportional to the squared modulus of that local universe’s amplitude in the universal wave function. More branches are like ours if our branch has a high amplitude. The trick, for many worlds, is not figuring out what the correct measure is. It’s in justifying *using* this measure to gain evidence for the theory. Most justifications go via decision theory; they argue that an agent in a many-worlds universe will use the Born rule to weight their decisions. To their opponents, these justifications seem too pragmatic. (For a thorough exploration of this strategy, see [7] or [8].)
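The Born-rule weighting itself is easy to state concretely. A minimal sketch (the function name is hypothetical): each branch gets a weight proportional to the squared modulus of its amplitude.

```python
import math

def born_weights(amplitudes):
    """Normalized Born-rule weights: |amplitude|^2 for each branch."""
    mags = [abs(a) ** 2 for a in amplitudes]
    total = sum(mags)
    return [m / total for m in mags]

# A spin measurement whose up/down branches have amplitudes 1/√3 and √(2/3):
up, down = born_weights([1 / math.sqrt(3), math.sqrt(2 / 3)])
print(round(up, 3), round(down, 3))  # 0.333 0.667
```

The controversy is not over this arithmetic but over why a branching agent should treat these weights as evidential probabilities at all.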

There’s a knee-jerk reaction to all of this, which is to reject the idea that objective physical probabilities can be self-locating. Physics should tell us how likely the universe is to have some property, or how likely things are to develop in a certain way, or how likely an experimental outcome is. It’s supposed to give us probabilities which are *about the world*.

This seems to be a requirement if these objective probabilities are going to feature in explanations of our surroundings, which physical probabilities surely do. Self-locating probabilities don’t seem like the sort of thing that can do this. How can the likelihood that I’m over here, rather than over there, explain why this electron is spin up? How can we explain the structure of *our* universe by citing the likelihood that we end up here rather than somewhere else?

And one reaction to this knee-jerk is to reject an underlying intuition about explanations and physical probability—that the probabilities must guide the world, and that explaining A requires showing how A was produced. Doing so requires us to think of physical probabilities as deeply related to us: on this view, physical probabilities are just the best way of encoding information about what we should expect. Explanation is also closely connected to telling us what we *ought* to have expected, or showing how what we observe is part of a unified system. This is a revisionary take on physical probability, but one that many of us might already accept.

But even if we accept this us-directed notion of physical probability, both theories still have to justify the inference procedure described above. For one might be doubtful that *any* inferences of the sort described are justified. Making them requires us to rely on a *typicality* principle: that our local universe is like most; to make a prediction we must assume that our locality is like most consistent with our evidence. But what could justify this principle? Perhaps, like Hume’s Principle of the Uniformity of Nature (PUN), this is something we must accept to do science, but cannot justify. Still, a proponent of this sort of reasoning now has two basic epistemic assumptions: PUN and Typicality.

Comments welcome!

References:

[1] Aguirre, Anthony, and Max Tegmark (2012). “Born in an Infinite Universe: a Cosmological Interpretation of Quantum Mechanics.” arXiv:1008.1066v2

[2] Bousso, Raphael, and Leonard Susskind (2011). “The Multiverse Interpretation of Quantum Mechanics.” arXiv:1105.3796v3

[3] Davies, Paul C. W. (2004). “Multiverse Cosmological Models.” arXiv:astro-ph/0403047

[4] Freivogel, Ben (2011). “Making Predictions in the Multiverse.” arXiv:1105.0244v2

[5] Ijjas, Anna, Paul Steinhardt, and Abraham Loeb (2013). “Inflationary Paradigm in Trouble After Planck 2013.” arXiv:1304.2785v2

[6] Susskind, Leonard (2003). “The Anthropic Landscape of String Theory.” arXiv:hep-th/0302219

[7] Wallace, David (2005). “Quantum Probability from Subjective Likelihood: Improving on Deutsch’s Proof of the Probability Rule.” arXiv:quant-ph/0312157v2

[8] Wallace, David (2012). *The Emergent Multiverse. *Oxford: Oxford University Press.

]]>

Usually when we make causal claims or explanations, we’re talking about local causation: a particular event (or event-type, or state of affairs) occurring in a relatively small finite region of space-time, causing an event in another. We might speak of the high-pitched tone causing the glass to break, or CO_{2} emissions causing increased polar ice melt. While we might have some difficulty in identifying very diverse and diffuse causes and effects, we presume that the causes and effects are still local.

But what about system-wide causal claims? Assuming that a set of billiard balls, for example, constitutes an effectively closed system, could we claim that the entire configuration of billiard balls at one time causes the entire configuration at the next moment in time? Or could we claim that the state of the entire universe at one time causes the state of the universe at a later time? (Note that I’ll be leaving ‘a moment in time’ as a loose and vague notion here—whatever account of causation we use will have to be compatible with general relativity, but I won’t go into how that might be done here.)

In ‘On the Notion of Cause’ (1912−13), Russell famously argued that we should jettison the notion of causation altogether. His main concern was that, given the global laws we have in fundamental physics, nothing less than the entire state of the system at a given time would be enough (given the laws) to necessitate *any* event at the next. So far as we think causation requires nomic necessitation, then, we would need to consider the entire state of the system as a cause. And he took this to be a reductio of the position: if causes are also to be general types of events, of the sort science could investigate, there could be no cause-based science. Here’s Russell:

In order to be sure of the expected effect, we must know that there is nothing in the environment to interfere with it. But this means that the supposed cause is not, by itself, adequate to insure the effect. And as soon as we include the environment, the probability of repetition is diminished, until at last, when the whole environment is included, the probability of repetition becomes almost nil. (Russell 1912−13, pp. 7−8)

Either the causes would be so extensive and detailed as to be unique, and not the subject of scientific investigation, or they would not necessitate their effects. Science has to study repeatable events, and system-wide states would never be repeatable in the way required.

So, in Russell’s work we actually have an argument in favour of system-wide causation: it allows us to take the causal relation to be necessitation by fundamental laws. But we also have an argument against system-wide causation: it isn’t about the kind of repeatable events that science is concerned with. The argument in favour of system-wide causation seems clear enough, but what are we to make of the argument against? It seems that cosmology is precisely a field that is interested in non-repeatable events. Perhaps cosmology does not describe events at a sufficiently fine-grained level to explain how many actual events are nomically necessitated, but what about large-scale phenomena? Surely cosmology aims to account for those?

However, some later developments that followed Russell’s work didn’t advocate system-wide causation, but dropped the requirement that causes must nomically necessitate their effects. Instead, we were to explain what is going on in causation using counterfactuals and notions of intervention. Claiming that **a** causes **b** is roughly to claim that had we intervened on **a** in a suitably surgical way, this would have been a way of influencing **b**. The interventions themselves were characterised as causal processes: processes that disrupt some of the causal chains already present in the system while leaving others intact, and so allow us to test what causal chains there are. The main expositors of this approach are Woodward (2003) and Pearl (2000). These interventionist approaches don’t attempt to reduce causation to something else, but instead offer an elucidation of various causal notions in terms of other ones.

However, on the interventionist approach, it’s hard to see how we can talk about system-wide causation. Interventions were envisaged as processes that originate from outside the system we are studying. How is this approach to work when there is nothing outside the system? Here is Pearl on the issue:

…scientists rarely consider the entirety of the universe as an object of investigation. In most cases the scientist carves a piece from the universe and proclaims that piece in – namely, the focus of investigation. The rest of the universe is then considered out or background and is summarized by what we call boundary conditions. This choice of ins and outs creates asymmetry in the way we look at things, and it is this asymmetry that permits us to talk about “outside intervention” and hence about causality and cause-effect directionality. (Pearl 2000, p. 350).

But

If you wish to include the entire universe in the model, causality disappears because interventions disappear – the manipulator and the manipulated lose their distinction. (Pearl 2000, pp. 349−50).

We might claim this as a feature of the interventionist approach: the approach makes it clear how the causal structure of the world is tied to a particular limited perspective we take on it as experimenter or intervener, dividing the world up, and is not something that can be understood outside of these divisions. Are we then content to cease dealing with causal notions in sciences like cosmology?

There are a few remaining options on the table that I’ll note briefly. One is to start with the interventionist framework, but then extend our causal notions so that they can be system-wide. We might, for example, attempt to reduce the interventionist counterfactuals to law-based ones that can then be applied to whole systems—I take Albert (2000) and Loewer (2007) to follow this route. Or we might keep causation as a primitive relation that holds between local events, and build up system-wide causes out of those. Another quite different option is to go pluralist about the notion of causation: perhaps we were wrong to think that a single notion applies in all contexts. We should keep nomic necessitation as what counts for cosmology, and intervention as what counts for other contexts.

Whatever option we take here, the case of cosmology seems a useful testing ground for accounts of causation and their commitments regarding global causes.

**References:**

Albert, David Z. 2000. *Time and Chance*. Cambridge, Mass.: Harvard University Press.

Loewer, Barry. 2007. Counterfactuals and the Second Law. In *Causation, Physics, and the Constitution of Reality*, ed. Huw Price and Richard Corry, 293−326. Oxford: Oxford University Press.

Pearl, Judea. 2000. *Causality*. New York: Cambridge University Press.

Russell, Bertrand. 1912−13. On the Notion of Cause. *Proceedings of the Aristotelian Society, New Series*. 13: 1-26.

Woodward, James. 2003. *Making Things Happen: A Theory of Causal Explanation*. Oxford: Oxford University Press.

]]>

On very large scales (over hundreds of megaparsecs) the universe appears to be homogeneous. This fact formed one of the original motivations for inflation theory. According to inflation theory, the very early universe went through a phase of accelerated expansion that stretched a tiny portion of space to the size of our entire observable universe, stretching initial inhomogeneities over unobservable distances. On smaller scales, the universe is far from homogeneous; one can identify all kinds of structures, such as stars, galaxies, clusters of galaxies etc. According to inflation theory, these structures are considered to originate from quantum vacuum fluctuations. The usual story is that during the inflationary period these quantum fluctuations grew to become classical inhomogeneities of the matter density. These inhomogeneities then gave rise to structures through gravitational clumping. The primordial quantum fluctuations are also the source of the temperature fluctuations of the microwave background radiation.

This explanation of the origin of structures is regarded as part of the success story of inflation theory. One aspect of the explanation is, however, problematical, namely how the quantum fluctuations became classical. The quantum fluctuations are described by a state (a wave function) which is homogeneous and the Schrödinger dynamics does not spoil this homogeneity. So how can we end up with classical fluctuations which are no longer homogeneous? According to standard quantum theory this could only happen through wave function collapse. Such a collapse is supposed to happen upon measurement. But the notion of measurement is rather vague and therefore it is ambiguous when exactly collapses happen. This is the notorious measurement problem. This problem is especially severe in the current context. Namely, in the early universe there are no measurement devices or observers which could cause a collapse. Moreover, structures such as measurement devices or observers (which are obviously inhomogeneous) are themselves supposed to be generated from the quantum fluctuations. In order to deal with the measurement problem and with the quantum-to-classical transition, we need to consider an alternative to standard quantum theory that is free of this problem. A number of such alternatives exist: Bohmian mechanics, many worlds, and collapse theories.

According to many worlds theories, the wave function always evolves according to Schrödinger’s equation and never collapses. In this case, the quantum-to-classical transition could be explained through decoherence. Decoherence could lead to the identification of worlds which are themselves not homogeneous. Various possible sources for decoherence have been considered. However, it seems fair to say that no conclusive results have been obtained yet, neither on the source of the decoherence, nor on the time-scales over which this could happen.

According to collapse theories, collapses happen objectively, at random times. A number of research groups have explored possible collapse models that could account for the quantum-to-classical transition in inflation theory, see e.g. [1-4]. Especially Daniel Sudarsky and collaborators have been very active on this topic, see e.g. [2] and references therein. (Sudarsky also wrote a detailed exposition of the problem of the quantum-to-classical transition [5].)

Bohmian mechanics provides a different possible explanation of the quantum-to-classical transition. According to Bohmian mechanics, the fluctuations are described by an actual field configuration. There is also a wave function, the same as in standard quantum theory, but it never collapses, i.e., it always evolves according to Schrödinger’s equation. Its role is to guide the actual field in its motion. So the equation of motion of the actual field depends on the wave function. Now, the actual field is typically inhomogeneous, even though the wave function may be homogeneous. This is how Bohmian mechanics explains the inhomogeneity of the primordial vacuum fluctuations. Furthermore, Bohmian mechanics allows for a simple analysis of the quantum-to-classical transition. The classical limit is obtained whenever the actual field approximately evolves according to the usual classical field equations. Together with Nelson Pinto-Neto and Grasiele Santos, I have looked into this, and we found that the actual field starts behaving classically during the inflationary period, exactly at the expected time (see [6] and my talk at the Santa Cruz summer school; see also [7] for a similar discussion in the context of bouncing models).
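To make the guidance idea concrete, here is a minimal toy sketch, entirely my own illustration rather than anything from the papers below: a single non-relativistic particle in one dimension (not a field), guided by a freely spreading Gaussian wave packet, in units with ħ = m = 1. All function names and parameter values are invented for the example.

```python
import numpy as np

def psi(x, t, sigma0=1.0):
    """Freely spreading Gaussian wave packet centred at x = 0
    (units with hbar = m = 1); overall normalization is irrelevant
    for the guidance law, since only psi'/psi enters."""
    st = sigma0 * (1 + 1j * t / (2 * sigma0**2))
    return np.exp(-x**2 / (4 * sigma0 * st)) / np.sqrt(np.sqrt(2 * np.pi) * st)

def velocity(x, t, h=1e-5):
    """Guidance law: v = Im(psi'/psi), with psi' taken numerically."""
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2 * h)
    return np.imag(dpsi / psi(x, t))

def trajectory(x0, t_max=5.0, dt=1e-3):
    """Integrate dx/dt = v(x, t) with simple Euler steps."""
    x = x0
    for t in np.arange(0.0, t_max, dt):
        x += velocity(x, t) * dt
    return x

# Configurations starting off-centre are carried outward as the packet
# spreads, so an initially concentrated ensemble becomes spread out in x,
# even though the wave function itself keeps its symmetric Gaussian form.
print(trajectory(0.5), trajectory(1.0))
```

For this packet the exact trajectories are x(t) = x0·√(1 + t²/(4σ0⁴)), so the numerical integration can be checked against the closed form; the analogous field-theoretic calculation is what the papers below carry out.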

- A. Perez, H. Sahlmann, D. Sudarsky, On the quantum origin of the seeds of cosmic structure, Class.Quant.Grav. (2006); arXiv:gr-qc/0508100
- P. Cañate, P. Pearle and D. Sudarsky, CSL Wave Function Collapse Model as a Mechanism for the Emergence of Cosmological Asymmetries in Inflation, Phys. Rev. D (2013); arXiv:1211.3463
- J. Martin, V. Vennin and P. Peter, Cosmological Inflation and the Quantum Measurement Problem, Phys. Rev. D (2012); arXiv:1207.2086
- S. Das, K. Lochan, S. Sahu, T.P. Singh, Quantum to Classical Transition of Inflationary Perturbations – Continuous Spontaneous Localization as a Possible Mechanism, Phys. Rev. D (2013); arXiv:1304.5094
- D. Sudarsky, Shortcomings in the Understanding of Why Cosmological Perturbations Look Classical, Int. J. Mod. Phys. D (2011); arXiv:0906.0315
- N. Pinto-Neto, G.B. Santos and W. Struyve, Quantum-to-classical transition of primordial cosmological perturbations in de Broglie-Bohm quantum theory, Phys. Rev. D 85, 083506 (2012); arXiv:1110.1339
- N. Pinto-Neto, G.B. Santos and W. Struyve, Quantum-to-classical transition of primordial cosmological perturbations in de Broglie-Bohm quantum theory: the bouncing scenario, Phys. Rev. D (2013); arXiv:1309.2670

]]>

We can think of there being a couple of major families of approaches to explanation. The first I’m going to call dependence-based explanation. The idea here is that there is some underlying dependence structure, and to explain an event is to show what it depends upon. Causal explanation, where to explain something is to give information about its causal history, is an example of this type of explanation. The other family is unificationist explanation. On this approach, to explain something is to show how it fits into the general patterns of the world.

Importantly, on the dependence approach, when we explain an event we appeal to facts that are prior to the event. This doesn’t mean the facts are temporally prior to the event (though they probably are) but they are prior in terms of dependence. So, for example, on a causal approach to explanation where causation is a metaphysically robust relation in the world, the facts which explain an event are metaphysically prior to the event. On the unificationist view the facts that explain do not have to be prior in this way.

I want to suggest that we should take distinctively statistical mechanical explanations to be instances of unificationist explanation, or at least that they have an important unificationist component.

Consider the generic claim: ice cubes melt in warm water. This is the type of claim statistical mechanics is designed to explain. Why is it that, looking forwards through time, we see ice cubes melting in warm water, but not spontaneously forming in warm water? Let’s assume we know the fundamental (and let’s assume deterministic) physical laws. And, let’s imagine, we know the truth of the Past Hypothesis, the claim that the universe started in a very low-entropy state. The central consideration is that the fundamental laws and the Past Hypothesis seem to be too sparse a base from which to explain the melting of ice cubes. We know that there are many initial conditions consistent with the Past Hypothesis where the fundamental laws would lead to ice cubes not melting in warm water. Without a reason to ignore these conditions we cannot explain why ice cubes melt.

One reason for ignoring these conditions is to note that there are very few of them in comparison to the conditions that lead to ice melting. However, this claim makes assumptions about the relevant measure; on some measures the set of conditions that lead to ice cubes growing in warm water is bigger than the set of conditions that lead to the ice cube melting. So again, the laws and the PH are not enough; we need some added material that allows us to privilege a measure (or at least to rule out the misbehaving measures).
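The measure-dependence point can be made vivid with a toy example, entirely invented for illustration: pretend the initial conditions are points in [0, 1] and that those below 0.99 lead to melting. Whether the melting conditions count as "most" conditions then depends on which measure we use.

```python
import numpy as np

# Toy phase space (invented for this example): "initial conditions"
# are points in [0, 1]; pretend those below 0.99 lead to the ice cube
# melting and the rest lead to it growing.
rng = np.random.default_rng(0)

def melts(x):
    return x < 0.99

# Under the uniform (Lebesgue) measure, melting conditions dominate...
uniform_sample = rng.uniform(0, 1, 100_000)
print(melts(uniform_sample).mean())   # close to 0.99

# ...but under a measure concentrated near x = 1, the very same set of
# conditions gets far less weight, and melting is no longer "typical".
skewed_sample = 1 - rng.uniform(0, 1, 100_000) ** 10
print(melts(skewed_sample).mean())
```

Nothing about the set of conditions itself changes between the two cases; only the weighting does, which is why the laws plus the Past Hypothesis alone cannot settle which conditions may be ignored.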

Where does this material come from? Here are three natural options: (1) It comes from facts about the actual initial conditions. (2) It comes from added ontology, for example, from directly privileging a certain measure. (3) It comes from facts about the (non-initial) events of the actual world, e.g. facts about the times it takes for certain ice cubes to melt. Let’s take these in turn.

(1) The first option is to add to the fundamental laws and the PH the facts about the precise initial condition of the universe. That (given our assumption that the laws are deterministic) is clearly enough to entail that actual world ice cubes melt in warm water. The problem is that to do this is effectively to stop explaining in a distinctively statistical mechanical way. We would just be explaining a fact using the fundamental laws and the initial conditions. There is no statistical aspect here.

(2) The second option is to add ontology. For example, to directly add to your ontology a privileged measure. This would allow us to give a statistical explanation of ice melting. The problem is that adding extra ontology in this way seems unattractive and ad hoc. Perhaps arguments can be given that this addition is not so ad hoc, but prima facie it would be better if we have a different option.

(3) The last option is to add facts about the non-initial events of the actual world. We can interpret versions of the best system account of laws as doing this: we use the occurrent facts of the world to privilege a measure (the measure is part of the best way of systemising those facts). Michael Strevens’ version of the typicality account can also be interpreted in this way. He rejects the idea that we need to add a measure over initial conditions to the laws, but does add facts about the frequencies of conditions of various subsystems, for example, the frequency of actual ice cubes that are in a certain microstate. Such accounts encourage us to explain phenomena like ice cubes melting in terms of the laws and the patterns of the non-initial facts of the world. Particular facts are being explained (partially) in terms of more general facts about patterns (and it is not the case that these more general facts are prior to the ones being explained). For example, when we explain the melting of the ice cube by citing its high probability according to the measure privileged by the best system, we are showing how the melting fits into the general patterns we see in the world, since if the melting did not fit into such a pattern it would not have a high probability. The probability could even be thought of as a measure of how well a certain event fits into the patterns of the world.

If we take this last option, it seems that, notwithstanding the unpopularity of the unificationist account, there is an important unificationist element in statistical mechanical explanation. And plausibly this last option is the best one.

]]>

Wondering how Bohmian mechanics handles the two-slit experiment, how the Bohmians understand the uncertainty principles, or what Bohmians do to finesse no-hidden-variables theorems? This video series can answer all your questions about the oldest heterodox interpretation of the quantum world. Thanks to Shelly Goldstein for the pointer.

]]>

THE *A PRIORI* ARGUMENT:

- Miracles violate the laws of nature.
- The laws of nature are exceptionless—that is, they are (expressed by) true universal generalizations.
- Conclusion: There are no miracles.

The argument is valid, and both of its premises have a claim not merely to truth, but to *conceptual* truth. The first premise is a characterization of what makes God’s miraculous action *super*natural: miracles contravene or override the *natural* laws which govern the world. The second premise is guaranteed by most views about the laws of nature, but anyway here’s a quick argument for it: the laws of nature are nomically necessary, and necessity implies truth. So the laws are true. Unless something has gone wrong, we don’t merely have inductive reason to doubt that miracles have happened (as Hume and Shapiro claim) but *a priori* reason: the very idea is conceptually incoherent. But of course this argument is too quick: though we may have good reason to doubt that miracles have happened, that reason is not conceptual incoherence. What went wrong?

We could deny premise 1: perhaps there’s a way of characterizing supernatural intervention that doesn’t rely on its being above the petty rules which govern mortal mechanics. We’ll return to this idea in a bit. First, though, I’d like to look into relaxing the second premise. Could a law of nature be false?

Some people think so—Nancy Cartwright chief amongst them. But she’s an outlier, and most theories of natural law back premise two. Foremost amongst these is dispositional essentialism: According to this view, advocated by Brian Ellis and Alexander Bird, the laws express the essential natures of the properties they involve. So if Coulomb’s law is a law of nature, it’s an essential property of charge that charged objects obey Coulomb’s law. Since things have their essential properties at every world in which they exist, charged objects must—and do—conform strictly to Coulomb’s law.

Humeans, on the other hand, take laws to be mere regularities, not backed by essences or necessity. Now these regularity theorists have some explaining to do: why are some generalizations laws, and others mere accidents? What is the difference between “Like charged particles repel one another” and “all of my coffee mugs are dirty”?

The regularity theorist’s answer is pragmatic: laws are tools used to organize our knowledge into a deductive system. “like charged particles repel one another” is inferentially very useful; “all of my coffee mugs are dirty” is not. This insight leads us to the Best Systems Account of laws (BSA), associated with John Stuart Mill, Frank Ramsey, and David Lewis: the laws of nature are those true generalizations which, taken together, form the simplest, strongest axiomatic system of all of the truths of the world—where a system is simpler if it has fewer axioms, and stronger if it implies more truths.

We can imagine assigning a score to each potential lawbook: points are gained by having true consequences, deducted for having more axioms. The group of true generalizations which scores highest is the lawbook of our world.

This characterization of laws gives regularity theorists more room to maneuver than dispositional essentialists. The dispositional essentialist held that laws are true because they are metaphysically necessary; the Humean holds that laws are true because true generalizations better organize knowledge than false ones.

So it’s not against the spirit of Humeanism to relax the truth condition if adding some *false* generalization to our deductive system would yield a simpler system from which very many truths and very few falsehoods could be inferred. We’d just need to tweak our scoring rules a bit: a potential system of laws gets points added for each true consequence, points deducted for each axiom, and points deducted for each false consequence. Presumably, these will be weighted—one false consequence should remove many more points than each true consequence. Call this the Good Enough System Account of laws (GESA). The laws of the Good Enough System can have exceptions, provided the exceptions are few, and the laws are otherwise quite useful.
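The tweaked scoring rule can be sketched in a few lines of code. This is a cartoon, with made-up facts, weights, and candidate lawbooks; nothing here is a serious model of lawhood.

```python
# A toy world of facts (invented for this example): a hundred
# repulsion events plus one dirty coffee mug.
FACTS = {f"e{i}_repels" for i in range(100)} | {"mug_dirty"}

def gesa_score(axioms, consequences, w_true=1, w_axiom=5, w_false=20):
    """GESA-style score: points for true consequences, deductions for
    axioms, and much heavier deductions for false consequences."""
    true_cs = sum(1 for c in consequences if c in FACTS)
    false_cs = len(consequences) - true_cs
    return w_true * true_cs - w_axiom * len(axioms) - w_false * false_cs

# A "true but bloated" lawbook that simply lists every fact as an axiom...
listing = gesa_score(axioms=sorted(FACTS), consequences=sorted(FACTS))

# ...versus one simple generalization with a single false consequence.
simple = gesa_score(axioms=["like charges repel"],
                    consequences=[f"e{i}_repels" for i in range(100)]
                                 + ["e7_attracts"])

print(listing, simple)   # the simple-but-slightly-false lawbook wins
```

With these (arbitrary) weights the simple generalization scores higher despite its one false consequence, which is just the GESA thought: a few exceptions are tolerable when the system is otherwise far simpler and stronger.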

Now, if the GESA of laws is right, we shouldn’t be so sure of Premise 2 of the *a priori* argument. We might have good reason to think that miracles don’t happen, but they aren’t ruled out by fiat.

Of course, we might *also* want to deny premise 1. Remember, Premise 1 sought to express what was miraculous about miracles: God’s direct interventions violate the laws that govern mortal mechanics. But God’s interventions must be *interventions*, that is, they must really cause things. And causation requires subsumption under laws. So while in order for divine intervention to be divine, it must break the natural laws, in order for it to be intervention, it must obey *some* law. What gives?

Here, I think, we should distinguish between fundamental and nonfundamental lawhood. Even in mortal contexts, we are willing to countenance not-strictly-speaking-true nonfundamental laws (read: the special sciences) but not false fundamental laws (read: physics). This makes the GESA more closely aligned with how we think of special sciences, and the BSA—with its stipulation that the laws must be true—closer to how we think of fundamental science. (The view we’ve arrived at is similar to Craig Callender and Jonathan Cohen’s Better Best System account, but allows us to distinguish the fundamental laws from the nonfundamental: the fundamental laws are *true*, whereas the nonfundamental laws may not be).

The believer in miracles, then, takes the fundamental law to be divine: “what God intends comes to pass”. But this doesn’t leave her bereft of mortal mechanics: instead of being strictly true, the natural laws of physics are nonfundamental laws: most of their consequences are true, but their usefulness to us isn’t impugned by those miraculous occasions when they lead us astray.

Don’t get me wrong, though—while I think the *a priori* argument is unsound, denying it shouldn’t make us more willing to countenance miraculous intervention. Hume’s argument, and Shapiro’s, should remind us that believing miracles *actually* happen is, nearly always, irrational.

]]>

Cosmology group researcher Tim Maudlin has a great post in Aeon Magazine about cosmic fine-tuning. Read it there and feel free to discuss it in the comments here!

]]>