
International Philosophy of Cosmology Conference in Tenerife

August 21, 2014

This September, our philosophy of cosmology group is joining with its sister group in the UK to host a large international conference in Tenerife, Spain! Topics include the arrow of time, the relationship between quantum foundations and cosmology, the case for fine-tuning, string theory and emergent spacetime, and the purpose and enterprise of philosophy of cosmology itself. The schedule and abstracts are here.


Loop Quantum Gravity and Non-Separability

May 3, 2014

[This post is taken from part of an early draft of Christopher G. Weaver’s paper entitled “Against Causal Reductionism”]

     David Lewis’s Humean supervenience thesis (HST) says that the world’s fundamental structure consists of the arrangement of qualitative, intrinsic, categorical, and natural properties of space-time points (or perhaps some other suitable replacement[1]) and the spatio-temporal relations of such points. All derivative structure globally supervenes on such fundamental structure. Furthermore, “Humean supervenience,” writes Lewis, “is named in honor of the greater denier of necessary connections. It is the doctrine that all there is to the world is a vast mosaic of local matters of particular fact, just one little thing and then another.”[2] The HST therefore entails that the fundamental physical state of the world is separable.[3] However, if fundamental physics delivers to us an end-game fundamental physical theory that is non-separable, then the HST is false. Say that a fundamental physical theory is non-separable when,

… given two regions A and B, a complete specification of the states of A and B separately fails to fix the state of the combined system A + B. That is, there are additional facts—nonlocal facts, if we take A and B to be spatially separated—about the combined system, in addition to the facts about the two individual systems.[4]

     Many theoreticians have argued that quantum physics renders the HST untenable.[5] The existence of entangled quantum states is an implication of every interpretation of quantum mechanics.[6] Entangled quantum states do not globally supervene on local matters of particular fact, “[t]hat is, the local properties of each particle separately do not determine the full quantum state and, specifically, do not determine how the evolutions of the particles are linked.”[7] In fact, the non-separability of quantum mechanics was one reason why Einstein believed the theory to be incomplete.[8] There are responses to this objection from quantum mechanics, but I will not pursue that specific debate here. What I will argue is that one of the leading theories of quantum gravity is non-separable (and not because of quantum entanglement).
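     A standard textbook illustration of this point (my example, not the post’s): the spin singlet state of two particles,

\[ |\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle\big), \qquad \rho_A = \rho_B = \tfrac{1}{2} I, \]

assigns each particle taken on its own the maximally mixed reduced state, and those same two reduced states are equally the reduced states of the unentangled mixture of the two product states. A complete specification of the states of A and B separately therefore fails to fix the state of A + B, which is exactly the non-separability defined in the quotation above.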

Canonical Quantization of GTR

     The leading canonical quantum gravity (CQG) model is loop quantum gravity (LQG). Following Rickles (2008), I note that according to CQG approaches, GTR provides the details about how the metric (which characterizes the geometry of space-time) evolves.[9] Normally on CQG approaches, space and time come apart, where the former evolves against the background of the latter. Such separation is obtained by the introduction of an equivalence and a foliation:

(1) ℳ ≅ ℝ × σ

where σ is a compact 3D hypersurface, and where the foliation is:

(2) 𝔍t: σ → (Σt ⊂ ℳ)

     Every hypersurface Σt amounts to a temporal instant, and the manifold is then an agglomeration of such instants understood as a one-parameter family. In the context of CQGs, there are a number of avenues from such an agglomeration to a bona fide manifold. That there are such avenues is a consequence of the diffeomorphism gauge symmetry of GTR. The diffeomorphism (vector) constraint and the Hamiltonian (scalar) constraint, together with various gauge functions on the spatial manifold, generate diffeomorphism gauge transformations.[10] Moreover, these constraints and functions evolve space forward one space-like hypersurface at a time.[11] The entire theory remains generally covariant, and so the laws hold in coordinate systems related by arbitrary smooth coordinate transformations.[12] CQGs therefore understand both the geometry of the manifold and the gravitational field in terms of the evolutions of various fields defined over the space-like hypersurfaces Σi of an assumed foliation.
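     For readers who want the standard formulas behind this description (a textbook sketch in the usual ADM-style notation, not quoted from Rickles), the dynamics is generated entirely by the constraints smeared against a lapse function N and a shift vector Nᵃ:

\[ H_{\mathrm{tot}} = \int_{\Sigma_t} d^3x \, \big( N \mathcal{H} + N^a \mathcal{H}_a \big), \qquad \mathcal{H} \approx 0, \quad \mathcal{H}_a \approx 0, \]

where ℋ is the scalar (Hamiltonian) constraint and ℋa the vector (diffeomorphism) constraint. This is the sense in which, as footnote [11] notes, the Hamiltonian in this context is just a sum of the aforementioned constraints.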

Loop Quantum Gravity

     Again, the leading and most popular CQG is loop quantum gravity.[13] Proponents of this approach maintain that GTR can be simplified, and that one can understand the theory in terms of gauge fields.[14] Quantum gauge fields can be understood in terms of loops. By analogy with electrodynamics, we can say that space-time geometry is encoded in the ‘electric’ fields of the gravitational gauge fields. The loops appropriately related to such electric fields weave the very tapestry of space itself.[15] According to LQG, then, the fundamental objects are networks of various interacting discrete loops.[16] Many proponents of LQG maintain that these fundamental networks are arrangements of spin networks.[17]
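     As a concrete illustration of how spin networks carry geometry (a standard LQG result, stated here for orientation rather than drawn from this post), the area of a surface S pierced by the links of a spin network takes the discrete values

\[ A(S) = 8 \pi \gamma \, \ell_P^{2} \sum_{i} \sqrt{j_i (j_i + 1)} , \]

where the sum runs over the links i crossing S, the j_i are the spin labels carried by those links, γ is the Barbero-Immirzi parameter, and ℓP is the Planck length. This is the precise sense in which, as footnote [17] puts it, the labels on a spin network translate into quanta of area when an edge pierces a surface.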

     Spin networks do an amazing amount of work for LQG. Not only do they provide one with the means to solve the Wheeler-DeWitt equation (see Jacobson and Smolin (1988)), but arrangements of such networks give rise both to the geometry of space-time (Markopoulou (2004, p. 552)) and to a fundamental orthonormal basis for the Hilbert space of LQG’s theory of gravity.[18] Furthermore, the role of spin networks in LQG suggests that LQG is non-separable. The causal structure of space-time is not determined by the categorical and local qualitative properties of space-time points and their spatio-temporal relations, nor by individual loops and the spatial relations in which such loops stand. Let me explain.

     On one interpretation of LQG, spin networks are types of causal sets, and so LQG in the quantum cosmology context has some similarities with quantum causal history (QCH) approaches.[19] Thus, LQG implies that the causal structure of the cosmos is determined by partially ordered and locally finite (in terms of both volume and entropy) sets. Such sets are regarded as events, which one associates with finite-dimensional Hilbert spaces. These pluralities identified as events are “regarded as representing the fundamental building blocks of the universe at the Planck scale.”[20] Notice that these building blocks are pluralities of loops.[21] Individual loops themselves do nothing to determine causal structure. Moreover, some loops are joined in such a way that they are not susceptible to separation even though they are in no way linked (e.g., Borromean rings).[22] The spatio-temporal relations of such loops do nothing to determine that self-same structure, for (again) spin networks of loops weave together space-time geometry itself.[23] What is more, even on non-causal-set approaches to LQG, the dynamics and evolution of quantum gravitational systems involve transitions from one spin network to another. On orthodox LQG (without causal sets), quantum states are sets “of ‘chunks’, or quanta of space, which are represented by the nodes of the spin network, connected by surfaces, which are represented by the links of the spin networks.”[24] Causal structure is therefore determined by interrelated systems of loops, not by individual loops and their spatio-temporal relations.

     I conclude that on varying approaches to LQG, the fundamental entities of the theory determine spatio-temporal relations while failing to bear such relations. Such entities also constitute systems whose fundamental parts are non-separable. Thus, if LQG is approximately correct with respect to what it has said so far about physics at the Planck scale, the universe is non-separable. LQG suggests that the HST is false.

 



 

* Thanks to Dean Rickles and Aron C. Wall for comments on an earlier draft of this post. Any mistakes that remain are mine.

[1] The deliverances of physics determine whether or not the replacement is suitable.

[2] Lewis (1986a, p. ix) emphasis mine. John Hawthorne summarized the HST by stating that derivative facts supervene “on the global distribution of freely recombinable fundamental properties.” Hawthorne (2006, p. 245). Hawthorne does not endorse the HST.

[3] Lewis (1986b, p. 14); cf. the discussion in Maudlin (2007, p. 120) who characterized the separability of the view as follows, “[t]he complete physical state of the world is determined by (supervenes on) the intrinsic physical state of each space-time point (or each pointlike object) and the spatio-temporal relations between those points.” Maudlin (2007, p. 51).

[4] Wallace (2012, p. 293).

[5] See the discussions in Lewis (1986a, p. xi); Loewer (1996, pp. 103-105); and Maudlin (2007, pp. 61-64).

[6] Schrödinger (1935, p. 555) said that entanglement is “the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought”.

[7] Loewer (1996, p. 104).

[8] See Einstein (1948); cf. Brown, who remarked, “…[Einstein’s] opposition to quantum theory was based on the fact that, if considered complete, the theory violates a principle of separability for spatially separated systems.” Brown (2005, p. 187).

[9] I am following Rickles’s discussion of CQGs. See Rickles (2008, pp. 323-327).

[10] Rickles (2008, p. 324); Sahlmann (2012, p. 189).

[11] In the present context, the Hamiltonian is a sum of the aforementioned constraints.

[12] I should add that if one describes space-time in split, 3+1-dimensional terms, the theory’s general covariance is no longer manifest.

[13] See Rovelli (2011a), (2011b); Smolin (2001, pp. 125-145), (2002), (2004, pp. 501-509).

[14] The insight is Ashtekar’s (1986) who leaned on Sen (1981); cf. Smolin (2004, p. 501).

[15] “…the loops of the quantized electric field do not live anywhere in space. Instead, their configuration defines space.” Smolin (2004, p. 503).

[16] Some of these loops are knotted, meaning that “…it is impossible, by smooth motions within ordinary Euclidean 3-space, to deform the loop into an ordinary circle, where it is not permitted to pass stretches of the loop through each other…” Penrose (2005, p. 944).

[17] A spin network is a “graph, whose edges are labeled by integers, that tell us how many elementary quanta of electric flux are running through the edge. This translates into quanta of areas, when the edge pierces a surface” Smolin (2004, p. 504). Such networks constitute eigenstates of operators representing volume and area (Rickles (2005, p. 704)). The idea comes to us from Penrose (1971). Before the use of spin networks, theorists used multi-loop states. See Rovelli (2008, p. 28).

[18] There are theorems which establish each result. See Smolin (2004).

[19] Markopoulou and Smolin (1997) join “the loop representation formulation of the canonical theory [of gravity] to the causal set formulation of the path integral.” (ibid., p. 409). See also Hawkins, Markopoulou, and Sahlmann (2003, p. 3840); Rovelli (2008, p. 35); and Smolin (2005, p. 19).

[20] Hawkins, Markopoulou, and Sahlmann (2003, p. 3840).

[21] In fact, one should understand a spin network state in terms of “a sum of loop states.” Spin network states are quantum states (they are the very eigenstates of observables which help us get at volumes and areas via measurement) understood as pluralities of loop states. Quotations in this note come from Smolin (2005, p. 13).

[22] See Penrose (2005, p. 944).

[23] As Rovelli stated,

“[a] spin network state does not have position…a spin network state is not in space: it is space. It is not localized with respect to something else: something else (matter, particles, other fields) might be localized with respect to it. To ask ‘where is a spin network’ is like asking ‘where is a solution of the Einstein equations.’” (2004, pp. 20-21, emphasis in the original)

[24] Rovelli (2008, p. 38). Even if one wanted to rid LQG of the redundancy in spin networks (due to symmetry) by switching to spin-knots or s-knots (diffeomorphism equivalence classes of spin-networks), my case for the non-separability of LQG would only be strengthened (see Rovelli (2004, pp. 263-264); cf. Rickles (2005, p. 710)).

The Gravitational Field as Cause

April 3, 2014

[This post is taken from an early draft of Christopher G. Weaver’s paper entitled “Against Causal Reductionism,” now entitled “Fundamental Causation”]

    Channeling, to some degree, Bertrand Russell (1912-1913), Jonathan Schaffer (2008) insisted that there is no room for causation in proper scientific practice. Science only requires natural laws and unfolding history (one physical event after the other). He remarked:

…causation disappears from sophisticated physics. What one finds instead are differential equations (mathematical formulae expressing laws of temporal evolution). These equations make no mention of causation. Of course, scientists may continue to speak in causal terms when popularizing their results, but the results themselves—the serious business of science—are generated independently.[1]

       Considerations such as those in the pericope quoted above quite naturally breed an argument for causal reductionism, the view that obtaining causal relations are not part of the world’s fundamental structure and that, as a consequence, causal facts reduce to (are nothing above and beyond) a species of non-causal facts. For Schaffer would add that if sound scientific practice can proceed without causation, making use solely of natural nomicity and history instead, then causal reductionism is true. Thus, causal reductionism is true.

      I find Schaffer’s justification for the claim that praiseworthy scientific practice does without causation to be problematic. I would argue that with respect to extremely empirically successful theories arrived at on the basis of sound scientific practice, the notion of causation shows up primitively and indispensably in the respective interpretations of the underlying formalism of those theories. In the present post, I have space only to explore one such scientific theory, viz., the general theory of relativity (GTR).

Read more…

Where Are We in the Multiverse?

March 17, 2014

There are two avenues from modern physics to the belief that the universe we see around us is not all there is, but is instead one of infinitely many like it. The first is inflationary cosmology; the second is quantum mechanics.  Though very different, these two multiverse models share two features: first, they both posit objective physical probabilities that tell us how likely we are to be in some portion of the multiverse rather than telling us how likely the multiverse is to be some way or another; and second, they both have a problem with prediction and confirmation.  I’ll discuss the relationship between self-locating probability and confirmation in these theories.

Read more…

Causation of Everything

January 28, 2014

We’ve had causation come up a few times on the blog before (particularly in Mike’s discussion of miracles). For this post, I want to raise some questions about what to say when causation gets really big—when we start talking about the state of everything at one moment in time causing everything at the next—and whether such talk is sensible. Such questions are particularly relevant in the context of cosmology.

Usually when we make causal claims or explanations, we’re talking about local causation: a particular event (or event-type, or state of affairs) that occurs in a relatively small finite region of space-time causing an event in another. We might speak of the high-pitched tone causing the glass to break, or CO2 emissions causing increased polar ice melting. While we might have some difficulty in identifying very diverse and diffuse causes and effects, we presume that the causes and effects are still local.

But what about system-wide causal claims? Assuming that a set of billiard balls, for example, constitutes an effectively closed system, could we claim that the entire configuration of billiard balls at one time causes the entire configuration at the next moment in time? Or could we claim that the state of the entire universe at one time causes the state of the universe at a later time? (Note that I’ll be leaving ‘a moment in time’ as a loose and vague notion here—whatever account of causation we use will have to be compatible with general relativity, but I won’t go into how that might be done here.)

In ‘On the Notion of Cause’ (1912−3), Russell famously argued that we should jettison the notion of causation altogether. His main concern was that, given the global laws we have in fundamental physics, nothing less than the entire state of the system at a given time would be enough (given the laws) to necessitate any event at the next. So insofar as we think causation requires nomic necessitation, we would need to consider the entire state of the system as a cause. And he took this to be a reductio of the position: if causes were also to be general types of events, of the kind science could investigate, there could be no cause-based science. Here’s Russell:

In order to be sure of the expected effect, we must know that there is nothing in the environment to interfere with it. But this means that the supposed cause is not, by itself, adequate to insure the effect. And as soon as we include the environment, the probability of repetition is diminished, until at last, when the whole environment is included, the probability of repetition becomes almost nil. (Russell 1912−13, pp. 7−8)

Either the causes would be so extensive and detailed as to be unique, and not the subject of scientific investigation, or they would not necessitate their effects. Science has to study repeatable events, and system-wide states would never be repeatable in the way required.

So, in Russell’s work we actually have an argument in favour of system-wide causation: it allows us to take the causal relation to be necessitation by fundamental laws. But we also have an argument against system-wide causation: it isn’t about the kind of repeatable events that science is concerned with. The argument in favour of system-wide causation seems clear enough, but what are we to make of the argument against? It seems that cosmology is precisely a field that is interested in non-repeatable events. Perhaps cosmology does not describe events at a sufficiently fine-grained level to explain how many actual events are nomically necessitated, but what about large-scale phenomena? Surely cosmology aims to account for those?

However, some later developments that followed Russell’s work didn’t advocate system-wide causation; they dropped the requirement that causes had to nomically necessitate their effects. Instead, we were to explain what is going on in causation using counterfactuals and notions of intervention. Claiming that a causes b is roughly to claim that had we intervened on a in a suitably surgical way, this would have been a way of influencing b. The interventions themselves were characterised as causal processes: processes that disrupt some of the causal chains already present in the system while leaving others intact, and so allow us to test what causal chains there are. The main expositors of this approach are Woodward (2003) and Pearl (2000). These interventionist approaches don’t attempt to reduce causation to something else, but instead offer an elucidation of various causal notions in terms of other ones.
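To make the interventionist picture concrete, here is a minimal sketch of a structural-model-style intervention in Python (a toy example of my own, not code from Woodward or Pearl; the three-variable chain and the variable names are purely illustrative):

    # Toy structural "causal model": a -> b -> c. An intervention on b replaces
    # b's structural equation with a set value, severing b's dependence on a
    # while leaving the b -> c mechanism intact.

    def run_model(do=None):
        """Evaluate the model; `do` maps variable names to values forced by intervention."""
        do = do or {}
        values = {}
        values['a'] = do.get('a', 1.0)                 # exogenous input
        values['b'] = do.get('b', 2.0 * values['a'])   # b := 2a, unless intervened on
        values['c'] = do.get('c', values['b'] + 1.0)   # c := b + 1, unless intervened on
        return values

    print(run_model())               # observational run: a = 1.0, b = 2.0, c = 3.0
    print(run_model(do={'b': 5.0}))  # setting b changes c but leaves a untouched

On this picture, ‘b causes c’ comes out true because surgically setting b makes a difference to c, whereas an intervention on c would make no difference to a or b. The worry pressed below is that nothing is left to play the role of the outside manipulator once the ‘system’ is the entire universe.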

However, under the interventionist approach it’s hard to see how we can talk about system-wide causation. Interventions were envisaged as processes that originate from outside the system we are studying. How is this approach to work when there is nothing outside the system? Here is Pearl on the issue:

…scientists rarely consider the entirety of the universe as an object of investigation. In most cases the scientist carves a piece from the universe and proclaims that piece in – namely, the focus of investigation. The rest of the universe is then considered out or background and is summarized by what we call boundary conditions. This choice of ins and outs creates asymmetry in the way we look at things, and it is this asymmetry that permits us to talk about “outside intervention” and hence about causality and cause-effect directionality. (Pearl 2000, p. 350).

But

If you wish to include the entire universe in the model, causality disappears because interventions disappear – the manipulator and the manipulated lose their distinction. (Pearl 2000, pp. 349−50).

We might claim this as a feature of the interventionist approach: it makes clear how the causal structure of the world is tied to the particular limited perspective we take on it as experimenters or interveners, dividing the world up, and is not something that can be understood apart from these divisions. Are we then content to cease dealing with causal notions in sciences like cosmology?

There are a few remaining options on the table that I’ll note briefly. One is to start with the interventionist framework, but then extend our causal notions so that they can be system-wide. We might, for example, attempt to reduce the interventionist counterfactuals to law-based ones that can then be applied to whole systems—I take Albert (2000) and Loewer (2007) to follow this route. Or we might keep causation as a primitive relation that holds between local events, and build up system-wide causes out of those. Another quite different option is to go pluralist about the notion of causation: perhaps we were wrong to think that a single notion applied to all contexts. We should keep nomic necessitation as what counts for cosmology, and intervention as what counts for other contexts.

Whatever option we take here, the case of cosmology seems a useful testing ground for accounts of causation and their commitments regarding global causes.

References:

Albert, David Z. 2000. Time and Chance. Cambridge, Mass.: Harvard University Press.

Loewer, Barry. 2007. Counterfactuals and the Second Law. In Causation, Physics, and the Constitution of Reality, ed. Huw Price and Richard Corry, 293−326. Oxford: Oxford University Press.

Pearl, Judea. 2000. Causality. New York: Cambridge University Press.

Russell, Bertrand. 1912−13. On the Notion of Cause. Proceedings of the Aristotelian Society, New Series. 13: 1-26.

Woodward, James. 2003. Making Things Happen: A Theory of Causal Explanation. Oxford: Oxford University Press.

Quantum Fluctuations as Seeds of Large Scale Structure

December 16, 2013
By Ward Struyve, Rutgers.
 

On very large scales (over hundreds of megaparsecs) the universe appears to be homogeneous. This fact formed one of the original motivations for inflation theory. According to inflation theory, the very early universe went through a phase of accelerated expansion that stretched a tiny portion of space to the size of our entire observable universe, stretching initial inhomogeneities over unobservable distances. On smaller scales, the universe is far from homogeneous; one can identify all kinds of structures, such as stars, galaxies, clusters of galaxies, etc. According to inflation theory, these structures originate from quantum vacuum fluctuations. The usual story is that during the inflationary period these quantum fluctuations grew to become classical inhomogeneities of the matter density. These inhomogeneities then gave rise to structures through gravitational clumping. The primordial quantum fluctuations are also the source of the temperature fluctuations of the microwave background radiation.
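For concreteness, the standard textbook equation behind the phrase ‘quantum fluctuations grew to become classical inhomogeneities’ (added here for orientation; it is not part of this post) governs each Fourier mode v_k of the perturbations in conformal time:

\[ v_k'' + \Big( k^2 - \frac{z''}{z} \Big) v_k = 0 , \]

where primes denote conformal-time derivatives and z is a background quantity built from the scale factor and the inflaton. Modes whose wavelengths are stretched beyond the Hubble radius during inflation (k ≪ aH) stop oscillating and ‘freeze’, and these frozen amplitudes are what later seed the density perturbations and the temperature fluctuations of the microwave background.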

This explanation of the origin of structures is regarded as part of the success story of inflation theory. One aspect of the explanation is, however, problematic, namely how the quantum fluctuations became classical. The quantum fluctuations are described by a state (a wave function) which is homogeneous, and the Schrödinger dynamics does not spoil this homogeneity. So how can we end up with classical fluctuations which are no longer homogeneous? According to standard quantum theory this could only happen through wave function collapse. Such a collapse is supposed to happen upon measurement. But the notion of measurement is rather vague, and therefore it is ambiguous when exactly collapses happen. This is the notorious measurement problem. The problem is especially severe in the current context: in the early universe there are no measurement devices or observers which could cause a collapse. Moreover, structures such as measurement devices or observers (which are obviously inhomogeneous) are themselves supposed to be generated from the quantum fluctuations. In order to deal with the measurement problem and with the quantum-to-classical transition, we need to consider an alternative to standard quantum theory that is free of this problem. A number of such alternatives exist: Bohmian mechanics, many worlds, and collapse theories.
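To give just one example of what such an alternative looks like (a standard formula, added here for illustration rather than taken from the post): in Bohmian mechanics the actual configuration Q of a system has a definite value at all times and is guided by the wave function via

\[ \frac{dQ}{dt} = \frac{\hbar}{m} \, \mathrm{Im} \, \frac{\nabla \psi}{\psi} \bigg|_{X = Q} , \]

so inhomogeneity can reside in the actual configuration even when the wave function itself remains homogeneous, which is one way of approaching the quantum-to-classical question raised above.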
Read more…

Statistical Mechanics and Unificationist Explanation

November 30, 2013

Today I want to write about the way in which statistical mechanical explanations fit into more general accounts of explanation. In particular, I’m going to make some bold (and fairly weakly substantiated) claims about how statistical mechanical explanations fit with the nowadays relatively unpopular unificationist view of explanation.

We can think of there being a couple of major families of approaches to explanation. The first I’m going to call dependence-based explanation. The idea here is that there is some underlying dependence structure, and to explain an event is to show what it depends upon. Causal explanation, where to explain something is to give information about its causal history, is an example of this type of explanation. The other family is unificationist explanation. On this approach, to explain something is to show how it fits into the general patterns of the world.

Read more…

Bohmian Mechanics: FAQ

November 29, 2013

Wondering how Bohmian mechanics handles the two-slit experiment, how the Bohmians understand the uncertainty principles, or what Bohmians do to finesse no-hidden-variables theorems?  This video series can answer all your questions about the oldest heterodox interpretation of the quantum world.  Thanks to Shelly Goldstein for the pointer.

Could Miracles Happen?

November 14, 2013

Another great article on Aeon magazine this week is about why no one should believe in miracles, by Lawrence Shapiro.  Shapiro takes a tasty stock of Hume’s argument against miracles, adds a dash of Bayesian epistemology, and rounds things off with a nice discussion of the base-rate fallacy—surely worth a read.  But after reading it, I wondered why we don’t use this much simpler argument against supernatural intervention:

THE A PRIORI ARGUMENT:

  1. Miracles violate the laws of nature.
  2. The laws of nature are exceptionless—that is, they are (expressed by) true universal generalizations.
  3. Conclusion: There are no miracles.

The argument is valid, and both of its premises have a claim not merely to truth, but to conceptual truth. The first premise is a characterization of what makes God’s miraculous action supernatural: miracles contravene or override the natural laws which govern the world.  The second premise is guaranteed by most views about the laws of nature, but anyway here’s a quick argument for it: the laws of nature are nomically necessary, and necessity implies truth.  So the laws are true.  Unless something has gone wrong, we don’t merely have inductive reasons to doubt that miracles have happened (as Hume and Shapiro claim) but a priori reason: the very idea is conceptually incoherent. But of course this argument is too quick: though we may have good reason to doubt that miracles have happened, that reason is not conceptual incoherence.  What went wrong?
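Since the post leans on the claim that the argument is valid, here is one way to check that mechanically: a minimal sketch in Lean, in which the type Event and the predicates Miracle and ViolatesLaw are placeholder names of my own choosing.

    -- Premise 1: miracles violate the laws of nature.
    -- Premise 2: the laws are exceptionless, i.e. nothing violates them.
    -- Conclusion: there are no miracles.
    theorem no_miracles {Event : Type}
        (Miracle ViolatesLaw : Event → Prop)
        (p1 : ∀ e, Miracle e → ViolatesLaw e)
        (p2 : ∀ e, ¬ ViolatesLaw e) :
        ∀ e, ¬ Miracle e :=
      fun e hm => p2 e (p1 e hm)

The proof is just modus tollens applied event by event, which is why the interesting question is where a premise goes wrong, not whether the inference holds.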

Read more…

Tim Maudlin on Fine Tuning

November 12, 2013


Cosmology group researcher Tim Maudlin has a great post in Aeon Magazine about cosmic fine-tuning.  Read it there and feel free to discuss it in the comments here!