[Note from Shaun: The following is a guest post by Boud Roukema. Boud is a professor at the Toruń Centre for Astronomy at Nicolaus Copernicus University. Boud is one of the coauthors of the papers on the pro-backreaction side of the debate I referred to in this post. Boud also blogs at Topological Acceleration, where the following post first appeared on 22 January this year.]
The simplest explanation for "dark energy" is that it measures recent evolution of average negative curvature. We think that it mainly represents the recent formation of cosmic voids on scales of tens of megaparsecs; these voids dominate scalar averaged quantities. In other words, the onus of proof has been reversed, in a quantified way: dark energy as something beyond classical general relativity should be disfavoured by Occam's Razor unless a relativistic inhomogeneous cosmological model is used. This seems so far to have largely gone under the radar...
Observationally, there's no disputing the existence of dark energy in the restricted sense of providing a good observational fit to several of the main cosmological observational datasets, modulo a rather unrealistic assumption of the model used in the fitting procedure. The assumption is that the class of possible spacetimes, i.e., solutions of the Einstein equation of general relativity, is the FLRW (Friedmann-Lemaître-Robertson-Walker) family. The FLRW models require that after choosing a way to split up space and time (a foliation), the spatial slice (i.e., a 3-dimensional space) is homogeneous—the density is the same everywhere, so galaxies and voids cannot exist. In fact, cosmologists usually make a hack, modelling galaxies and voids by patching Newtonian gravity into an Einstein "background"—since using the Einstein equation is more tricky. This hack bypasses the basic problem without solving it.
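For readers who want the equation: the FLRW line element, in standard notation, is

\[
ds^2 = -c^2\,dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right],
\]

with a single scale factor \(a(t)\) and a single constant curvature parameter \(k\), so that the density on a spatial slice can depend only on \(t\), not on position. That is the homogeneity restriction referred to above.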
Since in reality, galaxies, clusters of galaxies, the cosmic web, voids and supervoids exist beyond any reasonable doubt, the FLRW family should be expected to go wrong at recent epochs and at small (less than a few gigaparsecs) scales. And that small-scale, recent epoch is the only epoch at which a non-zero cosmological constant (or dark energy parameter ΩΛ) can (at present) be observationally distinguished from a zero cosmological constant. So it happens that just where and when we can expect things to go wrong with FLRW, ΩΛ suddenly appears, provided that we assume FLRW in our interpretation of the data despite expecting FLRW to be wrong!

What is it that goes wrong? The picture above shows voids on scales of a few tens of megaparsecs from the 2dFGRS. From a relativistic space point of view, expansion rates are different in different regions. This also happens in the hack of adding Newtonian galaxy and void formation to Einsteinian expansion, but in that case the expansion is forced to be rigid, by assumption, preventing the Einstein equation from being applied correctly. Even when we interpret the observations from a rigid comoving space point of view, the numbers show that the ratio of the "peculiar velocities" of galaxies coming out of voids to the sizes of the voids is big: several hundred km/s divided by something like 10 Mpc gives a few tens of km/s/Mpc. This void peculiar expansion rate is not much smaller than the Hubble constant, which is about 70 km/s/Mpc. At an order-of-magnitude level, the expansion rate is definitely inhomogeneous. This is why interpreting the observations in terms of homogeneous expansion gives a big error.
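To make the order-of-magnitude arithmetic explicit, here is a minimal sketch in Python; the round numbers are the illustrative values quoted above, not fitted parameters:

```python
# Order-of-magnitude check: void "peculiar expansion rate" vs. the Hubble constant.
# Round numbers from the text above; illustrative, not fitted values.

v_pec = 300.0   # km/s     -- typical peculiar velocity of galaxies leaving a void
r_void = 10.0   # Mpc      -- typical void scale
h_0 = 70.0      # km/s/Mpc -- approximate Hubble constant

h_void = v_pec / r_void  # km/s/Mpc -- crude estimate of a void's extra expansion rate

print(f"void expansion rate ~ {h_void:.0f} km/s/Mpc "
      f"(~{100.0 * h_void / h_0:.0f}% of H0 ~ {h_0:.0f} km/s/Mpc)")
# -> void expansion rate ~ 30 km/s/Mpc (~43% of H0 ~ 70 km/s/Mpc)
```

A correction of several tens of percent to the local expansion rate is clearly not a small perturbation on a homogeneous background.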
In other words, unless we use a relativistic cosmological model that takes inhomogeneous curvature and virialisation into account, we cannot claim that the "detected" ΩΛ is anything other than a structure formation parameter of a fit through cosmological data using an oversimplified fitting function. The second picture at the right shows that going from right (early times) to left (today), the amount of inhomogeneity (the virialisation fraction) grows from almost nothing to a big fraction of the total mass density today. Alternatively, if we ignore the growth in inhomogeneity, then we get ΩΛ, interpreted from the data assuming homogeneity, growing from almost nothing to a big fraction (70%) of the total density today. If we ignore inhomogeneity, then miraculously dark energy appears instead!
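(For concreteness: in an FLRW fit the dark energy parameter is defined, in standard notation, as \(\Omega_\Lambda \equiv \Lambda c^2 / (3 H^2)\). Since the Hubble parameter \(H\) decreases with cosmic time, a fixed \(\Lambda\) gives an \(\Omega_\Lambda\) that grows from almost zero at early times to about 0.7 today, mirroring the growth of the virialisation fraction.)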
Several relativistic structure formation cosmological models are available, though still in their infancy. What has been a little distracting from work on these, however, is that some observational cosmologists thought that there existed a mathematical theorem—the Green and Wald formalism—showing that dark energy could not be a "fitting function" description of curvature and kinematical backreaction, the general-relativistic effects of treating structure formation and the expansion of the Universe together. This is why my colleagues and I had to publish a clarification showing the main flaws in this reasoning. In particular, the Green and Wald formalism is not applicable to the main relativistic structure formation cosmological models that have been proposed in the research literature over the past five years or so. Green and Wald's formalism remains an interesting contribution to the field of relativistic cosmology, but it does not "save" dark energy from the possibility of being nothing more exotic than spatially averaged, evolving negative curvature. After a few tweets [1] [2], a blog entry, and a reblog we can get back to work. :)
If you propose that, to simplify things a bit, you can explain lambda away with backreaction, you run into the age problem, because the Hubble constant on large (CMB) scales would, without lambda, indicate a universe which is too young.
Also, all the talk of voids and so on assumes that most of the matter traces the light. What if it doesn't? Dark matter could be much more homogeneous than visible matter.
The age problem you're referring to is based on assuming decoupling of matter inhomogeneities from the expansion, i.e. it assumes no backreaction. In backreaction studies, "the averaged densities are directly coupled to the evolution of the averaged spatial curvature".
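The quote refers to the Buchert averaging scheme. For an irrotational dust region \(\mathcal{D}\) with volume scale factor \(a_\mathcal{D}\), the averaged equations take the standard form

\[
3\frac{\ddot a_\mathcal{D}}{a_\mathcal{D}} = -4\pi G\,\langle\rho\rangle_\mathcal{D} + \mathcal{Q}_\mathcal{D},
\qquad
3\left(\frac{\dot a_\mathcal{D}}{a_\mathcal{D}}\right)^{2} = 8\pi G\,\langle\rho\rangle_\mathcal{D} - \frac{1}{2}\langle\mathcal{R}\rangle_\mathcal{D} - \frac{1}{2}\mathcal{Q}_\mathcal{D},
\]

together with the integrability condition

\[
\partial_t\!\left(a_\mathcal{D}^{6}\,\mathcal{Q}_\mathcal{D}\right) + a_\mathcal{D}^{4}\,\partial_t\!\left(a_\mathcal{D}^{2}\,\langle\mathcal{R}\rangle_\mathcal{D}\right) = 0.
\]

This last relation is exactly the coupling in question: the kinematical backreaction \(\mathcal{Q}_\mathcal{D}\), the averaged spatial curvature \(\langle\mathcal{R}\rangle_\mathcal{D}\) and, through the constraint, the averaged density evolve together. Setting \(\mathcal{Q}_\mathcal{D}=0\) forces the curvature back to the rigid FLRW \(a_\mathcal{D}^{-2}\) behaviour.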
I agree with you, Boud, that Green and Wald have not proven that backreaction is irrelevant, and I can empathise with the frustration that must come from interacting with people who haven't had time to look at the situation closely and thus take G and W at face value.
My thoughts on which side has the onus of proof are vaguely the following. When the accelerated expansion was first observed, LCDM was a pre-existing, calculable model with just one new parameter. It still fits the data now in 2016. I'm unaware of any backreaction model in 2016 that is *calculable*, has only one free parameter, and still fits all the data available today. Therefore, even if backreaction seemed very compelling at first, the fact that this alternative model has fitted the data well for 15 years suggests that either backreaction mimics that model very closely for some subtle and profound reason (maybe the backreaction effects exponentiate??), or the effects of backreaction are small in our universe and Lambda is correct.
Also, the other work, not normally mentioned in these debates (surprisingly to me), of Adamek, Daverio, Durrer and Kunz (latest being: this one) seems much more relevant. They don't assume that the universe is close to FRW *at all times*; they explicitly require it only at early times and calculate forward in time from that initial state. If backreaction were true their framework should break down, but it doesn't appear to. There are certainly loopholes in their analysis, but the loopholes (e.g. the universe *not* being FRW at early times) seem unlikely.
Anyway, I don't think backreaction research should be given up on; I'm just stating my opinion on the probability that it is the explanation of the physics normally attributed to Lambda. I definitely don't think it has been proven to be irrelevant beyond any reasonable doubt.
Sean: you say you are "unaware of any backreaction model in 2016 that is *calculable* and has only one free parameter and still fits all the data available today."
Actually, the timescape cosmology is calculable, fits the data for which the calculations have been done so far, and has the same number of free parameters as the spatially flat LCDM model, which for the average expansion are two: the Hubble constant and the matter density parameter. The differences between timescape and the standard model are at the level where understanding systematic uncertainties in supernova data reduction is needed to distinguish the models (MNRAS 413 (2011) 367), but by the Clarkson-Bassett-Lu test the timescape model will be distinguishable from the standard model using the Euclid satellite (Sapone et al, PRD (2014) 023012).
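(As a reminder of the test being referenced: the Clarkson-Bassett-Lu test uses the fact that in any FLRW model the combination

\[
\Omega_k = \frac{H^2(z)\,D'^2(z) - c^2}{H_0^2\,D^2(z)},
\]

where \(D(z)\) is the comoving distance and \(D' = dD/dz\), must come out as the same constant at every redshift. Any measured redshift dependence rules out the whole FLRW family, which is what makes it a model-independent discriminator between FLRW-based models and alternatives such as timescape.)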
When it comes to the CMB anisotropy spectrum, Ahsan Nazer found, after two years of recoding the standard MCMC analysis, that although we get likelihoods for the timescape model comparable to LCDM on Planck data, for the (more than two) parameters of the standard model - such as the baryon-to-photon ratio, spectral index, etc. - uncertainties of less than 1 part in 10^5 in the energy density at last scattering nonetheless give systematic uncertainties of order 8-13% today (PRD 91 (2015) 063519).
To do the job properly for the timescape case we actually have to consider backreaction in the primordial plasma - which nobody does. People assume, as Adamek et al do, or as Ahsan and I did in starting our investigation, that since the universe is so close to smooth before last scattering, the FLRW approximation is good enough. But no! Actually, there is a fundamental issue at stake in ever assuming a single global FLRW universe. In my view, the Einstein equations are evolution equations that should provide a well-posed causal initial value problem. Many would agree with that. But if the notion of spacetime itself is generated as a solution of the equations, there is no requirement that the same notion of spacetime must hold on scales outside the particle horizon, for which there is no causal contact. That is of course the same problem that inflation solves phenomenologically. But people just add scalar fields to a global spacetime to get the phenomenology. In my view this problem must have a more fundamental solution, and the notion of spacetime must be emergent to deal fundamentally with the very early universe.
Dealing with the fundamental issues is not easy; but any real solution to questions involving dark energy and the like does mean going back to first principles about the nature of gravity and the nature of spacetime, which many researchers do not find comfortable. I have discussed the sociology of this elsewhere in the blogosphere, and will not do so again here. But a few of us are working on calculable, testable models - it is just that doing it with both conceptual and mathematical rigour is a slow, hard job.
Thanks for the comment David. I wasn't aware that timescape cosmology required the same number of parameters as LCDM, so that's good news. When you say it fits *all* the data, have you calculated the expected ISW (or equivalent) effect and the slow-down in the growth of structure (as seen in e.g. galaxy cluster abundances and weak lensing signals)? These are both additional evidence for LCDM, as well as the now-usual supernovae, BAO and the CMB, but they also track how structures form, rather than just the background/average expansion. If the same two parameters can also fit ISW and structure growth rates, that'd be awesome! (In LCDM both observables give similar values for Omega_m... though with a small amount of tension - so maybe there's room for timescape cosmology to do better than LCDM.)
I definitely accept that the assumption that even at last scattering the universe is close to FRW is unproven and needs to be taken, for LCDM, as an assumption. It seems to work, but of course that also isn't proof that it is correct.
Thanks again for the comment.
Hi Sean: I said all data for which calculations have been done so far. ISW/Rees-Sciama is a hard one. I have ideas about how to approach ISW, which will take at least a few years of work by more than one good PhD student. We have started, actually. As you know, the amplitude of the ISW effect is observationally a big headache for LCDM, so it should not be held up as one of its successes. (Some fellow backreactionistas and I have a recent summary of the tensions for LCDM in arXiv:1512.03313.)
My strategy for dealing with ISW is the same one which we hope may shed light on the large-angle anomalies - namely ray tracing in Szekeres models that match actual structures in the Universe on < 70/h Mpc scales, as Krzysztof Bolejko, Ahsan Nazer and I do in arXiv:1512.07364. We can already match features of the local expansion that standard FLRW assumptions do not, and by the end of the year we hope to have a model with more actual structures that fits the local expansion even better, which can then actually be tested in the CMB map-making pipeline for its effect on anomalies. As we are at an early stage, I am not prepared to discuss much of this publicly, but I have started to talk to people within the Planck collaboration.
For ISW, rather than looking at the features of our own "peculiar potential", one wants a statistical ensemble of nonlinear structures (Szekeres models being the best) that models the variety of structures we see. That is a much bigger problem. The ray-tracing effect itself is independent of the model of backreaction, and independent of timescape, as is our local "peculiar potential" work of arXiv:1512.07364. To go to ISW one has to take a statistical average - it is at that stage that the specific model of backreaction, timescape or otherwise, becomes important. That involves steps over and above the numerics of ray tracing, which I assure you are already complex enough when dealing with thousands of sources from actual galaxy surveys, as we do.
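(For the record, the quantity under discussion is the integrated Sachs-Wolfe temperature shift, which in the standard linear FLRW treatment, in units with \(c=1\), is

\[
\left.\frac{\Delta T}{T}\right|_{\rm ISW} = 2\int_{\eta_*}^{\eta_0} \frac{\partial \Phi}{\partial \eta}\,d\eta,
\]

the integral of the time derivative of the gravitational potential \(\Phi\) along the photon path from last scattering \(\eta_*\) to today \(\eta_0\). It vanishes in an Einstein-de Sitter background, where \(\Phi\) is constant, and is sourced by any time-varying potentials, whether from \(\Lambda\) or from nonlinear structures in the Rees-Sciama regime; that is why ray tracing through evolving inhomogeneous models is the natural way to compute it.)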
Regarding ISW, you're right that I know about the problems there. Unfortunately, the anomaly I spent some time working on didn't re-appear when looking for structures in other areas of the universe. This leads me to err on the side of believing that the original was just a statistical fluke. I hope it isn't though.
This paragraph is just me splitting hairs, but I *now* wouldn't call that anomaly an "ISW" anomaly, basically because where LCDM has predicted ISW signals to exist, they have been found and do match LCDM's predictions (e.g. here). The other stacking measurement, of Granett et al., doesn't match LCDM, but nor should it be called "ISW" (in my current opinion).
Of course, if a backreaction calculation matched ISW where LCDM predicts it *and* could also explain what Granett et al. saw, I would be ecstatic and would jump on that bandwagon right away!