This week ESA is hosting the first conference since Planck released its data. The conference is at ESTEC in the Dutch town of Noordwijk. I am attending this conference and will be doing my best to write updates about what was discussed during the week. You can read my introductory post, where I give my motivation for doing this, here.
The CMB is not just useful for studying the primordial universe. As soon as the CMB forms, everywhere in the universe, it travels freely, in every direction, at the speed of light. This means that, in every direction, the CMB we measure here on Earth today has travelled to us from a point billions of light years away. In principle, this makes the CMB not just a really good probe of the state of the universe where and when it was emitted, but also of everything it passed on its way to us.
This secondary use for the CMB turns out to be very fruitful, and many of the highlights from Planck relate to the ways in which the CMB interacts with the universe on its way to us. The existence of matter in the universe affects the CMB gravitationally. It causes the CMB's path to bend towards regions of over-density and away from regions of under-density. It also causes the CMB's temperature to shift as it falls into and climbs out of over- and under-dense regions. The first effect is known as lensing, and one of Planck's most impressive results is a map of the locations of matter in the universe obtained through this lensing effect. The second effect is known as the (integrated) Sachs-Wolfe effect, something I've written about in some detail.
There is a third way that the CMB is significantly affected by the intervening universe. Within clusters of galaxies there is a lot of hot gas. If the CMB passes through a cluster it can scatter off electrons in this hot gas. The effect of this scattering on the CMB is known as the Sunyaev-Zeldovich (SZ) effect. The more massive a cluster is, the more hot gas it contains and the stronger this scattering will be. Therefore, we should be able to use the CMB to detect the lines of sight along which the most massive clusters lie.
We can. And Planck has.
These SZ-detected clusters were the subject of many of this morning's talks, and the results do seem to be potentially interesting.
SZ detected clusters
One of the very nice things about detecting galaxy clusters with the SZ effect is that the probability of detection does not decrease for clusters that are further away from us. Other methods of detecting clusters (e.g. by their X-ray emission) suffer from the fact that the light from a cluster gets dimmer the further away it is. However, because the SZ effect is not light emitted by the cluster, but instead a scattering of the CMB, that is not the case. In fact, if a cluster is too close to us it becomes more difficult to detect with the SZ effect, because its scattered image in the CMB becomes too large and harder to distinguish from an ordinary CMB fluctuation.
One of the not so nice things about detecting galaxy clusters with the SZ effect is that, because the detection efficiency is almost independent of our distance from the cluster, it isn't really possible to determine how far away a detected cluster is. We can say that there probably is a cluster somewhere along a given line of sight, but not exactly where along that line of sight.
A second not so nice thing about SZ detections of galaxy clusters is that, because the CMB itself also has fluctuations, there is always a chance that one of these CMB fluctuations will mimic the SZ effect seen when the CMB passes through a cluster. This means that it is necessary to "follow up" any potential SZ cluster detection with some other probe in order to make sure it really is a cluster.
Many of Planck's candidate cluster detections had already been detected by other telescopes, and for the candidates that were genuinely new, Planck has verified their existence using the XMM-Newton X-ray telescope.
The principle behind doing cosmology with galaxy clusters is that one can use detection proxies, like SZ signal significance or X-ray temperature, to estimate the mass of a cluster. One can then use a theory to predict the number of clusters expected to exist in a given volume and mass range, and compare observation to theory.
One of the biggest problems in doing cosmology with clusters is that you don't know whether you've seen every cluster that is out there. When you compare to theory you need to make sure that you are only comparing to the theoretical clusters that you definitely would have seen. Therefore, to do cosmology properly you don't just need to know the probability that a given cluster will exist, you also need to know the probability that you would have detected it. This isn't always trivial. Some of the more interesting clusters are detected serendipitously (e.g. they were in the background of a photograph of another astrophysical object). Once they're detected, we definitely know they're there, but we have no way to quantify the probability that we would have seen them. For these clusters, all we can do is wait until some other survey that detects clusters more algorithmically sees them as well. And this other survey needs to have definitely seen the cluster, without any prior knowledge. If we point a telescope at a cluster because we already know it is there, then that's cheating and we still can't quantify the probability that we would have detected it in the first place.
This is where Planck is perfect. It surveyed the whole sky and was always going to survey the whole sky. Therefore, even though it might not have detected a great number of new clusters itself, it does allow us to do cosmology with a number of clusters that we previously knew about, but had to just smile at forlornly.
Cosmology with Planck's SZ detected clusters
The results are unexpected. To create a sample of clusters to compare to theory, Planck only keeps the clusters with an SZ-detection significance above a certain value. They have seen cluster candidates with significances below this value, and have even followed some of them up with XMM, but not all of them, so they don't know whether all of these lower-significance candidates are real clusters. With their chosen threshold, however, they do know they have a complete sample.
They find that the best-fit cosmological model from the Planck CMB data predicts that almost twice as many clusters should have been observed as actually were. This sounds very striking, and it definitely is interesting, but it isn't quite as drastic as it sounds. There are two parameters in the standard cosmological model to which the cluster abundance is most sensitive. These are the overall matter density (put simply, if you have more matter, you get more massive galaxy clusters) and the amplitude of the primordial density perturbations (clusters form in the peaks of the density field, and if the amplitude of the perturbations is bigger, then there are more large-mass peaks). The abundance of clusters is actually very sensitive to these two parameters. For example, you would need to change the amplitude of the primordial density perturbations by less than 10% of its value to double the expected number of clusters in Planck's sample.
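To get a feel for this sensitivity, here is a rough toy calculation, not the collaboration's actual pipeline. In simple structure-formation models the abundance of rare, massive clusters is controlled by an exponential factor exp(-δc²/2σ(M)²), where δc ≈ 1.686 is the standard linear collapse threshold and σ(M), the rms density fluctuation on the cluster's mass scale, is proportional to the perturbation amplitude. Taking σ(M) ≈ 0.6 at cluster scales (an invented, illustrative value), a ~10% change in amplitude roughly doubles the expected abundance:

```python
import math

DELTA_C = 1.686  # linear collapse threshold (standard value)
SIGMA_M = 0.6    # rms fluctuation at cluster scales -- invented for illustration

def abundance_factor(sigma):
    """Exponential factor controlling the abundance of rare, massive clusters."""
    return math.exp(-DELTA_C**2 / (2.0 * sigma**2))

# Increase the perturbation amplitude (and hence sigma) by 10%
ratio = abundance_factor(1.10 * SIGMA_M) / abundance_factor(SIGMA_M)
print(f"10% larger amplitude -> {ratio:.2f}x as many massive clusters")
```

The exponential makes rare-object counts a very sharp lever arm on the amplitude, which is exactly why a modest parameter shift translates into a factor-of-two change in cluster numbers.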
However, within the standard cosmological model, these parameters have been very accurately measured and this discrepancy is still a significant one.
So what does this indicate? Why has Planck not seen these missing clusters (hundreds of them)?
Here are some candidate explanations raised today during talks and in questions raised afterwards:
- Neutrinos have masses. This is now known. If neutrinos have big masses, then some of the dark matter is actually neutrinos. These neutrinos will suppress the growth of structures on small scales. Essentially, those clusters are missing because the matter that was meant to fall into them didn't quite manage to collapse into a large mass cluster because some of that matter is actually neutrinos and those neutrinos were moving too fast to fall into structures.
- Those clusters exist, but for some reason Planck hasn't seen them. What we predict is that a set of clusters of a given mass should exist. What we measure is a set of clusters with a given SZ effect. What Planck does is calibrate the relationship between these two observables by looking at a sub-set of clusters for which there are multiple measurements of the clusters' masses (e.g. the SZ effect, X-ray temperature, the lensing of the images of galaxies behind the cluster). An assumption is then made that the relationship between the SZ effect and mass found in that sub-set holds for the full sample. For Planck's observation to be explained by this assumption breaking down, there would need to be clusters with large mass but an anomalously small SZ signal. This is actually what one might expect for the most massive clusters. These clusters are likely to be irregular, or still forming. Therefore, the hot intracluster gas might not be as hot as would be typical for a cluster of the given mass. Therefore, they would exist, but Planck wouldn't see them. This sounds sensible, but don't forget, Planck is missing ~100 clusters (which is also about the number it has seen). That's a lot of missing, irregular clusters.
- We know the masses of clusters better than we thought! This sounds ridiculous. How could over-estimating one's errors cause half of the universe's clusters to just disappear? Well, it's not that ridiculous actually. The expected number of galaxy clusters of a given mass falls off exponentially fast as mass increases. Therefore, there are always expected to be many, many more clusters with smaller masses than larger ones. Now, suppose there is some scatter between the mass of an observed cluster and its SZ-detection efficiency. This means that a cluster of a given mass will sometimes have a slightly higher detection efficiency and sometimes a slightly lower one. If the Planck collaboration comes along and selects only the clusters with an SZ-detection efficiency above some threshold, then, because there are so many more low-mass clusters, it is actually far more likely that they are seeing low-mass clusters with unusually large SZ-detection efficiencies than genuinely large-mass clusters. Pulling this all together: when Planck predicts how many clusters it should see, it includes the possibility that these low-mass clusters have crossed the threshold. The wider you assume this scatter can be, the lower the limiting mass you are allowing to scatter up above the threshold. As I told you above, there are exponentially more low-mass clusters than large-mass clusters. Therefore, widening the assumed errors on the cluster masses significantly increases the expected number of observations.
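This up-scattering argument (a form of Eddington bias) is easy to demonstrate with a quick Monte Carlo sketch. Everything below is invented for illustration (a crude power-law mass function standing in for the real exponential tail, a lognormal mass proxy, arbitrary units), not Planck's actual calibration. The point is simply that, with a steeply falling mass function, adding scatter to the mass proxy inflates the number of objects crossing the detection threshold:

```python
import math
import random

random.seed(42)

def sample_mass(m_min=1.0, alpha=4.0):
    """Draw a mass from dn/dM ~ M^-alpha above m_min (inverse-CDF sampling).
    A crude stand-in for the steeply falling cluster mass function."""
    u = random.random()
    return m_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def count_detected(scatter, n=200_000, threshold=3.0):
    """Count clusters whose noisy mass proxy exceeds the detection threshold."""
    detected = 0
    for _ in range(n):
        m = sample_mass()
        # lognormal scatter between true mass and the observable proxy
        proxy = m * math.exp(random.gauss(0.0, scatter))
        if proxy >= threshold:
            detected += 1
    return detected

no_scatter = count_detected(scatter=0.0)
with_scatter = count_detected(scatter=0.3)
print(no_scatter, with_scatter)  # scatter lets low-mass clusters cross the threshold
```

Because low-mass clusters vastly outnumber high-mass ones, far more objects scatter up across the threshold than scatter down, so the predicted count grows with the assumed scatter; shrink the assumed scatter and the predicted count drops, which is exactly the resolution this explanation proposes.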
Other SZ cluster surveys
Planck isn't the only telescope capable of detecting clusters using the SZ effect. Two ground based CMB telescopes (ACT and SPT) actually have better resolution than Planck and thus can detect clusters with slightly lower masses (though they see less of the sky). It is curious to note that both ACT and SPT also see an under-abundance of clusters.
Andrei Linde: "inflation can explain anything"
The last talk of the day today was by one of the founders of inflation, Andrei Linde, who boldly stated that inflation can explain anything. Inflation is the most popular candidate for what generated the primordial density perturbations. One day I will try to explain what the paradigm of inflation actually is, but that day isn't today (in the meantime, google will give many other people's attempts).
A concerned reader might wonder how one can possibly do science with such a paradigm. If inflation really can explain anything, then how are we to determine whether it is true? Surely that isn't science!?
I have some thoughts on this that I've decided to share.
Yes, inflation can explain (almost) anything. But also, no, that doesn't mean we can't do science with inflation. The reason is that there are better models of inflation and there are worse models of inflation.
Before explaining why I think that matters, let me explain how I understand science works. Science works through the construction of models of nature and the observation of nature. Which model is chosen by science depends on two things: how believable the model is and how well it fits the data. Quantitatively speaking, this is just Bayes' Theorem, but I'm not going to speak quantitatively, because the "how believable a model is" aspect of science doesn't (usually) work quantitatively. Whenever a new measurement is made, all the competing models are judged on both of these criteria.
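For readers who do want the quantitative skeleton: Bayes' Theorem says the posterior odds between two models are the prior odds (how believable each model is) multiplied by the likelihood ratio (how well each fits the data). A toy sketch, with all numbers invented purely for illustration:

```python
# Toy Bayesian model comparison -- all numbers invented for illustration.
prior_odds = 5.0        # model A judged 5x more "believable" before the data
likelihood_ratio = 0.5  # but the new measurement fits model B twice as well

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)   # model A still preferred, but less strongly than before
```

Each new measurement updates the odds in this way, which is why a string of results favouring the "worse" models can gradually erode confidence in a paradigm even if no single result rules it out.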
A perfect example of this is the Copernican model of the solar system vs the Ptolemaic model. When the Copernican model was first presented, it actually fitted the observations of the solar system worse than the old Ptolemaic model. Nevertheless, it relatively quickly (I guess that's debatable) became established as the most believed model. Why? The reason is that it was just so much simpler. Instead of epicycles upon epicycles, all of which needed to be put in by hand, there was just one simple assumption: the sun is at the centre of the solar system. (If you're curious, the reason the model fitted the data worse is that it assumed circular orbits, whereas the true orbits are elliptical. The model still won.)
Of course, an unsatisfying model can win too, if more and more data comes out in its favour. Cosmology has exactly this situation with the very unsatisfying dark energy (it just seems to work!).
What has this got to do with inflation? Well, when we make measurements, we test the various different models of inflation. If the measurements favour the simpler, or more believable (better) models then inflation has had a victory. If the measurements favour the less believable (worse) models, then inflation has suffered a defeat. Unfortunately, there is no alternative paradigm to inflation with an equally believable foundation, therefore no observation could (yet) cause a paradigm shift away from inflation. However, there could be observations that would make it less and less believable, fuelling desires to find alternative models. Linde is right that inflation could still explain those hypothetical observations, but if an alternative model came along that also explained those observations in a much simpler way, science would turn to that model.
So what is the true status of inflation models given Planck's results? I think many of the talks I will attend tomorrow will be on this topic. So, stay tuned... (the short answer is that inflation has done very well)
(Note however that tomorrow includes the conference dinner. Therefore, it is unlikely that I will find time to write a post tomorrow. More likely, I will write a summary of both tomorrow and Friday's talks sometime on Friday afternoon/evening)
[Some rumours from day three are now here]