tag:blogger.com,1999:blog-15137043782541202832024-02-20T08:36:20.290-08:00The Trenches of DiscoveryShaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.comBlogger144125tag:blogger.com,1999:blog-1513704378254120283.post-59805181137086808062021-09-09T19:27:00.000-07:002021-09-09T19:27:42.292-07:00Trenches of Cosmology<p>Hello good people, how are you?</p><p>While this blog has languished, my own attempts to share research with the world haven't (at least not entirely).</p><p>Recently I began live-streaming myself discussing cosmology research. <a href="https://www.youtube.com/channel/UC570WrRn_tBSK1tU1TzOcAA" target="_blank">You can find the channel where it happens here</a>.</p><p>I will be live-streaming tomorrow (Friday) at 9pm UTC as well, on whether there might be a "mirror world" around us, making up part of dark matter. The stream is embedded below...</p><p>If you liked the Trenches of Discovery blog, hopefully you'll like the Trenches of Cosmology livestreams 😀</p><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/dbzJQx8te28" width="320" youtube-src-id="dbzJQx8te28"></iframe></div><br /><p>I'll see if I can get James and Michelle to join on some livestreams sometime in the future and the blog can re-awaken as a sort of podcast maybe? </p><p>Let me know in the comments if you'd like that!</p>Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com0tag:blogger.com,1999:blog-1513704378254120283.post-70920568587727340022020-06-14T22:10:00.003-07:002021-09-09T19:29:32.643-07:00Cosmology TalksOn the off chance that anybody is still checking out this blog, and doesn't already know, I've made <a href="https://www.youtube.com/channel/UCstdttIo3HM6h3hDk_v2hug" target="_blank">a YouTube channel</a> with talks in it about cosmology.<br />
<br />
Unfortunately, the channel is aimed at a technical audience, whereas this blog was aimed at the general public. So it might not be what everyone reading this is looking for. 🤷<br />
<br />
Anyway, if you're a current (or ex) cosmologist, or want to put in a lot of background work to understand the concepts, you might want to check out the channel (and subscribe to it too!).<br />
<br />
The video below is the latest at the channel. It's on Fast Radio Bursts and how they can be used for cosmology (the expert in the video is Amanda Weltman, a professor at the University of Cape Town). The content is really good and cosmologists should definitely check it out, although be warned the video quality before the slides are brought out isn't great due to internet bandwidth issues.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/cqK18-O3ptA/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/cqK18-O3ptA?feature=player_embedded" width="320"></iframe></div>
<br />
<br />
Feel free to ask any questions here or there.<br />Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com0tag:blogger.com,1999:blog-1513704378254120283.post-29331146930367228642016-04-07T15:06:00.001-07:002016-04-07T15:07:36.312-07:00The shape of physical laws<span style="font-family: inherit;"><i>[The following is a guest post from</i><i style="background-color: white; line-height: 20.02px;"> <a href="http://www.ita.uni-heidelberg.de/~spirou/" style="text-decoration: none;" target="_blank">Bjoern Malte Schaefer</a> (<a href="http://trenchesofdiscovery.blogspot.co.nz/2014/03/quantum-mechanics-and-planck-spectrum.html" target="_blank">see his last guest post here</a>). Bjoern is still one of the curators of the <a href="http://cosmologyquestionoftheweek.blogspot.co.uk/" style="text-decoration: none;" target="_blank">Cosmology Question of the Week</a> blog, which is also still worth checking out. Enjoy!]</i></span><br />
<b><span style="font-family: inherit;"><br /></span></b>
<span style="font-family: inherit;"><b>Introduction</b>
</span><br />
<span style="font-family: inherit;"><b><br /></b>
The aim of theoretical physics is a mathematical description of the processes taking place in Nature. Science is <a class="https" href="https://en.wikipedia.org/wiki/Empirical_research" title="empirical">empirical</a>, meaning that its predictions need to comply with experimental results, but other criteria, though not decisive, are also very important: Theoreticians look for elegance, consistency and simplicity in their descriptions; they aim for abstraction and unification, and look for <a class="https" href="https://en.wikipedia.org/wiki/Reductionism" title="reduction of the laws of Nature to a few fundamental principles">reduction of the laws of Nature to a few fundamental principles</a> and at the same time for <a class="https" href="https://en.wikipedia.org/wiki/Analogy" title="analogies in the description of different phenomena">analogies in the description of different phenomena</a>. The subject of this article is to show how these aspects are realized in <a class="https" href="https://en.wikipedia.org/wiki/Classical_physics" title="classical physics">classical physics</a>, although many of the arguments apply to <a class="https" href="https://en.wikipedia.org/wiki/Special_relativity" title="relativistic physics">relativistic physics</a> as well - or only show their true meaning in this context. I do apologize for some of the mathematics, and I promise to keep it as compact as possible.
</span><br />
<span style="font-family: inherit;"><b><br /></b>
<b>Formulation of physical laws with differential equations</b>
</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Physical laws are formulated with <a class="https" href="https://en.wikipedia.org/wiki/Differential_equation" title="differential equations">differential equations</a>, which relate the rate of change of a quantity to others, for instance the rate of change of the position with time to the velocity. This rate of change is called a <a class="https" href="https://en.wikipedia.org/wiki/Derivative" title="derivative">derivative</a>. The solution to these equations usually involve an initial value of a quantity under consideration, and compute the value at each instant in solving the equation. The fact that the laws of physics are formulated with differential equations is very advantageous because they separate the problems of evolution of physical systems from the choice of <a class="https" href="https://en.wikipedia.org/wiki/Initial_condition" title="initial conditions for the evolution">initial conditions for the evolution</a>. Using differential equation for e.g. deriving the motion of planets leads to the abstraction to what forces planets are subjected and how they move under these forces. It predicts naturally the <a class="https" href="https://en.wikipedia.org/wiki/Orbit" title="orbits of planets">orbits of planets</a> without fixing a priori the orbits themselves, as for example <a class="https" href="https://en.wikipedia.org/wiki/Johannes_Kepler" title="Johannes Kepler">Johannes Kepler</a> might have thought.
</span><br />
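<br />
<span style="font-family: inherit;">To make this separation concrete with the simplest possible case (written here in LaTeX notation, purely as an illustration): a quantity changing at a constant rate obeys a differential equation that fixes the form of the evolution, while the initial value only enters as a constant of integration.</span><br />
<pre>
\frac{dx}{dt} = v
\qquad\Longrightarrow\qquad
x(t) = x_0 + v\,t
</pre>
<span style="font-family: inherit;">The same equation governs every body moving at constant velocity; individual solutions differ only through the initial position x_0 (and the velocity v).</span><br />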
<span style="font-family: inherit;"><b><br /></b>
<b>Classical mechanics</b>
</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Let’s discuss a straightforward example: The motion of a body under the action of a force in <a class="https" href="https://en.wikipedia.org/wiki/Newtonian_dynamics" title="Newtonian dynamics">Newtonian dynamics</a>. Newton formulated an <a class="https" href="https://en.wikipedia.org/wiki/Equations_of_motion" title="equation of motion">equation of motion</a> for this problem, which stipulates that the acceleration of a body is equal to the force acting on it, divided by the mass of the body. If, in addition, the acceleration is defined as the rate of change of velocity with time and the velocity as the rate of change of position with time, we get the usual form of Newton’s equation of motion: The second derivative of position is equal to the force divided by mass: This is the prototype of a differential equation. It does not fix the trajectory of the body (the position of the object as a function of time) but leaves that open as a solution to the differential equation under specified initial conditions (the position of the object at the starting time).
</span><br />
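<br />
<span style="font-family: inherit;">To make this concrete, here is a small numerical sketch (in Python, purely illustrative): the equation of motion, here with a constant force as in free fall, is integrated step by step, and different initial conditions give different trajectories from the same law.</span><br />
<pre>
# Toy integration of Newton's equation of motion, m * x'' = F,
# for a constant force (free fall). The force law is the "law";
# the starting position and velocity are the "initial conditions".

def trajectory(x0, v0, force, mass, dt=0.01, steps=300):
    """Simple Euler integration of the second-order equation of motion."""
    x, v = x0, v0
    path = [x]
    for _ in range(steps):
        a = force / mass   # acceleration equals force divided by mass
        v += a * dt        # acceleration is the rate of change of velocity
        x += v * dt        # velocity is the rate of change of position
        path.append(x)
    return path

# Same law, different initial conditions -> different solutions.
dropped = trajectory(x0=0.0, v0=0.0, force=-9.81, mass=1.0)
thrown  = trajectory(x0=0.0, v0=5.0, force=-9.81, mass=1.0)
</pre>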
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Already in Newton’s equation of motion there are two very interesting details. Firstly, the solution to the equation without any force is found to be one with a constant velocity, or with a linearly increasing coordinate, which is known as <a class="https" href="https://en.wikipedia.org/wiki/Inertia" title="inertial motion">inertial motion</a>. And secondly, the equation of motion is a second order differential equation, because of the double time derivative. This has the important consequence that motion is <a class="https" href="https://en.wikipedia.org/wiki/Time_reversibility" title="invariant if time moved backwards instead of forwards">invariant if time moved backwards instead of forwards</a>.
</span><br />
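<br />
<span style="font-family: inherit;">The forward-backward symmetry can even be checked numerically (again a purely illustrative sketch): integrate the motion forward for some time, reverse the velocity, integrate for the same time again, and the body retraces its path back to where it started.</span><br />
<pre>
# Sketch of the time-reversal invariance of a second-order equation of motion.
def step(x, v, a, dt):
    """One velocity-Verlet step for a constant acceleration a."""
    v_half = v + 0.5 * a * dt
    x_new = x + v_half * dt
    v_new = v_half + 0.5 * a * dt
    return x_new, v_new

x, v, a, dt = 0.0, 3.0, -9.81, 0.001
for _ in range(1000):      # integrate forward for one second
    x, v = step(x, v, a, dt)
v = -v                     # run the motion "backwards in time"
for _ in range(1000):      # integrate for another second
    x, v = step(x, v, a, dt)
print(round(x, 6))         # back (essentially) at the initial position x = 0
</pre>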
<span style="font-family: inherit;"><b><br /></b>
<b>Classical gravity</b>
</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">A generalisation to this idea is the classical description of <a class="https" href="https://en.wikipedia.org/wiki/Gravity" title="gravity">gravity</a>. In a very similar way, the gravitational potential is linked through a second-order differential equation to the source of that field, i.e. a central mass. How would this work by analogy? In the mechanics example above, the source of motion was given by the force and both were linked by the second derivatives. Here, the second derivatives of the potential are linked to the sourcing mass again by a second order differential equation, which in this context is called the <a class="https" href="https://en.wikipedia.org/wiki/Poisson%27s_equation" title="Poisson-equation">Poisson-equation</a>, named after the mathematician <a class="https" href="https://en.wikipedia.org/wiki/Sim%C3%A9on_Denis_Poisson" title="Denis Poisson">Denis Poisson</a>.
</span><br />
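<br />
<span style="font-family: inherit;">Written out in standard textbook notation (quoted here only to make the analogy visible, in LaTeX form), the two second-order equations sit side by side: second time derivatives of the position are sourced by the force, and second spatial derivatives of the potential are sourced by the mass density.</span><br />
<pre>
m\,\frac{d^2 x}{dt^2} = F
\qquad\longleftrightarrow\qquad
\nabla^2 \Phi = 4\pi G\,\rho
</pre>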
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Would this idea work in any number of <a class="https" href="https://en.wikipedia.org/wiki/Dimension" title="dimensions">dimensions</a>? It turns out that one needs at least three dimensions to have a field linked to the source by a second-order differential equation, if the field is required to vanish at large distances from the source and if the field is <a class="https" href="https://en.wikipedia.org/wiki/Isotropy" title="symmetric around its source">symmetric around its source</a>, which are all very sensible requirements. Surely the gravitational field generated by a point mass would be the same in every direction and the attracting effect of the gravitational field should decrease with increasing distance.
</span><br />
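<br />
<span style="font-family: inherit;">A quick way to see the role of the number of dimensions is to quote the standard spherically symmetric solution around a point source in d spatial dimensions (again just an illustration in LaTeX notation):</span><br />
<pre>
\Phi(r) \propto r^{-(d-2)} \quad (d \geq 3), \qquad
\Phi(r) \propto \ln r \quad (d = 2), \qquad
\Phi(r) \propto r \quad (d = 1)
</pre>
<span style="font-family: inherit;">Only for three or more dimensions does the potential fall off and vanish far from the source; in two dimensions it grows logarithmically and in one dimension it grows linearly, so the requirements above indeed single out three as the minimal number of dimensions, with the familiar 1/r potential in exactly three.</span><br />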
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Is there an analogy to the forward-backward-symmetry of Newton’s equation of motion? The field equation is invariant if one <a class="https" href="https://en.wikipedia.org/wiki/Parity_(physics)" title="interchanges the coordinates by their mirror image">interchanges the coordinates by their mirror image</a>, therefore, Nature does not distinguish between left and right in fields, and not between forwards and backwards in motion. These are called invariances, in particular the invariance of the laws under time-reversal and parity-inversion. And finally, there’s an analogy to inertial motion, because no gravitational field is sourced in the absence of a massive object. The mass is the origin of the field in the same way as force is the reason for motion.
</span><br />
<span style="font-family: inherit;"><b><br /></b>
<b>Variational principles</b>
</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;"><a class="https" href="https://en.wikipedia.org/wiki/Joseph-Louis_Lagrange" title="Joseph Louis Lagrange">Joseph Louis Lagrange</a> discovered <a class="https" href="https://en.wikipedia.org/wiki/Variational_principle" title="a new way of formulating physical laws">a new way of formulating physical laws</a>, which is very attractive from a physical point of view and which is easily generalizable to all fields of physics. How it works can be seen in a very nice analogy, which is <a class="https" href="https://en.wikipedia.org/wiki/Fermat%27s_principle" title="Fermat’s principle">Fermat’s principle</a> for the propagation of light in optics. Clearly, light rays follow paths that are determined by the <a class="https" href="https://en.wikipedia.org/wiki/Refraction" title="laws of refraction">laws of refraction</a>, and computing a light path using <a class="https" href="https://en.wikipedia.org/wiki/Snell%27s_law" title="Snell’s law">Snell’s law</a> is very similar to using Newton’s equation of motion: At each instant one computes the rate of change of direction, which is dictated by the refractive index of the medium in the same way as the rate of change of velocity is given by the force (divided by mass). But <a class="https" href="https://en.wikipedia.org/wiki/Pierre_de_Fermat" title="Fermat">Fermat</a> formulated this very differently: Among all possible paths leading from the initial to the final point light chooses the fastest path. This formulation sounds weird and immediately poses a number of questions: How would the light know? Does it try out these paths? How would the light ray compare different paths? It is apparent that Fermat’s formulation is conceptually not easy to understand but one can show that it leads to exactly the right equation of motion for the light ray.
</span><br />
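<br />
<span style="font-family: inherit;">A small numerical sketch (illustrative only) shows how Fermat’s formulation works in practice: for light crossing from one medium into another, scan over all possible crossing points on the interface, pick the one with the shortest travel time, and Snell’s law comes out automatically.</span><br />
<pre>
import math

# Fermat's principle as an optimisation: among all crossing points on the
# interface y = 0, the fastest path from A to B reproduces Snell's law.
n1, n2 = 1.0, 1.5            # refractive indices of the two media
A = (0.0, 1.0)               # starting point, above the interface
B = (1.0, -1.0)              # end point, below the interface

def travel_time(x):
    """Travel time along the broken path A -> (x, 0) -> B (units with c = 1)."""
    d1 = math.hypot(x - A[0], A[1])
    d2 = math.hypot(B[0] - x, B[1])
    return n1 * d1 + n2 * d2

# crude search over candidate crossing points
x_best = min((i / 10000.0 for i in range(10001)), key=travel_time)

sin1 = (x_best - A[0]) / math.hypot(x_best - A[0], A[1])
sin2 = (B[0] - x_best) / math.hypot(B[0] - x_best, B[1])
print(n1 * sin1, n2 * sin2)  # nearly equal: n1 sin(theta1) = n2 sin(theta2)
</pre>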
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Lagrange’s idea was to construct an abstract function in analogy to the travel time of the light ray, and to measure a quantity called action. Starting from his action he could find a physically correct equation of motion by constructing <a class="https" href="https://en.wikipedia.org/wiki/Principle_of_least_action" title="a path that minimizes the action">a path that minimizes the action</a>, in complete analogy to Fermat’s principle. Lagrange found out that if one starts in his abstract function with squares of first derivatives of the dynamical quantities, they would automatically lead to second order equations of motions, so the basic parity and time-reversal symmetries are fulfilled. In addition he discovered, that if he based his abstract function on quantities that are identical to all observers, he could incorporate a <a class="https" href="https://en.wikipedia.org/wiki/Principle_of_relativity" title="relativity principle">relativity principle</a> and make a true statement about a physical system independent from the choice of an observer.
</span><br />
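<br />
<span style="font-family: inherit;">In modern notation (a textbook form, given only to make the words above concrete): the action is the time integral of the Lagrange function, the Lagrange function contains the square of a first derivative, and demanding that the action be minimal yields a second-order equation of motion.</span><br />
<pre>
S = \int L\, dt, \qquad
L = \frac{m}{2}\left(\frac{dx}{dt}\right)^2 - V(x)
\qquad\Longrightarrow\qquad
m\,\frac{d^2 x}{dt^2} = -\frac{dV}{dx}
</pre>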
<span style="font-family: inherit;"><b><br /></b>
<b>Universality</b>
</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">The formulation of the laws of physics with differential equations is very attractive because it allows to describe different solutions that might exist for a physical problem. For the motion of the planets around the Sun there is a universal mathematical description, and the planetary orbits themselves only differ by choosing different initial conditions for the differential equation. There is, however, yet another feature present in the equation of motion or the field equation, which is related to Lagrange’s abstract description.
</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Clearly, any description of a process must be independent if the length-, time- and mass-scales involved are changed: This feature is referred to as universality or <a class="https" href="https://en.wikipedia.org/wiki/Mechanical_similarity" title="mechanical similarity">mechanical similarity</a>, because it allows to map solutions to the equation of motion onto others. For instance, the orbit of <a class="https" href="https://en.wikipedia.org/wiki/Mercury_(planet)" title="Mercury">Mercury</a> would be a scaled version of the orbit of <a class="https" href="https://en.wikipedia.org/wiki/Neptune" title="Neptune">Neptune</a>, the orbits can be mapped onto each other by a redefinition of the length- and time-scales involved. This was considered be an essential property of the laws of physics, because it implies that problems fall into certain universality classes and that there is no limit of validity of the solution. Coming back to the problem of the motion of objects in gravitational fields one finds <a class="https" href="https://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion" title="Kepler’s third law">Kepler’s third law</a>, which states that whatever the orbit of a planet, the ratio between the third power of the orbital radius divided by the orbital time squared is always a constant. It is completely sufficient to solve the problem of an orbiting planet in principle, the orbits of other planets do not even require solving the differential equation again (with different initial conditions), but all possible solutions follow from a simple scaling operation. A more comical example are <a class="https" href="https://en.wikipedia.org/wiki/Astronaut" title="astronauts">astronauts</a> walking on the surface of <a class="https" href="https://en.wikipedia.org/wiki/Moon" title="the moon">the moon</a> with much smaller gravity: their movements appears to be in slow motion, but speeding up the playback would show them to move perfectly normal.
</span><br />
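<br />
<span style="font-family: inherit;">Kepler’s third law makes this scaling explicit, and it is easy to check with rough orbital data (the approximate numbers below are inserted purely for illustration):</span><br />
<pre>
# Kepler's third law: a^3 / T^2 is (nearly) the same constant for every planet.
# Semi-major axes in astronomical units, orbital periods in years (approximate).
planets = {
    "Mercury": (0.387, 0.241),
    "Earth":   (1.000, 1.000),
    "Neptune": (30.07, 164.8),
}

for name, (a, T) in planets.items():
    print(name, round(a**3 / T**2, 3))   # all close to 1 in these units
</pre>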
<span style="font-family: inherit;"><b><br /></b>
<b>Relativity</b>
</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">The last question is of course what the true meaning of Lagrange’s abstract function should be: It is very successful in deriving physically viable equations of motion and field equations, but before the advent of relativity it was unclear how it should be interpreted: It turns out that the Lagrange-function of moving objects is the <a class="https" href="https://en.wikipedia.org/wiki/Proper_time" title="proper time">proper time</a> and that the Lagrange-function of the gravitational field is the <a class="https" href="https://en.wikipedia.org/wiki/Scalar_curvature" title="spacetime curvature">spacetime curvature</a>. Objects move along <a class="https" href="https://en.wikipedia.org/wiki/Geodesics_in_general_relativity" title="trajectories that minimize the proper time elapsing on a clock moving with that object">trajectories that minimize the proper time elapsing on a clock moving with that object</a>, and the <a class="https" href="https://en.wikipedia.org/wiki/Einstein_field_equations" title="gravitational field is determined as the minimal curvature compatible with a source of the field">gravitational field is determined as the minimal curvature compatible with a source of the field</a>. These interpretations require that spacetime has at least four dimensions (instead of three), and they lead to viable second-order differential equations respecting time-reversal and parity-invariance. Both quantities, proper time and curvature, are invariant under changes of the reference frame, so relativity is respected, and are <a class="https" href="https://en.wikipedia.org/wiki/General_covariance" title="invariant under choosing new coordinates - this is in fact the expression of universality">invariant under choosing new coordinates - this is in fact the expression of universality</a>. And one has learned one additional thing, which must appear beautiful to everybody: The laws of Nature are geometric, a very complicated, position dependent geometry, whose properties are defined through differential equations. The lines of least proper time are straight in spacetime in the absence of a force, and <a class="https" href="https://en.wikipedia.org/wiki/Friedmann%E2%80%93Lema%C3%AEtre%E2%80%93Robertson%E2%80%93Walker_metric" title="considering gravitational fields in cosmology">considering gravitational fields in cosmology</a> it is even the case that the expansion of space is constant an empty universe, both as a reflection of inertial motion. But there is one new phenomenon: Gravitational fields do not vanish at large distances as Newton thought, rather, they start increasing at distances above 10^25 meters, where gravity becomes repulsive under the action of the <a class="https" href="https://en.wikipedia.org/wiki/Cosmological_constant" title="cosmological constant">cosmological constant</a>, and this feature brakes scale invariance.
</span><br />
<span style="font-family: inherit;"><b><br /></b>
<b>Summary</b>
</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;"> The formulation of the laws of Nature led physicists to a geometric description of physical processes in the form of differential equations, and variational principles are a very elegant way of formulating the origin of equations of motion and field equations. The true meaning of the variational principles only became apparent with the advent of <a class="https" href="https://en.wikipedia.org/wiki/General_relativity" title="relativity">relativity</a>. It is even the case that other forces, like <a class="https" href="https://en.wikipedia.org/wiki/Electromagnetism" title="electromagnetism">electromagnetism</a>, the <a class="https" href="https://en.wikipedia.org/wiki/Nuclear_force" title="strong">strong</a> and the <a class="https" href="https://en.wikipedia.org/wiki/Weak_interaction" title="weak">weak</a> nuclear force have a analogous description, involving an abstract geometry on their own. Finally, it was realised by <a class="https" href="https://en.wikipedia.org/wiki/Richard_Feynman" title="Richard Feynman">Richard Feynman</a> that the way in which Nature realizes variational principles was through the <a class="https" href="https://en.wikipedia.org/wiki/Wave_particle_duality" title="wave-particle duality">wave-particle duality</a> of quantum mechanics - but that is really the topic of another article.</span>Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com2tag:blogger.com,1999:blog-1513704378254120283.post-41177333319151146032016-02-11T02:27:00.002-08:002016-02-11T02:27:30.583-08:00LIGO's search for gravitational waves.Here is an infographic showing how LIGO searches for gravitational waves. We're happy for it to be shared, so long as it is attributed to <a href="https://twitter.com/ashadesigns" target="_blank">@ashadesigns</a> and The Trenches of Discovery.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0QIx_0GIufJ04a38PzpndbunNxUWuL_n6rI-Wia1Tjfde3p7sHPrtJke0QLohW0L5ZOpTXneOFezlE50MB9xJPUERqYpjvUM_p-Zyx1VhpiqGc4j9yhb_DfsasHa-5QQ0UHb2GcqLElI/s1600/12736932_1032547493435434_106378655_o.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="310" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0QIx_0GIufJ04a38PzpndbunNxUWuL_n6rI-Wia1Tjfde3p7sHPrtJke0QLohW0L5ZOpTXneOFezlE50MB9xJPUERqYpjvUM_p-Zyx1VhpiqGc4j9yhb_DfsasHa-5QQ0UHb2GcqLElI/s400/12736932_1032547493435434_106378655_o.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
<br />
For more details on why you should be interested in LIGO and gravitational waves, <a href="http://www.nature.com/news/gravitational-waves-6-cosmic-questions-they-can-tackle-1.19337" target="_blank">maybe start here</a> and see where your interest takes you. The press conference where LIGO will hopefully announce a detection of gravitational waves will be webcast at the location below, apparently:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe width="320" height="266" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/zyo4DFr4D4I/default_live.jpg" src="https://www.youtube.com/embed/zyo4DFr4D4I?feature=player_embedded" frameborder="0" allowfullscreen></iframe></div>
<br />Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com0tag:blogger.com,1999:blog-1513704378254120283.post-40460388423689221202016-02-09T11:03:00.000-08:002016-02-09T11:03:24.606-08:00Dark energy: onus of proof reversed<div class="separator" style="clear: both; text-align: left;">
<i>[Note from Shaun: The following is a guest post by <a href="http://cosmo.astro.umk.pl/~boud/" target="_blank">Boud Roukema</a>. Boud is a professor at the <a href="http://www.ca.umk.pl/en/frontpage" target="_blank">Toruń Centre for Astronomy</a> at <a href="http://www.umk.pl/en/" target="_blank">Nicolaus Copernicus University</a>. Boud is one of the coauthors of the papers on the pro-backreaction side of the debate I referred to <a href="http://trenchesofdiscovery.blogspot.co.nz/2015/07/cosmological-backreaction.html" target="_blank">in this post</a>. Boud also blogs at <a href="http://cosmo.torun.pl/blog" target="_blank">Topological Acceleration</a>, where the following post first appeared on 22 January this year.]</i></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZ47wb0p32m5iR9tMz2BK0t5RZdOBIk_5sQkxLJRDAqhpRnbKU27o5AQUXuqDf3Aj-J_DqOYN5QXwXsKEncna-hQLbtayZVW4i3sRz0PAST4z4aJnB46JCfPYCohWkU3WCFa4j_Bu2X_Y/s1600/voids_negcurv.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="263" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZ47wb0p32m5iR9tMz2BK0t5RZdOBIk_5sQkxLJRDAqhpRnbKU27o5AQUXuqDf3Aj-J_DqOYN5QXwXsKEncna-hQLbtayZVW4i3sRz0PAST4z4aJnB46JCfPYCohWkU3WCFa4j_Bu2X_Y/s400/voids_negcurv.jpg" width="400" /></a></div>
<br />
The simplest explanation for "dark energy" is that it <a href="http://arxiv.org/abs/0707.2153">measures recent evolution of average negative curvature</a>. We think that it mainly represents the recent formation of cosmic voids on scales of tens of megaparsecs; these voids dominate scalar averaged quantities. In other words, the onus of proof has been reversed, in a quantified way: dark energy as something beyond classical general relativity should be disfavoured by Occam's Razor <strong>unless</strong> a relativistic inhomogeneous cosmological model is used. This seems so far to have largely gone under the radar...<br />
<br />
Observationally, there's no disputing the existence of dark energy in the restricted sense of providing a good observational fit to several of the main cosmological observational datasets, modulo a rather unrealistic assumption of the model used in the fitting procedure. The assumption is that the class of possible spacetimes, i.e., solutions of the Einstein equation of general relativity, is the FLRW (Friedmann-Lemaître-Robertson-Walker) family. The FLRW models require that after choosing a way to split up space and time (a foliation), the spatial slice (i.e., a 3-dimensional space) is homogeneous—the density is the same everywhere, so galaxies and voids cannot exist. In fact, cosmologists usually make a hack, modelling galaxies and voids by patching Newtonian gravity into an Einstein "background"—since using the Einstein equation is more tricky. This hack bypasses the basic problem without solving it.<br />
<br />
Since in reality, galaxies, clusters of galaxies, the cosmic web and voids and supervoids exist beyond any reasonable doubt, the FLRW family should be expected to go wrong at recent epochs and at small (less than a few gigaparsecs) scales. And the small-scale, recent epoch is the only epoch at which a non-zero cosmological constant (or dark energy parameter Ω<sub>Λ</sub>) can (at present) be observationally distinguished from a zero cosmological constant. So it happens that just where and when we can expect things to go wrong with FLRW, Ω<sub>Λ</sub> suddenly appears, <em>provided that we assume FLRW in our interpretation of the data despite expecting FLRW to be wrong!</em> What is it that goes wrong? The picture above shows voids on the scales of a few tens of megaparsecs from the <a href="http://2dfgrs.net/">2dFGRS</a>. From a relativistic space point of view, expansion rates are different in different regions. This also happens in the hack of adding Newtonian galaxy and void formation to Einsteinian expansion, but in that case the expansion is forced to be rigid, by assumption, preventing the Einstein equation from being applied correctly. Even when we interpret the observations from a rigid comoving space point of view, the numbers show that the ratio of the "peculiar velocities" of galaxies coming out of voids to the sizes of the voids is big: several hundred km/s divided by something like 10 Mpc, giving a few times 10 km/s/Mpc. This void <em>peculiar expansion rate</em> is not much smaller than the Hubble constant, which is about 70 km/s/Mpc. At an order of magnitude level, the expansion rate is definitely inhomogeneous. <strong><em>This is why interpreting the observations in terms of homogeneous expansion gives a big error.</em></strong><br />
<br />
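The order-of-magnitude estimate above can be spelled out in a couple of lines (a purely illustrative sketch using the rough numbers quoted in the previous paragraph):<br />
<pre>
# Rough estimate of a void's "peculiar expansion rate", compared to H0.
v_peculiar = 300.0      # km/s, typical velocity of galaxies leaving a void
void_size = 10.0        # Mpc, typical void scale
H0 = 70.0               # km/s/Mpc, Hubble constant

void_rate = v_peculiar / void_size       # about 30 km/s/Mpc
print(void_rate, void_rate / H0)         # a sizeable fraction of H0
</pre>
<br />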
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdOE0qnDvTv9_heNsTxyp9AHgHTTd3f3eL6oTDv2EcPR3EBxqicpHv8bzqV9LGWjs34TNNzEgx2oNjG-qYpuVxh-qkQccnI-8lB28hu2LgrNjiGGecdrF-45vaZKpjipy4oTxQUP_hj80/s1600/DE.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="236" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdOE0qnDvTv9_heNsTxyp9AHgHTTd3f3eL6oTDv2EcPR3EBxqicpHv8bzqV9LGWjs34TNNzEgx2oNjG-qYpuVxh-qkQccnI-8lB28hu2LgrNjiGGecdrF-45vaZKpjipy4oTxQUP_hj80/s320/DE.jpg" width="320" /></a></div>
In other words, <em>unless</em> we use a relativistic cosmological model that takes inhomogeneous curvature and <a href="http://arxiv.org/abs/1303.4444">virialisation into account</a>, we cannot claim that the "detected" Ω<sub>Λ</sub> is anything other than a structure formation parameter of a fit through cosmological data using an oversimplified fitting function. The second picture at the right shows that going from right (early times) to left (today), the amount of <span style="color: green;"><strong>in</strong>homogeneity (the virialisation fraction)</span> grows from almost nothing to a big fraction of the total mass density today. Alternatively, if we ignore the growth in inhomogeneity, then we get <span style="color: red;">Ω<sub>Λ</sub>, <em>interpreted</em> from the data assuming homogeneity,</span> growing from almost nothing to a big fraction (70%) of the total density today. If we ignore inhomogeneity, then miraculously dark energy appears instead!<br />
<br />
Several relativistic structure formation cosmological models are available, though still in their infancy. However, what has been a little distracting from working on these is that some observational cosmologists thought that there existed a mathematical theorem—the Green and Wald formalism—showing that dark energy could not be a "fitting function" description of curvature and kinematical backreaction, the general-relativistic effects of treating both structure formation and expansion of the Universe together. This is why my colleagues and I had to <a href="http://arxiv.org/abs/1505.07800">publish</a> a <a href="http://cqgplus.com/2016/01/20/the-universe-is-inhomogeneous-does-it-matter">clarification</a> showing the main flaws in this reasoning. In particular, the Green and Wald formalism is not applicable to the main relativistic structure formation cosmological models that have been proposed in the research literature over the past five years or so. Green and Wald's formalism remains an interesting contribution to the field of relativistic cosmology, but it does not "save" dark energy from being anything more exotic than spatially averaged, evolving negative curvature. After a few <a href="https://twitter.com/seanmcarroll/status/604325262197567490">tweets [1]</a> <a href="https://twitter.com/SyksyRasanen/status/690595794601271296">[2]</a>, a <a href="http://trenchesofdiscovery.blogspot.fi/2015/07/cosmological-backreaction.html">blog entry,</a> and a <a href="https://telescoper.wordpress.com/2016/01/20/the-universe-is-inhomogeneous-does-it-matter/">reblog</a> we can get back to work. :)
Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com7tag:blogger.com,1999:blog-1513704378254120283.post-83411153048643639702015-09-09T04:33:00.001-07:002015-09-09T04:33:45.424-07:00The future is now! The rise of genome editing<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihybyvmmI-B6kg8anEEEvDdrpP-amWD6vu23PoU5Tm1BJZ7qvfZU3Em5JlSfUNkoOt3CJsTwwcwNPUrjYhMCNjCQOs1vQ9wsedmw4aIOtMtD2B4KnKss_86cCXu-wJ6_lqIqZNv0xsWU8/s1600/gene-editing.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihybyvmmI-B6kg8anEEEvDdrpP-amWD6vu23PoU5Tm1BJZ7qvfZU3Em5JlSfUNkoOt3CJsTwwcwNPUrjYhMCNjCQOs1vQ9wsedmw4aIOtMtD2B4KnKss_86cCXu-wJ6_lqIqZNv0xsWU8/s200/gene-editing.jpg" width="200" /></a></div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
It’s an exciting time to be a
biologist! Every few years it seems like there is another significant technical
breakthrough that allows biological research either to speed up exponentially,
or to enter into areas that were previously inaccessible. In just the last
decade or so we’ve seen the publication and digitisation of the <a href="https://en.wikipedia.org/wiki/Human_Genome_Project">human genome</a>
(without which most current life sciences work would be either impossible or
impractical), the development of <a href="https://en.wikipedia.org/wiki/Super-resolution_microscopy">super-resolution microscopy</a> (allowing us for the
first time to see live biological processes on a truly molecular scale), the
facilitation of <a href="https://en.wikipedia.org/wiki/DNA_sequencing">DNA sequencing</a> (making it economical on a large scale), and the
invention or improvement of a whole range of technologies (enzyme-conjugation
systems, flow cytometry, fluorescence-activated cell sorting etc.) that won’t
mean much to anyone outside the field but that have revolutionised the way
research is done. It’s been a long road, but it finally seems like the
ambitions of researchers are starting to be matched by the available
technology, whether it be computational, mechanical, chemical, or biological. The
latest innovation that is taking the biology world by storm is the enormous progress
that has recently been made in an area that has incalculable potential in both
academic and clinical contexts: genome editing. In this post I will try to
explain these recent advancements, why researchers are excited, and why you
should be too!</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
<b>What is genome editing?<o:p></o:p></b></div>
<div class="MsoNormal" style="text-align: justify;">
<b><br /></b></div>
<div class="MsoNormal" style="text-align: justify;">
Genome editing is pretty much
what you’d expect from the name: editing the DNA sequence within the genome of
a particular cell. This can involve adding DNA, removing DNA, swapping some DNA
for other DNA, or moving DNA around within the genome. It is difficult to
overstate how powerful a tool genome editing can be when it comes to biological
research. Much of the work done in molecular life sciences is trying to work
out how various molecules fit into the whole machine that is an organism –
genome editing allows researchers to directly tinker with these molecules
(typically proteins, which are of course encoded by their associated DNA
sequence) and observe the effects. This could involve removing the gene
encoding a given protein from an organism and seeing what defects arise.
Alternatively, you could introduce a specific mutation in a gene to see if that
has functional relevance, or introduce DNA encoding fluorescent marker proteins
into the end of your protein of interest to see where it goes and what it’s up
to. Genome editing elevates researchers from the level of pure observers into
direct manipulators of a system. </div>
<div class="MsoNormal" style="text-align: justify;">
</div>
<a name='more'></a><br />
<br />
<div class="MsoNormal" style="text-align: justify;">
<b>Sounds great, what’s the catch?<o:p></o:p></b></div>
<div class="MsoNormal" style="text-align: justify;">
<b><br /></b></div>
<div class="MsoNormal" style="text-align: justify;">
The problem with genome editing
until the very recent past was that it was extremely labour intensive and
time-consuming. Possibly the simplest form of genome editing is to delete a
gene from a cell. Had I wanted to permanently delete a gene from a cell just a
few years ago, I would typically have used a form of ‘<a href="https://en.wikipedia.org/wiki/Site-specific_recombination">site-specific recombinase</a>’ technology that makes use of the natural process of DNA
recombination to disrupt a specific gene. This is a long process and is only
effective for use in whole organisms as recombination only occurs during the
production of sperm and egg cells, not in normal dividing cells. So, I would
need to perform my site-specific recombination in a mouse egg then implant it
in a mouse and hope that I could get my ‘knock-out’ (as such organisms are
known) mouse at the end. This has several obvious limitations: it’s a long
process, it’s not particularly efficient, it involves the use of live animals, it
can’t be done for human cells, it still means I have to harvest my desired
cells from the mouse at the end, and it may not work at all if the gene
deletion that I’ve made somehow messes up the development of the mouse beyond
the foetal stage.</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
<b>Bringing gene targeting to the masses<o:p></o:p></b></div>
<div class="MsoNormal" style="text-align: justify;">
<b><br /></b></div>
<div class="MsoNormal" style="text-align: justify;">
What was needed was a way to
target genes directly within cells that you already had in your incubator,
without the need to go around the houses making transgenic mice. Some technologies were successfully developed
that did achieve this goal, such as <a href="https://en.wikipedia.org/wiki/Zinc_finger_nuclease">ZFN</a>s and <a href="https://en.wikipedia.org/wiki/Transcription_activator-like_effector_nuclease">TALEN</a>s, but these involved the
generation of a new protein each time you wanted to target a different gene.
Designing new proteins is considerably harder and less predictable than just
making a new sequence of DNA, so these approaches are only really used by
specialists in genome editing. The first shimmer of hope that non-specialist
researchers might get in on the direct manipulation of gene activity came in
the late 1990s when the technique of <a href="https://en.wikipedia.org/wiki/RNA_interference">RNA interference</a> (RNAi) was successfully developed.
RNAi doesn’t actually edit the genome of your target cell, but it does the next
best thing. DNA in the genome only mediates its effects once it is transcribed
into <a href="https://en.wikipedia.org/wiki/RNA">RNA</a> and then translated into protein. RNAi takes advantage of a natural
defence mechanism of <a href="https://en.wikipedia.org/wiki/Eukaryote">eukaryotic</a> cells against viruses to trick the cell into
thinking that some of its own RNA comes from a virus. This means that the cell
will specifically destroy that RNA, and so the DNA from which it was
transcribed is effectively inactive. RNAi is simple to design as it only
involves inserting into your cells some double-stranded RNA with the same
sequence that you want to disrupt, rather than designing a new protein every
time. You can make the effects permanent by inserting DNA into the genome of
the cell that encodes the specific double-stranded RNA so there’s no need to
keep adding more, the cell does it for you. RNAi was a sensation and was
rapidly adopted by researchers all across life sciences, earning its inventors
the 2006 Nobel Prize in Medicine.</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
Revolutionary though RNAi
undoubtedly was, it still came with a long list of caveats. Firstly, it wasn’t
a true ‘knock-out’ – some of the target RNA would always make it through
unscathed and so there would still be low levels of the protein you wanted to
get rid of. For this reason, RNAi-treated cells are typically termed
‘knock-down’ rather than ‘knock-out’. Secondly, it could have off-target
effects since genes with similar sequences to the one you wanted to disrupt may
also be affected. Thirdly, it was still difficult to make the knock-down truly
permanent – cells would often slowly recover expression of the target protein. Fourth,
it was only good for knocking out gene function – inserting selective mutations
and other more subtle editing was not possible. Finally, it proved very tricky
to do this in whole organisms. It was great for simple organisms like yeast or
even larger model animals like nematodes and fruit flies, but putting it into
mammals proved to be a significant hurdle.
</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
<b>Going beyond RNAi<o:p></o:p></b></div>
<div class="MsoNormal" style="text-align: justify;">
<b><br /></b></div>
<div class="MsoNormal" style="text-align: justify;">
The ideal approach would be one
that had the ease of use of RNAi but that would deliver an absolute, permanent change
to the genome, could be used to engineer whole organisms, and would allow
insertions and mutations as well as basic deletions. Given my enthusiastic
introduction to this post you’ve probably guessed that such an approach has, at
last, arrived. It’s informally known as <a href="https://en.wikipedia.org/wiki/CRISPR">CRISPR</a> (pronounced ‘<i>crisper</i>’), which stands for clustered
regularly interspaced short palindromic repeats, but the full name for the technology is
the CRISPR/Cas system. </div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
CRISPR is, in my humble opinion, fantastic!
It works by using a guide RNA sequence (which you customise to target your DNA
sequence of interest) to direct a bacterial protein (Cas9) to cut the genome
only at a specific point in your target gene. The cell then tries to repair it
in one of two ways: <a href="https://en.wikipedia.org/wiki/Non-homologous_end_joining">non-homologous end-joining</a> (NHEJ) or <a href="https://en.wikipedia.org/wiki/Homology_directed_repair">homology-directed repair</a> (HDR). If you want to knock out a gene, you let NHEJ do its work. If
this correctly repairs the cut then the Cas9 can just cut it again, over and
over until it makes a mistake, which it will eventually. Once the mistake is
made, your gene will be ruined and its sequence will no longer match the guide
RNA, so it will stop getting cut by the Cas9. For more subtle modifications, you
can insert a separate segment of DNA into the cell, which will then be used in
HDR as a template for repairing the cut DNA. The repaired section will then
contain the sequence that you included in the extra DNA, which could be a
mutation, a tag, or whatever else.</div>
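<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
To give a feel for the ‘customise a guide RNA’ step, here is a toy sketch (illustrative only, not a real design tool). For the most commonly used version of Cas9 (from <i>Streptococcus pyogenes</i>), the target sequence has to sit immediately next to an ‘NGG’ motif (the PAM), and the cut lands roughly three bases upstream of that PAM, so a few lines of code are enough to list candidate target sites in a stretch of DNA:</div>
<pre>
import re

def candidate_sites(dna, guide_length=20):
    """Toy search for Cas9 target sites: a 20-base protospacer immediately
    followed by an NGG PAM; the cut falls about 3 bases before the PAM."""
    dna = dna.upper()
    sites = []
    for match in re.finditer(r"(?=([ACGT]{%d})([ACGT]GG))" % guide_length, dna):
        protospacer, pam = match.group(1), match.group(2)
        cut_index = match.start() + guide_length - 3
        sites.append((protospacer, pam, cut_index))
    return sites

# made-up example sequence, purely for demonstration
example = "TTACCGGATCTGATCAGCTAGCTAGGAAGCTTACGATCGATCGGTAGCTAGCGG"
for protospacer, pam, cut in candidate_sites(example):
    print(protospacer, pam, "cut before index", cut)
</pre>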
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4PRAPE07wPGvhW9zbEcP8yRhZ1RSGf7Epq1mbazy2ncNvgkP4CSOGdtzLXhGAGvqvAbUcMXrNuNWjjdTQJFFbMoZGIkcoo0MxCzGh-fJvCqCEa7ZjUWgQhuGSUq3uTt9q6BTACk8P_bg/s1600/crispr_3.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="207" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4PRAPE07wPGvhW9zbEcP8yRhZ1RSGf7Epq1mbazy2ncNvgkP4CSOGdtzLXhGAGvqvAbUcMXrNuNWjjdTQJFFbMoZGIkcoo0MxCzGh-fJvCqCEa7ZjUWgQhuGSUq3uTt9q6BTACk8P_bg/s400/crispr_3.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #666666;">The principle of CRISPR: guide RNA (gRNA) directs the cutting <br />of a target DNA sequence to allow either NHEJ or HDR.</span></td></tr>
</tbody></table>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
Like RNAi, CRISPR takes advantage
of natural defence mechanisms to achieve its effects. The normal job of Cas9 is
to cut any foreign DNA that finds its way into bacteria, from viruses for
example, thereby helping to defend bacteria from infection. It was originally
discovered in 2007 by researchers at a Danish food company trying to make
better virus-resistant bacteria for food production, but it <a href="http://www.sciencemag.org/content/315/5819/1709">quickly became evident</a> that it could be much more significant.
It only took 5 years to go from this initial discovery to its successful use in
<a href="http://www.sciencemag.org/content/337/6096/816">genome editing in cultured human cells</a>.
Since then the field has exploded, with CRISPR being used to generate living
animals with edited genomes in a range of species including <a href="http://www.genetics.org/content/194/4/1029">flies</a>,
<a href="http://www.nature.com/nbt/journal/v31/n3/full/nbt.2501.html">fish</a>,
and <a href="http://www.sciencedirect.com/science/article/pii/S0092867413004674">mice</a>,
as well as non-viable embryos of <a href="http://www.nature.com/cr/journal/v25/n7/full/cr201564a.html">primates</a> and even <a href="http://link.springer.com/article/10.1007%2Fs13238-015-0153-5">humans</a> (anyone interested in a more detailed story of how CRISPR was developed and how
it works can find an excellent article in <i>Science</i>
<a href="http://www.sciencemag.org/content/341/6148/833.full">here</a>.</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
Alongside such more ambitious
work, CRISPR is rapidly becoming an everyday technique for researchers, myself
included. It is now straightforward to generate edited cell lines to suit your
research question, which opens up avenues of work that were, until recently,
inaccessible. This has been facilitated by the fact that the CRISPR technology
is freely and cheaply available to any lab that wants it – I bought all of the
materials I needed to start using CRISPR for just 65 US$. This is to the credit
of those who have made the greatest contribution to its development, as such an
important technology could easily have been exploited for commercial gain. The
scope of the technology has expanded rapidly, with new versions of the system
popping up every couple of months with increased efficiency or specificity for
one application or another. One exciting new option is to buy (for about 400
US$) a library of CRISPR guide RNAs that target every gene in the human genome
individually. You can then expose millions of cells to these, look for cells
exhibiting a specific effect of interest, and then sequence the genomes of
those cells to see which genes were disrupted to cause the effect. What once
would have taken years or decades to work out can now be done in a few months.
It’s remarkable.</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
<b>Where next for CRISPR?<o:p></o:p></b></div>
<div class="MsoNormal" style="text-align: justify;">
<b><br /></b></div>
<div class="MsoNormal" style="text-align: justify;">
The seemingly instantaneous
appearance, development, and adoption of CRISPR has left a lot of people in the
life sciences a little stunned and many people are still getting to grips with
the brave new world that it has opened up. It seems certain now that CRISPR
will very soon be used as routinely as many other molecular biology techniques.
The real question is what is the limit of CRISPR’s potential in the field of
clinical science, and should we impose one? Clearly, something that is able to
edit genomes has enormous potential when it comes to genetic disease. It is
already possible to use CRISPR to <a href="http://www.nature.com/articles/srep07621">alter genes in mice at the stage of fertilisation</a>,
using a technique called intracytoplasmic sperm injection that is already
widely used during IVF in humans. Soon it will be technically feasible to
correct mutations in the genes of children conceived by carriers of genetic
abnormalities. In principle this would give humanity the potential to slowly
eradicate all single-gene genetic disorders, since the correction would be
maintained in all of the descendants of the original child – a so-called
germline correction. </div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
Whilst this is an exciting
possibility, it is one that has caused significant concern within the
scientific community and beyond. It has been the general consensus for half a
century that germline editing was a step too far as the stakes (i.e. the
integrity of human genetics) are just too high. The UNESCO <a href="http://www.unesco.org/new/en/social-and-human-sciences/themes/bioethics/human-genome-and-human-rights/">Universal Declaration on the Human Genome and Human Rights</a>, for example, flatly states that germline editing could be “contrary to human dignity”. The
constancy of this viewpoint was supported in part by the fact that germline
editing was simply not possible and so fairly easy to write off, but that has
now changed and so the debate is alive again. The emergence of CRISPR as a
viable tool for germline editing has sparked a wave of cautionary remarks from
geneticists and clinicians, including from the <a href="http://www.sciencemag.org/content/348/6230/36.summary">co-developers of CRISPR</a>,
demanding that steps be taken now to prevent any viable germline intervention
in future. Indeed, some researchers have suggested that <a href="http://www.nature.com/news/don-t-edit-the-human-germ-line-1.17111">human germline cells in general should be off-limits</a> (even if not used to make a viable human) because the public and/or policy makers may not adequately distinguish between
this and non-germline editing and so lead to a harmful backlash against all
genome editing. </div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<br />
<div class="MsoNormal" style="text-align: justify;">
Personally, I don’t have any
objection to the principle of human germline editing; however, I feel that the consequences
for genomic integrity have to be much, much better understood before such
approaches could begin to be applied. The explosion in the use of CRISPR in
labs has understandably made some people nervous that the path towards germline
editing will be similarly supersonic; however, I don’t think we are in danger of
that. Everyone involved in genome editing is aware of the issues, and few areas
of molecular biology are under more public and political scrutiny than human
genetic modification, so I don’t envisage any steps being taken along that path
without extreme caution on all sides. Setting over-zealous policies into stone
at this stage could be highly detrimental to future research that would have to
labour under the yoke of rules from a more ignorant age. Having said that, we
have to acknowledge that any technology can be abused and if we do successfully
develop safe human germline editing there is probably not a lot we can do to
prevent it being used for private or political ends. Is working to develop this
technology whilst demanding that it only be used to cure disease a bit like if
original researchers into nuclear fission had naively demanded that their work
was only to be used in power plants and not bombs? If so, is it better to just
not develop it in the first place? There is possibly an argument for this,
but I don’t think so – sooner or later this work will get done by someone and I
think it is best that it is done in full public view with appropriate scrutiny
and debate. We can only have faith in humanity that it will not be abused.</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
In the meantime, though, those of us with more humble goals are enjoying the possibilities that CRISPR has opened up for us. I can honestly envision that almost every paper I publish in the future could involve CRISPR in one way or another. This means that all of molecular biology will be lifted, so the pace of research will quicken, and new drugs, therapies, or just basic understanding will be faster to develop. These are, as I say, exciting times!</div>
James Felcehttp://www.blogger.com/profile/14031758835739415241noreply@blogger.com1tag:blogger.com,1999:blog-1513704378254120283.post-49645272508439827532015-08-26T09:05:00.003-07:002015-08-27T05:34:06.698-07:00Hypothesis: The future of peer review?If I could recreate the way research results are <a href="https://en.wikipedia.org/wiki/Academic_publishing" target="_blank">quality checked and revealed to the world</a>, I would probably change almost all of what is currently done. I think the isolated scientific paper is a product of the 20th century, being imposed on the 21st purely because of inertia. A better solution would be to give a <a href="https://en.wikipedia.org/wiki/Living_document" target="_blank">"living paper"</a> to each general research project an individual researcher has. This living paper <a href="http://www.trialsjournal.com/content/16/1/151" target="_blank">can then be updated as results change/improve</a>. In such a system I would probably have ~5 living papers so far in my career, instead of ~20 old-style papers. Or, even better, would be a large wiki edited, annotated, moderated and discussed by the science community as knowledge is gained.<br />
<br />
Even if you do wish to keep "the paper" as the way science is presented, I think that the journal system, while invaluable in the 20th century, also exists in the 21st century only due to inertia. Pre-print servers like the <a href="http://arxiv.org/" target="_blank">arXiv</a> are already taking care of the distribution of the papers, and the peer review, which is responsible for the quality check side of things, can (<a href="https://telescoper.wordpress.com/2014/05/10/the-open-journal-for-astrophysics-project/" target="_blank">and might?</a>) be organised collectively by the community on top of that. But why should we stick with peer review anyway? <a href="https://hypothes.is/about/" target="_blank">Could there be a better way?</a><br />
<br />
Firstly, let me stress, peer review is definitely an incredibly effective way to progress knowledge accurately and rapidly. The best ideas are the ones that withstand scrutiny. The better an idea is, the more scrutiny it can withstand. Therefore, holding every idea up to as much scrutiny as possible is the best way to proceed. However, by peer review I simply mean criticism and discussion by the rest of the scientific community. I think <a href="http://www.merriam-webster.com/dictionary/peer%20review" target="_blank">the way peer review is currently done</a>, at least what people normally mean by "peer review" is very nearly worthless (and when you factor in the time required to review and respond to review, as well as the money spent facilitating it I'd be tempted to claim that it has a negative impact on research overall). The real peer review is what happens in informal discussions: via emails, at conferences, over coffee, in the corridor, on facebook, in other papers, etc. The main benefit the current method of peer review has is simply that the <i>threat</i> of peer review forces people to work harder to write good papers. If you removed that threat, without replacing it with something else, then over time people would get lazy and paper quality would degrade, probably quite a lot.<br />
<br />
But that would only happen if the 20th century form of peer review was removed without replacing it with something from the 21st century. I wrote above that the real form of peer review happens through conversations at conferences, in emails, etc. The rapid access to papers that we get now makes this possible. In the early-to-mid 20th century, because the (expensive) telephone was the only way to rapidly communicate with anyone outside your own institute, word of mouth would spread slowly. Therefore some a priori tick of approval was needed to confirm the quality of a paper before it was distributed; hence peer review. But now communication can and does happen much more rapidly. Today, if a paper in your field is good, people talk about it. It gets discussed in emails amongst collaborators, the discussion spreads to departmental journal clubs, and information about the quality of the paper is disseminated that way. It's worth emphasising that, at least in high energy physics and cosmology, this often happens long <i>before</i> the paper is technically "published" via the slow, conventional peer review.<br />
<br />
However, this information probably still doesn't disseminate as widely or as quickly as might be ideal, given the tools of the web today. What would be ideal is to find a way for the discussions that do happen to be immediately visible somewhere. For example, what if, instead of having an anonymous reviewer write a review that only the paper's authors and journal editor ever sees, there was a facility for public review (either anonymous or not), visible at the same site where the paper exists, where the authors' replies are also visible, and where other interested people can add their views? The threat of peer review would still be there. If a paper was not written with care, people could add this in a review. This review would remain unless or until the paper was revised. Moreover, negative reviews that would hold up a paper could also be publicly seen. Then, if a reviewer makes unfair criticisms, or misunderstands a paper, the authors could make this clear and the <i>readers</i> can judge who is correct. Or, even better, the <i>readers</i> can add to the discussion and perhaps enlighten both the authors and the reviewer (with words that all other readers can see)!<br />
<br />
<a name='more'></a>One way to achieve this would be to add comments/annotations to the arXiv. For various reasons, the people at arXiv are reluctant to do this. I can empathise with that. ArXiv is probably one of the best things to have happened to the high energy and astrophysics communities (who use it the most) because it gives access to any paper as soon as it is submitted, without charging the reader or delaying the access in any way. I am happy that they want to focus on being able to continue to provide this service well.<br />
<br />
But this doesn't mean that the service isn't desired. And, now, in fact, something very near to this does exist. It is called <a href="https://hypothes.is/" target="_blank">Hypothesis</a> and it is a web annotation tool. Essentially, it is <a href="https://chrome.google.com/webstore/detail/hypothesis-web-pdf-annota/bjfhmglciegochdpefhhlphglcehbmek" target="_blank">a browser plugin</a> that allows you to read and write annotations anywhere on the web. So, if you have the plugin installed and write an annotation on a paper at the arXiv, then I can read it (<a href="http://arxiv.org/abs/1408.4720" target="_blank">or vice versa</a> - note that you do need the plugin installed to see annotations). It seems to work very well.<br />
<br />
Unfortunately, I don't think that in its <i>current</i> form Hypothesis could replace peer review, even if the inertia problem could be overcome. Such a system would need a critical mass of people using it before it became effective. At present, an annotation is either visible to just the author, or to anyone. If annotations could be restricted to sub-groups, then people would be more inclined to write annotations. Then, the particular annotations that the group (e.g. a research group at a university) finds most useful could be made more publicly available, if desired. Also, the ability to be notified (e.g. via email) whenever annotations are written on specific webpages or websites would be needed. At present I can only see the option to be notified when someone replies to my own annotation. This means that if an annotation is written on a paper at the arXiv, then nobody else knows until someone specifically chooses to look at that paper, meaning most annotations will lie unread for a long time; a time unbounded from above.<br />
<br />
Once such features are in place I think it would provide a good working model for a 21st century peer review system. Unfortunately the inertia behind the 20th century system is so large that I don't hold out a huge amount of hope that change will occur (people will do what the funding requires and so long as <i>some</i> major funding sources judge based on "published papers" people will submit to journals). Such a change might therefore require top-down policy change by funding agencies themselves.<br />
<br />
In any case, some (20th century) scientists don't even think we would benefit from doing away with 20th century peer review! So opposition is from more than just inertia. Still, you should install the plugin and have a play with its features.<br />
<br />
What are your thoughts? Is the current peer review system sub-optimal, would an annotation system be better? If not, why not?<br />
<br />
Twitter: <a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a>Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com26tag:blogger.com,1999:blog-1513704378254120283.post-70633250238325299162015-07-01T10:39:00.000-07:002015-08-26T09:45:24.760-07:00Cosmological BackreactionIn the last few weeks <a href="http://arxiv.org/abs/1506.06452" target="_blank">a disagreement</a> <a href="http://arxiv.org/abs/1505.07800" target="_blank">has surfaced</a> at the <a href="https://en.wikipedia.org/wiki/ArXiv" target="_blank">arXiv</a>. The disagreement concerns whether <i>backreaction</i> is important in cosmology.<br />
<br />
To summarise <a href="http://trenchesofdiscovery.blogspot.co.uk/2015/07/cosmological-backreaction.html?showComment=1435772406237#c6178801855588297892" target="_blank">my take on the whole thing</a>, it seems to me that the two sides of this disagreement are, to a large extent, talking past each other. I don't doubt that there is genuine disagreement where definitions overlap, but, at least to my present understanding, much of the disagreement actually just lies in what should be considered "backreaction". There seems to be a secondary, though related, disagreement concerning whether one should start with observations and use them to methodically construct a model of the universe, or instead start with a model of the universe and then see whether it fits the data. The side that favours first constructing the model would say that a model without any backreaction is entirely self-consistent and fits the data well enough not to be concerned. To the other side this still doesn't prove that backreaction <i>must</i> be negligible.<br />
<br />
<b>But OK, what is cosmological backreaction?</b><br />
<br />
Backreaction itself is quite a common term in physical sciences.<br />
<br />
In a surprising proportion of calculations about nature we analyse some sort of interesting object, existing within some larger external system, in a scenario where the behaviour of the <i>object</i> has no measurable influence on the overall <i>system</i>. Then, calculating predictions essentially amounts to two independent steps: firstly, calculating what the background system is doing, and then calculating how the interesting object will react to that.<br />
<br />
However, this type of scenario isn't always accurate. When it isn't, the background system could be described as "backreacting" to the object's behaviour.<br />
<br />
<a name='more'></a>Backreaction effects often make calculations much more difficult. Essentially, you can't determine what the object will do until you know what the background is doing, but with backreaction you don't know what the system is doing until you know what the object is doing.<br />
<br />
With <a href="http://arxiv.org/abs/1003.3026" target="_blank">cosmological backreaction</a> the interesting objects are the structures in the universe. These are the things we can observe and are the things we can then use to learn about the universe as a whole. If backreaction doesn't exist, then we can happily calculate what we expect for the average behaviour of the universe and see whether the structures we see match that prediction. If backreaction does exist, we can't, at least not so easily.<br />
<br />
<b>Well then, is it important?</b><br />
<br />
Most of the cosmology community would, with varying degrees of confidence, predict that, up to the level of accuracy to which we have currently measured the universe, the formation of structures does not affect the average behaviour of the universe. The reasons why this belief is prevalent might vary from person to person. To me, by far the most convincing one is that there is a model for the average behaviour of the universe that fits observations very well and assumes any backreaction is small enough to be ignored. <a href="https://en.wikipedia.org/wiki/Lambda-CDM_model" target="_blank">This model</a> is the <a href="https://en.wikipedia.org/wiki/Friedmann%E2%80%93Lema%C3%AEtre%E2%80%93Robertson%E2%80%93Walker_metric" target="_blank">FLRW metric</a> with <a href="https://en.wikipedia.org/wiki/Cold_dark_matter" target="_blank">cold dark matter</a> and <a href="https://en.wikipedia.org/wiki/Cosmological_constant" target="_blank">a cosmological constant</a>.<br />
<br />
This isn't a particularly satisfying reason though. The behaviour of the universe, on the scales relevant to the formation of structures, and larger, is described by <a href="https://en.wikipedia.org/wiki/General_relativity" target="_blank">general relativity</a>. This is a complete, deterministic theory. Surely, one can just calculate how big the backreaction is and <i>know</i> whether it is big or not?<br />
<br />
It turns out this isn't so simple, which is why there can be arguments about how big/relevant the effect is. The reasons are the following:<br />
<br />
* In general relativity there is a set of equations (<a href="https://en.wikipedia.org/wiki/Einstein_field_equations" target="_blank">Einstein's equations</a>) that describe what gravity is like given what matter there is and what the matter is doing.<br />
* Einstein's equations are <a href="https://en.wikipedia.org/wiki/Nonlinear_system" target="_blank">non-linear</a> - i.e., very loosely, if you double the amount of matter you don't just double the "amount" of gravity.<br />
* This non-linearity means that averaging and Einstein's equations do not <a href="https://en.wikipedia.org/wiki/Commutative_property" target="_blank">commute</a>. That is, even if we know the average distribution of matter in the universe, we cannot naively plug that average into Einstein's equations to determine what the average gravitational degrees of freedom (i.e. <a href="https://en.wikipedia.org/wiki/Metric_tensor_(general_relativity)" target="_blank">the metric</a>) are (a toy numerical example follows below).<br />
* The FLRW metric that describes the average gravitational behaviour of the universe in the standard cosmological model requires a distribution of matter that is <a href="https://en.wikipedia.org/wiki/Homogeneity_and_heterogeneity" target="_blank">homogeneous</a> and <a href="https://en.wikipedia.org/wiki/Isotropy" target="_blank">isotropic</a>. That is, the same everywhere and with no special direction.<br />
<br />
It might very well be the case that, on average, the universe is both homogeneous and isotropic. However, what makes the backreaction calculation incredibly difficult is that, on the scales where structures exist, the universe is very, very far from either.<br />
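<br />
To make the non-commuting-averages point concrete, here is a toy numerical sketch. It is emphatically <i>not</i> a general relativity calculation: the quadratic function below is just a stand-in for "some non-linear operation", and the random numbers are a made-up lumpy density field. It simply shows that applying a non-linear function to an average is not the same as averaging the non-linear function.<br />
<pre>
# Toy illustration (not GR!): for a non-linear function f, the average of
# f(density) is not the same as f(average density).
import numpy as np

rng = np.random.default_rng(1)

# A "lumpy" universe: density contrasts scattered around zero with large
# fluctuations, as on the scales where structures actually live.
delta = rng.normal(loc=0.0, scale=0.8, size=1_000_000)
rho = 1.0 + delta              # density in units of the mean density

def f(x):
    return x**2                # stand-in for a non-linear (Einstein-like) operation

print("f(average density):   ", f(rho.mean()))   # roughly 1.0
print("average of f(density):", f(rho).mean())   # roughly 1.64 -- not the same
</pre>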
<br />
<b>If the no-backreaction model works, why do people care?</b><br />
<br />
If we don't know how big it is, backreaction could in principle show up in measurements at any time.<br />
<br />
If, tomorrow, a significant anomaly showed up that didn't go away and became more and more significant as similar measurements were made, then everybody with their own pet dark matter or dark energy model would jump on it. Some of these pet models would fit the anomaly well. If that "anomaly" were just a consequence of backreaction, we could then be faced with a situation where some new modified gravity or dark matter model is crowned when all we've done is measure a subtle effect of general relativity.<br />
<br />
In fact, some people would argue that this has already happened. In the late 1990s such an anomaly was measured. <a href="http://www.nobelprize.org/nobel_prizes/physics/laureates/2011/" target="_blank">Supernovae seemed dimmer than they should be</a>. The missing thing that was needed to explain this was labelled "dark energy". The model that has now become the standard cosmological model introduced a cosmological constant to the gravitational side of Einstein's equations. It so happened that this model was simple, and it has survived and fits the data well. But, at least for a while, there was a lot of speculation that the apparent acceleration could be due to backreaction.<br />
<br />
<a href="http://arxiv.org/abs/1112.5335" target="_blank">There is still some speculation</a> that dark energy might just be backreaction but that particular possibility seems very unlikely, at least to me, in 2015. Having said that, it hasn't been absolutely proven to be incorrect and just because right now I would require pretty long odds before betting on it, doesn't mean future evidence might show it to be true.<br />
<br />
Or someone might conclusively rule it out tomorrow.<br />
<br />
<i>I'll try to elaborate on this (all) some more in future posts...</i><br />
<i><br /></i>
Twitter: <a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a>Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com7tag:blogger.com,1999:blog-1513704378254120283.post-71415930452098020052015-04-29T10:59:00.001-07:002015-04-29T11:30:31.850-07:00Mysterious news stories about supervoidsEarly last week <a href="https://www.ras.org.uk/news-and-press/2616-cold-spot-suggests-largest-structure-in-universe-a-supervoid-1-3-billion-light-years-across" target="_blank">a news story broke</a> about a supervoid. The supervoid was claimed to be a number of things, from an explanation for "<a href="http://en.wikipedia.org/wiki/CMB_cold_spot" target="_blank">the cold spot</a>", to the biggest "structure" yet found in the universe, to just "<a href="http://www.telegraph.co.uk/news/science/space/11550868/Giant-mysterious-empty-hole-found-in-universe.html" target="_blank">mysterious</a>".<br />
<br />
Whether it is a structure or not depends entirely on how you define structure, so I won't discuss that here. However, if you do allow it to be a structure, it isn't the biggest structure yet found. It's hard to do a like-for-like comparison with other "superstructures", but there are regions of the universe where the density of observable matter is smaller over a wider region, so by any definition I can think of, this structure has been beaten.<br />
<br />
The cold spot is a region in the <a href="http://trenchesofdiscovery.blogspot.co.uk/2011/10/smoking-cmb-evidence-of-big-bang.html" target="_blank">cosmic microwave background</a> (CMB) that has a temperature profile that is somewhat unexpected (<a href="http://trenchesofdiscovery.blogspot.co.uk/2014/08/the-cold-spot-is-not-particularly-cold.html" target="_blank">due to a combination of a cold central spot and a hot ring around it</a>). Whether this void could be the explanation of the cold spot has been explained in <a href="http://arxiv.org/abs/1408.4720" target="_blank">this paper</a> and <a href="http://blankonthemap.blogspot.co.uk/2014/08/a-supervoid-cannot-explain-cold-spot.html" target="_blank">this blog post by Sesh</a>. It can't, not without a significant deviation from General Relativity (and a sufficiently big deviation that it would be very strange that these deviations haven't been seen elsewhere). It's worth stressing right now that it isn't the coldness of the cold spot that is itself anomalous. This is a subtle point so just about anyone who says "the cold spot is too cold" can be forgiven for the mistake, but in reality the cold spot <i>isn't</i> too cold. In fact it has more or less exactly the coldness expected of the coldest spot in the CMB. What isn't expected is that there will be a hot ring around such a cold spot. Actually, it's worth stressing further that it isn't even the hot ring that is, by itself, anomalous. Such a hot ring is <i>also</i> quite likely in the CMB. The anomalousness of the cold spot is caused by the fact that <i>both</i> of these features are present, right next to each other. I explained this curiosity in <a href="http://trenchesofdiscovery.blogspot.co.uk/2014/08/the-cold-spot-is-not-particularly-cold.html" target="_blank">this blog entry</a>, but it is worth repeating.<br />
<br />
I want to address now quickly the claim that this supervoid is mysterious. The quantitative source for the claim that the void is mysterious comes from the claim in <a href="http://arxiv.org/abs/1405.1566" target="_blank">the paper about the void</a> that it is "at least a \(3.3 \sigma\) fluctuation" and that "\(p=0.007\) ... characterizing the cosmic rarity of the supervoid". However (and this is the crucial point) what these numbers quantify is the probability that something as extreme as this void could exist at a<i> random </i>point of the universe (or, more precisely, a random point within the part of the universe seen by a particular observational survey). What these numbers <i>do not</i> quantify is the probability that the whole survey could have seen something this extreme. These are two separate statistical things and the relevant one for claiming mysteriousness is the second one. I'll try to estimate this probability.<br />
<br />
I don't have any reason to doubt the numbers they quote for the probability that this void could exist at a random line of sight in the survey. If I use the quoted radius, density contrast and redshift of the void, I also calculate it to be a \(\sim 3\sigma\) fluctuation in the matter field. This can be done by first calculating the root-mean-square of the density (contrast) field of the universe when it is smoothed over a particular radius. This quantity, "\(\sigma_R\)", is commonly used in large scale structure. Then, the ratio of the density contrast of the observed void to the \(\sigma_R\) value for the radius of the void gives \(\sim 3.5\), so I trust that the more sophisticated analyses in the paper are correct, or at least aren't obtaining wildly wrong answers. If one assumes (probably validly) that the large scale density field of the universe has a Gaussian distribution, this can be translated into a probability that the observed fluctuation could occur <i>at any random position in the universe.</i><br />
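<br />
For reference, the \(\sigma_R\) appearing in this argument is the completely standard smoothed root-mean-square fluctuation of the density contrast (this is the textbook definition, not something special to the void paper): given the matter power spectrum \(P(k)\) and a spherical top-hat window of radius \(R\), \[\sigma_R^2 = \frac{1}{2\pi^2}\int_0^\infty k^2\, P(k)\, \tilde{W}^2(kR)\, {\rm d}k, \qquad \tilde{W}(x) = \frac{3\left(\sin x - x\cos x\right)}{x^3},\] and the \(\sim 3.5\) quoted above is then just the ratio \(|\delta_{\rm void}|/\sigma_R\).<br />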
<br />
So, the crucial question that now needs to be asked before calling this supervoid mysterious is whether the survey used to find it saw enough of the universe to witness this rare an event. The size of the void in the sky is approximately \(10\) degrees (as quoted in their abstract). This means it has an area of approximately \(100\) square degrees on the sky. The void was found using data from the WISE and 2MASS <i>all-sky</i> surveys. However the whole sky isn't usable for robust analysis due to foregrounds, the galaxy, etc. Thankfully for our goal, the authors of the supervoid paper also wrote <a href="http://arxiv.org/abs/1401.0156" target="_blank">a paper about the catalogue of galaxies they used to find the supervoid</a> and in the abstract of that paper they estimate that their catalogue covers 21,200 square degrees of the sky.<br />
<br />
What does this mean when we pull it all together? Well, the catalogue used to find the 100 square degree thing covered 21,200 square degrees of the sky. Therefore, there were \(\sim 21200/100 \simeq 200\) independent \(100\) square degree patches of the sky seen by the survey. Using their own probability of \(p=0.007\) for this void existing at any particular line of sight, this gives a very approximate estimate of the expected number of under-dense regions of the universe <i>at least as extreme</i> as the "mysterious" supervoid. The answer is \(N \sim 200*0.007 = 1.4\).<br />
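<br />
Here is that back-of-the-envelope estimate as a few lines of Python, using only the numbers quoted above (the void's \(\sim100\) square degrees on the sky, the catalogue's 21,200 square degrees of coverage, and the paper's \(p=0.007\) per line of sight). Keeping 212 patches rather than rounding down to 200 gives \(\approx 1.5\) rather than 1.4; either way, the expected number is of order one.<br />
<pre>
# Back-of-the-envelope estimate of how many voids at least this extreme the
# survey "should" have seen, using only the numbers quoted in the post.
void_area_sq_deg = 10.0**2       # the void spans ~10 degrees, so ~100 sq deg
survey_area_sq_deg = 21200.0     # sky coverage of the WISE-2MASS catalogue
p_per_line_of_sight = 0.007      # probability quoted in the supervoid paper

n_independent_patches = survey_area_sq_deg / void_area_sq_deg      # ~212
expected_extreme_voids = n_independent_patches * p_per_line_of_sight

print("independent patches:", round(n_independent_patches))                       # ~212
print("expected number of voids this extreme:", round(expected_extreme_voids, 1)) # ~1.5
</pre>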
<br />
So, not only is the supervoid not actually mysterious, it is in fact more or less <i>exactly </i>in line with naive expectations!<br />
<br />
Twitter: <a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a><br />
Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com0tag:blogger.com,1999:blog-1513704378254120283.post-71110243832826133652015-03-25T08:14:00.000-07:002015-03-25T08:14:38.653-07:00The science of three-parent children<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbWFhctMWBXBp5JU3FivFX1_CWdt6vqAAomgKZ05U9T9P1KRlJp1vnc8cfjAI1FKsSNBY3eSli23RvgxQUnwzDr99p5TrIKBZhaGbO4-Lg1uy8a6MQlYpJXpk2JUxW0zTSP_YMq5kvvVk/s1600/Male-egg-and-female-sperms.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbWFhctMWBXBp5JU3FivFX1_CWdt6vqAAomgKZ05U9T9P1KRlJp1vnc8cfjAI1FKsSNBY3eSli23RvgxQUnwzDr99p5TrIKBZhaGbO4-Lg1uy8a6MQlYpJXpk2JUxW0zTSP_YMq5kvvVk/s1600/Male-egg-and-female-sperms.jpg" height="240" width="320" /></a></div>
<br />
<br />
<div style="text-align: justify;">
2015 has already been a significant year in the field of human medicine as February saw the UK become the <a href="http://www.theguardian.com/science/2015/feb/03/mps-vote-favour-three-person-embryo-law">first country in the world to legalise</a> the generation of so-called 'three-parent' children. This marks a milestone for preventative genetics and embryology and offers hope to many people around the UK and beyond who would be unable to have healthy children otherwise. The votes to bring this into law were fairly comfortably won by those in favour - 382 vs 128 in the House of Commons (the lower house) and 280 vs 48 in the House of Lords (the upper house) - however there have been a number of vocal opponents to the measure. In this post I hope to explain just what the process involves, and why it is considered necessary by the majority of British MPs.</div>
<br />
<b>A cellular energy crisis</b><br />
<b><br /></b>
<br />
<div style="text-align: justify;">
<a href="http://en.wikipedia.org/wiki/Mitochondrion">Mitochondria</a>, as you may recall from a <a href="http://trenchesofdiscovery.blogspot.co.uk/2013/10/the-human-machine-non-standard.html">previous post</a>, are the powerhouses of our cells. They metabolise a range of molecules derived from food at use them to generate energy in the form of another molecule, <a href="http://en.wikipedia.org/wiki/Adenosine_triphosphate">ATP</a>. You would not last long without them - just try holding your breath for a few minutes, since anaerobic respiration is all a cell without mitochondria would be able to manage. It is not surprising, therefore, that problems with mitochondrial function can be fairly nasty. <a href="http://en.wikipedia.org/wiki/Mitochondrial_disease">Mitochondrial diseases</a> are a range of genetic disorders in which the proper role of the mitochondria is disrupted due to mutations in one of the genes responsible for making mitochondrial proteins. These diseases never completely knock out mitochondrial function (since an embryo with such a disease could never survive to full development) but still cause severe symptoms in sufferers. Depending on the exact mutation, these can include <a href="http://en.wikipedia.org/wiki/Leber%27s_hereditary_optic_neuropathy">blindness</a>, <a href="http://en.wikipedia.org/wiki/Diabetes_mellitus_and_deafness">deafness</a>, <a href="http://en.wikipedia.org/wiki/Diabetes_mellitus_and_deafness">diabetes</a>, <a href="http://en.wikipedia.org/wiki/Mitochondrial_myopathy">muscle weakness</a>, <a href="http://en.wikipedia.org/wiki/Kearns%E2%80%93Sayre_syndrome">cardiac problems</a>, and <a href="http://en.wikipedia.org/wiki/Neuropathy,_ataxia,_and_retinitis_pigmentosa">problems with the central nervous system</a>. Prognoses vary from one disorder to the next, but they invariably shorten lifespan, often severely. Sufferers of <a href="http://en.wikipedia.org/wiki/Leigh%27s_disease">Leigh's disease</a>, for example, rarely live past 7 years of age, and spend their short lives experiencing <a href="http://en.wikipedia.org/wiki/Hypotonia">muscle weakness</a>, <a href="http://en.wikipedia.org/wiki/Ataxia">lack of control over movement</a> (particularly <a href="http://en.wikipedia.org/wiki/Ophthalmoparesis">of the eyes</a>), vomiting, diarrhea, an <a href="http://en.wikipedia.org/wiki/Dysphagia">inability to swallow</a>, and <a href="http://en.wikipedia.org/wiki/Hypertrophic_cardiomyopathy">heart problems</a>, among others. </div>
<br />
<a name='more'></a><br />
<br />
<div style="text-align: justify;">
What makes mitochondrial diseases unique is that they can occur without the need for mutation in the nuclear DNA, <i>i.e.</i> the DNA stored in the nucleus of the cell and what we typically consider our genome. This is because <a href="http://en.wikipedia.org/wiki/Mitochondrial_DNA">mitochondria have genomes</a> of their own - a remnant of the fact that mitochondria were, once upon a time, independent bacteria that subsequently formed a symbiotic relationship with our unicellular ancestors (more on this <a href="http://trenchesofdiscovery.blogspot.co.uk/2013/10/the-human-machine-non-standard.html">here</a> if you're interested). The mitochondrial genome is small, containing only 37 genes compared to the 25,000 or so in the nuclear DNA, but these genes include some of the most vital for mitochondrial function. In many cases of mitochondrial disease, it is a mutation in one of these genes that is the problem.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<b>An unwanted inheritance </b></div>
<br />
<div style="text-align: justify;">
A quirk of the separate mitochondrial genome is that you only inherit it from your mother. During fertilisation, the lucky sperm fuses with the egg and thereby provides the remaining half of the nuclear genome needed to make a person. The sperm's mitochondria are, however, not tolerated in the maternal egg and are quickly marked for destruction. This, combined with the relative numbers of mitochondria in the two cells (only a few hundred in a sperm cell compared to up to a million in an egg), means that essentially every mitochondrion in the developing embryo and the person it becomes will come from the mother. In a real sense, you are more genetically similar to your mother than to your father, albeit marginally.</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjK7MRqGcrU6cL4FpPSHY5VDEu2ka6GC9H3QnoUbk0FKZCwZyNHxgXsSC_3pPeukCP9BPCGglT5OMIvURPN1A0w6W-5BjQSJ1yqQTlVAFImnGfewvcBRInCqy-cQBUk7VE-y23igbpGzc/s1600/inheritance.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto; text-align: center;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjK7MRqGcrU6cL4FpPSHY5VDEu2ka6GC9H3QnoUbk0FKZCwZyNHxgXsSC_3pPeukCP9BPCGglT5OMIvURPN1A0w6W-5BjQSJ1yqQTlVAFImnGfewvcBRInCqy-cQBUk7VE-y23igbpGzc/s1600/inheritance.jpg" height="239" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #444444;">Only maternal mitochondria survive during development. Image from the NHS.</span></td></tr>
</tbody></table>
<div style="text-align: justify;">
Because of this pattern of inheritance, men who suffer from mitochondrial diseases caused by mtDNA mutations can father children without the risk of passing the disease on to their children. Women, sadly, do not have this option. A woman suffering from a mitochondrial disease will have some mutant mitochondria and some healthy mitochondria in each cell of her body. The severity of the disease depends partly on the ratio of mutant to healthy; the more healthy mitochondria the better. During egg production, primordial germ cells divide into multiple eggs and the mitochondria are divided up amongst them as well. Some eggs will have a better mutant:healthy ratio than others, and so the severity of the disease in a child will vary depending on which egg it is that gets fertilised.</div>
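<div style="text-align: justify;">
To see how this egg-to-egg variation arises, here is a toy simulation of the idea (the numbers are invented purely for illustration and are not taken from any real patient data): each egg effectively samples its mitochondria at random from the mother's mixed pool, so the mutant fraction scatters around the mother's own fraction, with some eggs doing better and some worse.</div>
<pre>
# Toy simulation of mitochondrial inheritance: each egg samples its
# mitochondria from the mother's mixed pool, so the mutant fraction varies
# from egg to egg.  All numbers below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

mother_mutant_fraction = 0.3   # 30% of the mother's mitochondria carry the mutation
mitochondria_per_egg = 200     # effective number of mitochondria sampled per egg
number_of_eggs = 10

mutant_counts = rng.binomial(mitochondria_per_egg, mother_mutant_fraction,
                             size=number_of_eggs)
egg_mutant_fractions = mutant_counts / mitochondria_per_egg

print(egg_mutant_fractions)    # values scattered around 0.3: some eggs better, some worse
</pre>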
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdVC_sPwRQ7NOYvyPMfMQy7jLXDAXwcPpC26iDgSBOUa1bnmASmW-ZwKXQ7SC5gBlZbRqiCixCP2a_uIuRg1JTJpinovmslg81QJUd2LAvUWrdKb5XYB71BCs1AdEZGjmDR_408U77Z24/s1600/division.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdVC_sPwRQ7NOYvyPMfMQy7jLXDAXwcPpC26iDgSBOUa1bnmASmW-ZwKXQ7SC5gBlZbRqiCixCP2a_uIuRg1JTJpinovmslg81QJUd2LAvUWrdKb5XYB71BCs1AdEZGjmDR_408U77Z24/s1600/division.jpg" height="236" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #444444;">The severity of mitochondrial disease depends on the proportion of mutant <br />mitochondria in the fertilised egg (oocyte). Image from the NHS.</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<b>Averting the problem</b></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
Medical science has so far found few effective treatments for mitochondrial diseases; however, there is hope for female sufferers who want to have children without passing on their condition. Since mitochondria are the problem, if they can be replaced by healthy mitochondria while keeping the same nuclear DNA, then the child would still be genetically hers but without the disease. This is the principle behind 'three-parent' babies. In this process, the nucleus is extracted from one of the mother's eggs and implanted into the egg of a healthy donor that has had its own nucleus removed, essentially making a healthy egg with the mother's own genome. This can then be fertilised <i>in vitro</i> by the father's sperm and implanted into the mother's womb for gestation, as with typical IVF. The resulting child will have DNA from three people: nuclear DNA from its mother and father, and the mtDNA of the healthy egg donor. This is what makes it a 'three-parent' child, but in reality the amount of DNA from the donor is minuscule compared to that of the 'true' parents - less than 0.1%. It has been suggested that the term 'two-and-one-one-thousandth-parent child' would be more genetically apt!</div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiQTc5DZK03yWc3CbGbYCShY-AS4aJF5JkYxWsFirT_dqbm5ECNiu3ao_LJuuR9R-XtG7i7Tq4EM9upIf7_5yQmn92Y-w1OXdAwyGaOSypq8RKYytHX2xTSAyOsp7w6Z030vplskpwLZY/s1600/3-person-IVF2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiQTc5DZK03yWc3CbGbYCShY-AS4aJF5JkYxWsFirT_dqbm5ECNiu3ao_LJuuR9R-XtG7i7Tq4EM9upIf7_5yQmn92Y-w1OXdAwyGaOSypq8RKYytHX2xTSAyOsp7w6Z030vplskpwLZY/s1600/3-person-IVF2.png" height="242" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
This process of mitochondrial transfer is relatively new but the concept of children bearing genetic material from three individuals is not - in fact there are a number of people alive now who have just that. This is because a similar process of <a href="http://en.wikipedia.org/wiki/Cytoplasmic_transfer">cytoplasmic transfer</a> was already in use as a fertility treatment in the US before being effectively banned in 2001. This involved the movement of healthy cytoplasm (mitochondria and all) into the host egg, rather than movement of the nucleus, but it amounts to a similar outcome. </div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
<b>Why only the UK? </b></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
Unsurprisingly this kind of process has been highly controversial in many countries and continues to stir debate even in the UK. Some opponents simply believe that any interference with the 'natural' process of fertilisation is inherently unacceptable, though arguments such as these are usually based on religious doctrine more than genuine scientific concern. Many people, however, believe that the outcomes of generating what amounts to genetically modified humans are simply too unpredictable to risk. This concern is understandable and should certainly be subject to rigorous discussion (as, indeed, it has been in both houses of parliament); however, it is not one that I share. The argument boils down to the fact that mitochondria are not identical (even healthy ones), and also that mitochondria communicate heavily with the nuclear genome. Therefore, mixing the 'wrong' mitochondria with the 'wrong' nuclear DNA could have unforeseen consequences in either the resultant child or their future offspring. I can see where this argument is coming from, however I don't see why mixing mitochondrial and nuclear DNA through mitochondrial transfer is any different from doing it via normal fertilisation. Every time a child is conceived, half of their nuclear DNA is from an individual with different mitochondria to them, their father. Potentially, they may have very different mitochondria to them if, for example, the parents are from very different ethnic backgrounds. This is clearly not a hindrance to healthy development or genetic health down the generations, and I don't see a clear difference with mitochondrial transfer. Indeed, I would say the more generally accepted process of surrogacy is likely to have potentially greater impacts on genomic behaviour. This is because <a href="http://en.wikipedia.org/wiki/Epigenetics">epigenetic</a> processes link environmental factors to genome activity (more on this in a previous post <a href="http://trenchesofdiscovery.blogspot.co.uk/2013/04/the-human-machine-setting-dials.html">here</a> if you're interested) and this is particularly significant during gestation. The specific environment of a surrogate mother's womb (which may differ significantly from that of the biological mother for many reasons) will influence the development of an embryo in ways that may be passed on to that child's offspring just as unpredictably as with mitochondrial transfer, yet surrogacy is widely legal across the world. </div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
Much of the fear surrounding procedures such as these seems to stem from the concept that there exists a single, ideal human genome from which we can only stray at our peril. Biology is much less perfect than that - genes are mixed liberally and randomly all the time in all species, and are mutated in a similarly chaotic fashion. This is not to say that we should be slapdash with any attempts we make to direct our own genetics, but we should bear in mind that there is no such thing as <i>the</i> 'human' genome, as every individual is different and no one person is more genetically important than another. I'm hopeful for mitochondrial transfer therapy and am proud that it will be the UK leading the way in relieving the suffering of the many people living with mitochondrial diseases. </div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<b>How many parents <i>could</i> a child have?</b><br />
<br />
<div style="text-align: justify;">
Finally, thinking about this topic got me wondering about the question of how many parents a child could, in theory, have? If we say that someone has to contribute a self-contained active piece of genetic material (<i>i.e.</i> not just one or a small number of base-pairs), then we could say that each of the 25,000 or so genes in the genome could be provided by different parents. But why stop there? There are several thousand non-coding RNAs encoded by the human genome that have all sorts of important functions, each could be donated by a different parent. Moreover, genes are regulated by a wide range of regulatory elements that are encoded within the genome but vary from person to person. It's hard to put a precise number on how many of these we have, but it's probably somewhere in the region of half a million. Given that everyone has two copies of each gene, ncRNA, regulatory element <i>etc.</i>, a reasonable estimate of the number of parents a child could have where <i>every</i> parent contributed direct genetic activity is probably in the region of 1,200,000 or so. If you include the parts of the genome that don't seem to do much then that number rises to tens or hundreds of millions, but 1,200,000 seems plenty to me!</div>
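<div style="text-align: justify;">
As a sanity check on that figure, here is the tally using the round numbers quoted in the paragraph above (all of them order-of-magnitude guesses rather than precise counts):</div>
<pre>
# Rough tally of the maximum number of "parents", using the round numbers
# quoted above (all order-of-magnitude guesses, not precise counts).
protein_coding_genes = 25_000
non_coding_rnas = 5_000        # "several thousand" non-coding RNAs
regulatory_elements = 500_000  # "somewhere in the region of half a million"

elements_per_genome_copy = protein_coding_genes + non_coding_rnas + regulatory_elements
max_parents = 2 * elements_per_genome_copy    # two copies of each element

print(max_parents)             # ~1,060,000 -- the same ballpark as the 1,200,000 quoted
</pre>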
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Of course, to achieve this the child's genome would have to be synthesised to contain the matching sequence from each parent - the DNA molecules within the first cell of the embryo would never have physically been inside the parents, unlike normal fertilisation. Some people might therefore say that they don't count as parents at all, so the child could either be considered to have over a million parents or none at all. With this in mind, three parents doesn't seem like too much of a leap after all!</div>
James Felcehttp://www.blogger.com/profile/14031758835739415241noreply@blogger.com0tag:blogger.com,1999:blog-1513704378254120283.post-54301424076244165102015-02-03T16:11:00.000-08:002015-02-03T16:20:08.060-08:00Combined constraints from BICEP2, Keck, Planck and WMAP on primordial gravitational wavesThis week, <a href="http://arxiv.org/abs/1502.00612" target="_blank">the joint analysis</a> of BICEP2 (+ BICEP2's successor Keck) and Planck has finally arrived. The result is more or less what was expected, which is that <a href="http://arxiv.org/abs/1403.3985" target="_blank">what BICEP2 saw last year</a> in the B-mode polarisation signal of the CMB was not actually primordial gravitational waves (as had originally been hoped and claimed), but was unfortunately actually due to dust in the Milky Way. Such is life. Though we did of course have the best part of a year to come to grips with this reality.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhatNatTAxpfzMDRYICKsPkKAW32Rc69Os1n9UG8mEFo4T9YcooKBtLPXcAdmcrMMxvzNxdl5EYV8XAiJC0HMKnow5Z7AeZMH30LGEoslZUm0dj2iJ6rjefY9Ip2BoSlUyg-IrQ1KHMAUQ/s1600/B87qfKNIcAAxVVj.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhatNatTAxpfzMDRYICKsPkKAW32Rc69Os1n9UG8mEFo4T9YcooKBtLPXcAdmcrMMxvzNxdl5EYV8XAiJC0HMKnow5Z7AeZMH30LGEoslZUm0dj2iJ6rjefY9Ip2BoSlUyg-IrQ1KHMAUQ/s1600/B87qfKNIcAAxVVj.png" height="300" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td class="tr-caption" style="font-size: 13px;">Combined constraint on \(r\) from polarisation and temperature measurements (in blue). <a href="https://twitter.com/just_shaun/status/562644825175232513" target="_blank">Freshly digitised in the spirit of modern cosmology</a>. Gives \(r\lesssim 0.09\) at \(95\%\) confidence.</td></tr>
</tbody></table>
</td></tr>
</tbody></table>
<br />
As a result of subtracting the dust component in BICEP2/Keck's signal (obtained by comparing the measurements from BICEP2/Keck and Planck), the final constraint on the "tensor to scalar ratio" (or \(r\)) from the BICEP2/Keck measurement is that \(r<0.12\) at \(95\%\) confidence. This \(r\) parameter essentially measures the amplitude of a primordial gravitational wave signal, so the net result is that the subtraction of dust takes BICEP2's high significance measurement of non-zero \(r\) and converts it into simply an upper bound.<br />
<br />
I've seen some comments on blogs, in the media, on Twitter, etc. that there is still evidence of some sort of excess signal in BICEP2/Keck over and above the dust, but I can't see any evidence of that in any of their published results. The final likelihood for \(r\) (shown above in black) is consistent with \(r=0\) at <i>less</i> than \(1-\sigma\) (i.e. \(r=0\) is less than one standard deviation away from the maximum likelihood value). In fact, it would seem that the <em>measurement</em> of the dust obtained by comparing BICEP2/Keck's measurements with Planck's has been so good that the B-mode constraint on \(r\) from BICEP2/Keck is now competitive with (or even slightly better than) the constraint arising from temperature measurements of the CMB. This was always going to happen at some point in the future and it seems that this future has now arrived.<br />
<br />
<a name='more'></a>Of course, the obvious caveat to point out is that BICEP2/Keck not detecting primordial gravitational waves doesn't mean primordial gravitational waves aren't there. It just means that we don't yet have any evidence for them. Instead, we have a new upper bound. It might be pointing out the obvious to state that any amplitude of signal which lies beneath that upper bound is entirely consistent with the data, but maybe it's worth stressing. It's also important to stress that this lack of detection is no longer caused by not knowing enough about the dust in BICEP2 and Keck's field of vision. The dust has now been measured with comparable accuracy to BICEP2 and Keck's own measurement uncertainty and sample variance. So, the lack of detection is now because \(r<0.12\) and BICEP2/Keck just aren't yet sensitive to values below that.<br />
<br />
It's hard (for me at least) not to have sympathy for the BICEP2 crew. When they released their work last year, the best dust models to date (however robust) all predicted that the B-mode polarisation due to dust within their field of vision should be small enough not to be much of a concern. Moreover, their signal really did look like what one expects from primordial gravitational waves, with prominent features on just the right angular scale. Dust is supposed to be almost scale invariant, so one can understand why they didn't suspect it. But, in February 2015, unfortunately, we now know that the dust was bigger than expected and also that the characteristic features, once the dust signal is removed, are consistent, within \(1\sigma\), with expectations from noise and sample variance.<br />
<br />
Anyway, there are now two competing upper bounds on \(r\). There is the BICEP2/Keck+Planck B-mode constraint: \(r<0.12\). And <a href="http://arxiv.org/abs/1303.5076" target="_blank">also the Planck temperature</a> (+WMAP E-mode polarisation) constraint: \(r\lesssim0.12\). Note that primordial gravitational waves would also increase the fluctuations in the temperature (and E-mode polarisation) of the CMB and thus they can be constrained from measurements of those fluctuations too. These data sets are mostly independent and therefore one could combine the two constraints to obtain an over-all constraint on \(r\). In fact, this has apparently been done by Planck and will appear in their data release later this week (or early next), in the paper on inflation.<br />
<br />
However, why wait when the data's all out there?<br />
<br />
So, at the top of this entry is a figure showing the combined constraints on \(r\). The Planck/WMAP temperature and E-mode constraint is easy to (crudely) reproduce because the data is public. It is in red in the figure above (the y-axis is essentially the likelihood that the \(r\)-value is the true \(r\)-value). The BICEP2/Keck/Planck B-mode constraint can't yet be publicly reproduced due to the data not being public, but it can be copied from the figure(s) in their paper. This is shown in black. The combined constraint is then the curve in blue (obtained by just multiplying the lines together, which, although ignoring a number of effects, is good enough for the accuracy required in a blog entry). Clearly, larger values of \(r\) are more heavily disfavoured and thus the new \(95\%\) upper limit on \(r\) will be smaller. In fact, using this crude analysis, one can obtain:<br />
<br />
\(r\lesssim0.09\) at \(95\%\) confidence.<br />
<br />
So that's the current constraint on \(r\) obtained by combining BICEP2/Keck/Planck B-mode measurements and Planck/WMAP temperature and E-mode polarisation measurements. Planck will release their own E-mode polarisation data within a week and will have a more sophisticated version of this upper bound, but I expect the above will be right to within \(\sim10\%\) (although I do still reserve the right to be utterly wrong).<br />
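<br />
For anyone who wants to reproduce the spirit of the blue curve, the combination step really is just a multiplication on a grid of \(r\) values followed by reading off where 95% of the normalised curve's area lies. Below is a minimal sketch of that step; the two Gaussians are made-up placeholders standing in for the digitised Planck/WMAP and BICEP2/Keck/Planck curves, so the number it prints is not the real constraint.<br />
<pre>
# Minimal sketch of the combination step: multiply two likelihood curves
# defined on the same grid of r values, then read off the 95% upper bound.
# The two Gaussians below are placeholders, NOT the real digitised curves.
import numpy as np

r = np.linspace(0.0, 0.5, 5001)
dr = r[1] - r[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

like_temperature = gaussian(r, 0.00, 0.07)   # placeholder Planck/WMAP TT + E-mode curve
like_bmode = gaussian(r, 0.03, 0.06)         # placeholder BICEP2/Keck/Planck B-mode curve

combined = like_temperature * like_bmode
combined /= combined.sum() * dr              # normalise to unit area

cumulative = np.cumsum(combined) * dr
r_95 = r[np.searchsorted(cumulative, 0.95)]
print("95% upper limit on r (toy numbers):", round(r_95, 3))
</pre>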
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyVawHRHeLyh5ORnZN9TNJdy2D4Cph9Kn2lb2elSDp4aZOLeszo6luiAoDHZUobwCmpQT3Bkfny6f9NDKDYdh13qwJ_0NeXVxnxTqzhUc15pqgMXU1SiKzlhppIrq9mSDdQWKoHf0qGEU/s1600/B87t4g2IMAAmGPL.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyVawHRHeLyh5ORnZN9TNJdy2D4Cph9Kn2lb2elSDp4aZOLeszo6luiAoDHZUobwCmpQT3Bkfny6f9NDKDYdh13qwJ_0NeXVxnxTqzhUc15pqgMXU1SiKzlhppIrq9mSDdQWKoHf0qGEU/s1600/B87t4g2IMAAmGPL.png" height="207" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td class="tr-caption" style="font-size: 13px;">Is this the end for convex inflation models? Do we now know that we started on a hilltop? Maybe.</td></tr>
</tbody></table>
</td></tr>
</tbody></table>
<br />
Curiously, an upper bound of \(r<0.09\) puts a lot of pressure on some inflation models (and favours other, perhaps better motivated, ones). See the plot above, for example. So I expect the upcoming Planck paper on inflation will have some interesting things to say...<br />
<br />
Twitter: <span class="zim-tag"><a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a></span>Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com2tag:blogger.com,1999:blog-1513704378254120283.post-64308787042851619502014-10-22T09:22:00.000-07:002014-10-22T12:10:13.336-07:00Why is Ebola so scary?<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5Wodwv0HGDafdJK4QraRYT_mVRslAjc0YnsLgvBp99FfmKWK14gwR5JkTjwVfL5Gjrr_XTGtmXfTIDDNYm_khvxk16h41n7b-asmMut-vcY7RK2lTfC3zhBxfmVGH-Cqw9E2eoaqgfWo/s1600/28-days-later-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5Wodwv0HGDafdJK4QraRYT_mVRslAjc0YnsLgvBp99FfmKWK14gwR5JkTjwVfL5Gjrr_XTGtmXfTIDDNYm_khvxk16h41n7b-asmMut-vcY7RK2lTfC3zhBxfmVGH-Cqw9E2eoaqgfWo/s1600/28-days-later-4.png" height="172" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
Unless you've been living under a reasonably sizable rock for the last few months, it can't have escaped your attention that the world has yet another terror to throw on the mountain of things we should be scared of: Ebola. The ongoing situation in Africa is the largest Ebola outbreak in history and has seen the disease spread beyond Africa for the first time. At the time of writing this, nearly 10,000 people have become infected, almost half of whom have died. This number is growing...rapidly.</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIMm6SP0Q-1O7rxTyqif8bnneitmpJoWrpmvkIp472u0KEIYcVCCxmSSkqi_KVaqopdyG2JJmCGfXwpkjuwM9_O9aqXmxey2jV9AiO2jCHieYkSWYh6U7awOfAIusdKeA0HEfdm4HQrvA/s1600/Diseased_Ebola_2014.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIMm6SP0Q-1O7rxTyqif8bnneitmpJoWrpmvkIp472u0KEIYcVCCxmSSkqi_KVaqopdyG2JJmCGfXwpkjuwM9_O9aqXmxey2jV9AiO2jCHieYkSWYh6U7awOfAIusdKeA0HEfdm4HQrvA/s1600/Diseased_Ebola_2014.png" height="220" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #666666;">Ebola cases and deaths in the 2014 outbreak.</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: justify;">
In this post, I will describe what Ebola is, why it is so scary, and what chances we have of defeating it.</div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
<b>What is Ebola?</b></div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
'Ebola' as a biological term actually refers to a <a href="http://en.wikipedia.org/wiki/Ebolavirus">group of five viruses</a> within the <i><a href="http://en.wikipedia.org/wiki/Filoviridae">Filoviridae</a></i> family, of which four can cause the disease generally called Ebola, but more specifically known as <a href="http://en.wikipedia.org/wiki/Ebola_virus_disease">Ebola virus disease</a>. The recent outbreak has been caused by just one of these viruses, which used to be known as <a href="http://en.wikipedia.org/wiki/Ebola_virus">Zaire Ebolavirus</a>, but is now simply 'Ebola virus' given that it is the most common among humans, and Zaire no longer exists! It doesn't look a whole lot like most viruses, it has to be said - with long, tubular filaments waving around rather than the tight, spherical viruses we're used to seeing for 'flu, HIV, and most others.</div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyAyw2Q_96pSUq8-QYLVWDEEgEmMaWG_D_YTvqKfI5GHH2-j0A8iS45DgxxSximbFwjG9GfC0JRn3tmJp9IgEcyPalSeWqcxFvz7ySrghBaPLVDD7JHdPEQZXYgYotYzYMZM0GJ8vyqT0/s1600/1280px-Ebola_virus_virion.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyAyw2Q_96pSUq8-QYLVWDEEgEmMaWG_D_YTvqKfI5GHH2-j0A8iS45DgxxSximbFwjG9GfC0JRn3tmJp9IgEcyPalSeWqcxFvz7ySrghBaPLVDD7JHdPEQZXYgYotYzYMZM0GJ8vyqT0/s1600/1280px-Ebola_virus_virion.jpg" height="147" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #666666;">The Ebola virus.</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: justify;">
</div>
<a name='more'></a>Like most viruses, Ebola is simple. Frustratingly simple, in fact; as big, sophisticated organisms, we would like to think that it takes something similarly sophisticated to take us down. Ebola has a genome made up of<a href="http://en.wikipedia.org/wiki/RNA"> RNA</a>, unlike our DNA genome. This doesn't make much difference in practice, but it does mean that the virus can mutate more rapidly, as mistakes are made more frequently when replicating RNA than DNA. Influenza also has an RNA genome, which is why every year we need to come up with a whole new 'flu vaccine. If Ebola became widespread, there is a decent chance that multiple infectious strains would emerge, each requiring tailored treatments.<br />
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
<b>Why is Ebola so deadly?</b></div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
Like I say, though, it is still a simple organism. Our genome contains somewhere in the region of 3,000,000,000 nucleotides. The bacterium E. coli has ~4,600,000 nucleotides. Ebola has ~19,000. Its genome only encodes a paltry seven proteins, compared to the 5000 or so in E. coli and ~25,000 in us. And yet it is deadly. When it comes to biological warfare, complexity doesn't necessarily correlate with efficacy. Ebola's complexity is about average for a virus (HIV and influenza have genomes of ~10,000 nucleotides each, for comparison), and yet viruses are generally far deadlier diseases than the more complex bacteria that also attack us. This is because viruses are extremely streamlined replicative machines. They only have what they need to replicate, and they make that machinery extremely efficient. The key components of Ebola that make it so deadly are the proteins GP, VP24, and VP35, and they are very good at their job.</div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
The role of GP is very simple - it gets the virus into your cells. If you were to become infected with Ebola, GP proteins on its surface would attach the virus to target cells in your body and, once the particle has been taken up, bind to corresponding <a href="http://en.wikipedia.org/wiki/NPC1">NPC1</a> proteins in the membranes of the cell's endosomes. The target cells are mainly <a href="http://en.wikipedia.org/wiki/Endothelium">endothelial cells</a> lining the walls of blood vessels, but also liver cells and some cells of the immune system (specifically <a href="http://en.wikipedia.org/wiki/Macrophage">macrophages</a> and <a href="http://en.wikipedia.org/wiki/Monocyte">monocytes</a>). Binding of GP to NPC1 allows the virus to fuse with the endosomal membrane and enter your, now infected, cell. Ebola is so infectious because GP recognises and binds to NPC1 very efficiently, and this binding event is extremely effective in allowing viral entry. This is the reason for the characteristic hazmat-style suits worn by health workers in Ebola wards - it really doesn't take much exposure to the virus to become infected. If even a few tens of viruses find their way into your system you have a strong likelihood of developing the disease. For comparison, it typically requires thousands of influenza viruses, or hundreds of thousands of HIV particles, to allow infection.</div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
Once inside the cell, Ebola replicates furiously in the typical manner of viruses: by using the replicative machinery of the cell to produce more virus particles. Each new virus buds off from the infected cell taking a piece of the cell membrane with it. Ebola replicates so efficiently that infected cells rapidly become overwhelmed and die, spilling their guts into the surrounding space. This is what kills Ebola victims - the loss of blood vessel integrity caused by massive endothelial cell death means you start to bleed through pretty much every part of your body. This is the cause of the most horrific symptoms of Ebola: bleeding into the eyes, into vomit, and into other bodily fluids. This effect is made worse by the fact that dying cells release a variety of chemical signals (such as <a href="http://en.wikipedia.org/wiki/Tumor_necrosis_factor_alpha">TNFa</a>, <a href="http://en.wikipedia.org/wiki/Interleukin_6">IL-6</a>, and <a href="http://en.wikipedia.org/wiki/Interleukin_8">IL-8</a>), which normally serve to recruit pro-inflammatory and immune cells to the affected area in order to mop up any pathogens. In the case of Ebola, the tissue damage is so severe that the inflammation just causes yet more damage - hastening the demise of the poor infected individual. Ebola is uncommonly lethal among viruses, typically killing between 50% and 90% of those infected. For comparison, the Spanish Flu outbreak in 1918 had a lethality of 10-20% and that wiped as many as 100 million people off the face of the Earth. </div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
The other proteins that I mentioned, VP24 and VP35, help the virus to evade the immune system. Both proteins act on different components of the <a href="http://en.wikipedia.org/wiki/Interferon">interferon</a> response pathway, which normally promotes the expression of antiviral proteins upon infection. Its inhibition by VP24 and VP35 allows Ebola to replicate more quickly in infected cells and thereby spread further. This means that viral load in bodily fluids remains extremely high right up to the point of death. Given that the victim is usually losing these fluids from all orifices, they become extremely infectious, which is why Ebola has spread so rapidly in affected areas.</div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
<b>What hope do we have?</b></div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
Most of what I've described so far is pretty alarming. The symptoms of Ebola are terrifying and its lethality is one of the highest of all known pathogens. For that reason Ebola is an extremely scary virus if you happen to have it or be near to someone who has it. However, for the rest of us, its lethality is actually a blessing in disguise. This is because, even though Ebola is a deadly virus, it is a poor pathogen on the whole. </div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
This is simply because Ebola kills people <i>too</i> quickly and <i>too</i> commonly. Pathogens can only survive as long as their hosts are alive. That's not to say that they definitely shouldn't kill their hosts in order to be successful, just that they need to do it after the host has infected others. This ability can be expressed as a pathogen's <a href="http://en.wikipedia.org/wiki/Basic_reproduction_number">basic reproduction number</a>, which is the average number of other people that one infected individual will infect over the course of their illness. If this number is below 1 then eventually the pathogen will die out. For Ebola, it is just over 1. Other viruses have a much better balance of infectivity and lethality. Those that are highly infectious have evolved to be less lethal, meaning they produce short-lived but intense symptoms, as with influenza (with a basic reproduction number of about 3). Others have retained their lethality but reduced their severity, so that infection is ultimately fatal but is long-lived enough to allow transmission, as with HIV (BRN of about 4). Ebola is both highly lethal and highly severe, so infection is too short in many cases to allow transmission. Its only saving grace is its extreme infectivity, without which it would have died out long ago.</div>
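<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
To make the importance of the basic reproduction number concrete, here is a minimal branching-process sketch of my own (an illustration, not a serious epidemiological model): each case infects a Poisson-distributed number of new people with mean R0. Below 1 the outbreak always fizzles out; just over 1, as for Ebola, most chains of infection still die out on their own; around 3, as for influenza, large outbreaks are the norm. The R0 values below are just the rough figures quoted above.</div>
<pre>import math
import random

def poisson(lam):
    """Draw from a Poisson distribution (Knuth's algorithm), so the sketch
    needs nothing beyond the standard library."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

def outbreak_size(r0, cap=10000):
    """Simple branching process: each case infects Poisson(r0) new people.
    Returns the total number of cases before the outbreak dies out, or the
    cap if it is still growing at that point."""
    active, total = 1, 1
    while active and total < cap:
        new_cases = sum(poisson(r0) for _ in range(active))
        active, total = new_cases, total + new_cases
    return min(total, cap)

random.seed(1)
for label, r0 in [("0.8 (below 1, dies out)", 0.8),
                  ("1.1 (Ebola-like, just over 1)", 1.1),
                  ("3.0 (influenza-like)", 3.0)]:
    sizes = [outbreak_size(r0) for _ in range(100)]
    big = sum(s >= 10000 for s in sizes)
    print(f"R0 = {label}: median outbreak {sorted(sizes)[50]} cases, "
          f"{big}/100 runs still growing at 10,000 cases")
</pre>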
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
The reason Ebola is not very well adapted to life in humans is that it only started infecting us relatively recently. Normally Ebola hangs around in fruit bats, to which it is far better adapted and in which it is far less deadly. The first recorded case of human infection was only in 1976, and it's just not had long enough to adapt to us. This is bad in terms of its lethality, but good in terms of disease control and possible treatments. Because of Ebola's severity, an infected individual is only infectious to others when they are showing obvious clinical symptoms. It is therefore relatively straightforward to identify and isolate those who are infectious - a strategy that has proved highly successful in Nigeria during the recent outbreak.</div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
As far as treatments go, Ebola shouldn't be too difficult. Pathogens that are well adapted to the human immune system can be very tricky to treat because they are so well adapted to avoiding it. HIV, for example, has been the focus of an enormous research effort for over 30 years and we're still no nearer to developing a vaccine. This is because HIV is wonderfully evolved to evade our immune systems using a 'shield' of sugar molecules to hide its surface proteins away from attacking antibodies. Ebola has no such defenses. If you can survive Ebola long enough, you will eventually become immune to it. Many people who have lived through Ebola have now become serial blood donors as their antibodies are very efficient at neutralising the virus. Using this as a strategy, several Ebola therapies are being developed that basically involve intravenous administration of neutralising antibodies against the virus. The most promising treatment appears to be a cocktail of three different antibodies known as <a href="http://en.wikipedia.org/wiki/ZMapp">ZMapp</a>, which can clear Ebola from rhesus macaques with a 100% survival rate, and has good success in preliminary human trials. Crucially, ZMapp is effective even if only given after symptoms have presented, which is the major advantage over previous treatments. ZMapp is being made in tobacco plants (although the antibodies are entirely human), which gives the potential for large-scale production and hopefully means that the current Ebola outbreak will be the last.</div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
The only real possibility of Ebola becoming a global pandemic is if it evolves either a new means of transmission (becoming airborne is the real worry) or a more sophisticated method of avoiding the immune system. Neither is likely, but both are possible - particularly given its aforementioned RNA genome. The longer Ebola remains within the human population, the greater the likelihood of this happening. Many people, myself included, are hopeful that we will eradicate the virus before it has a chance to adapt. For all our sakes, let's hope we're right!</div>
<div class="separator" style="clear: both; text-align: justify;">
<b><br /></b></div>
<br />James Felcehttp://www.blogger.com/profile/14031758835739415241noreply@blogger.com4tag:blogger.com,1999:blog-1513704378254120283.post-41044289951400960852014-09-19T07:36:00.000-07:002015-08-26T09:46:06.723-07:00Comparing Planck's noise and dust to BICEP2In case anyone reading this doesn't recall, back in March an experiment known as <a href="http://bicepkeck.org/web_page_links.html" target="_blank">BICEP2</a> made a detection of something known as <a href="http://en.wikipedia.org/wiki/B-modes" target="_blank">B-mode polarisation</a> in <a href="http://trenchesofdiscovery.blogspot.co.uk/2011/10/smoking-cmb-evidence-of-big-bang.html" target="_blank">the cosmic microwave background</a> (CMB). This was big news, mostly because this B-mode polarisation signal would be a characteristic signal of primordial gravitational waves. The detection of the effects of primordial gravitational waves would itself be a wonderful discovery, but this potential discovery went even further in the wonderfulness because the likely origin of primordial gravitational waves would be a process known as inflation which is postulated to have occurred in the very, very early universe.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwONcbRB-nOJTzAJLvx8GymNGWOatRFGO6Ysf9jwL-tgSgdCG9ZF295aV5r9oVKR9et6ukEpqowYryttvsDceg575yQwjlA4MfNi1BOWr9hhN-eArjGgGLwXDs9wlDxE-xjlGHZJkmTtc/s1600/BICEP2noarrows.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="157" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwONcbRB-nOJTzAJLvx8GymNGWOatRFGO6Ysf9jwL-tgSgdCG9ZF295aV5r9oVKR9et6ukEpqowYryttvsDceg575yQwjlA4MfNi1BOWr9hhN-eArjGgGLwXDs9wlDxE-xjlGHZJkmTtc/s1600/BICEP2noarrows.gif" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The B-mode polarisation in the CMB as seen by BICEP2. Seen here for the first time in blog format without the arrows. Is it dust, or is it ripples in space-time? Don't let Occam's razor decide!</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<a href="http://trenchesofdiscovery.blogspot.co.uk/2014/03/a-major-discovery-bicep2-and-b-modes.html" target="_blank">I said at the time</a>, and would stand by this now, that if BICEP2 has detected the effects of primordial gravitational waves, then this would be the greatest discovery of the 21st century.<br />
<br />
However, about a month after BICEP2's big announcement <a href="http://resonaances.blogspot.co.uk/2014/05/is-bicep-wrong.html" target="_blank">a large crack developed</a> in the hope that they had detected the effects of primordial gravitational waves and obtained strong evidence for inflation. The problem is that polarised emission from dust in the Milky Way Galaxy can also produce a B-mode polarisation signal. Of course BICEP2 knew this and had estimated the amplitude of such a signal and found it to be much too small to explain their signal. The crack was that it seemed <a href="http://arxiv.org/abs/1405.7351" target="_blank">they had potentially under-estimated this signal</a>. Or, more precisely, it was unclear how big the signal actually is. It might be as big as the BICEP2 signal, or it might be smaller.<br />
<br />
Either way, the situation a few months ago was that the argument BICEP2 made for why this dust signal should be small was no longer convincing and more evidence was needed to determine whether the signal was due to dust, or primordial stuff.<br />
<br />
<a name='more'></a><strong>Planck</strong><br />
<br />
The best measurement of the dust signal comes from<a href="http://www.esa.int/Our_Activities/Space_Science/Planck" target="_blank"> the Planck satellite</a>. Planck doesn't measure the CMB with the same sensitivity as BICEP2, but it has measured the CMB over the whole sky and at many different frequencies. The fortunate situation is that the amplitude of a <em>dust</em> B-mode signal increases towards higher frequencies. Therefore, the hope is that, if this signal <em>is</em> due to dust, then Planck will be able to see it at these higher frequencies. In fact, it was <a href="http://arxiv.org/abs/1405.7351" target="_blank">estimates from unreleased Planck data</a> that indicated that perhaps the dust signal <em>is</em> of the same amplitude as the BICEP2 signal.<br />
<br />
The problem is that, if the BICEP2 signal is dust, the expected amplitude of the signal in Planck's higher frequency measurements is right on the verge of Planck's sensitivity. Therefore, even though Planck can tell us <i>something</i> about the likelihood that this is or isn't dust, noise is still a big issue. In the long run we need to wait for BICEP2's level of sensitivity at multiple frequencies, at which point it will be easy to tell dust from primordial stuff. In the medium run, Planck and BICEP2 are now, apparently, collaborating and will be looking, carefully, to see whether Planck's high frequency measurements look like BICEP2's low frequency measurement. If they do, that's bad news, because within BICEP2's field of vision Planck's high frequency measurements are only sensitive to dust. If they don't look similar, this doesn't necessarily mean that BICEP2 haven't measured dust, because Planck could just be noise dominated. All of these tricky subtleties are being worked out and hopefully, before the end of 2014, some sort of quantitative (though perhaps still not conclusive) statement about the probability that BICEP2 has seen dust will arise.<br />
<br />
In the meantime, in the "short run", cosmologists are going to be impatient and will try to extract as much information as they can from <em>any</em> available data. I like this attitude. I think it's a sign of a healthy curiosity and passion for knowledge. However, one should be careful about what confidence one places in any results obtained. The reason Planck and BICEP2 are taking a long time to say anything is not <em>just</em> because Planck is a large group and getting agreement takes many meetings, conference calls and emails. It is also because there are many effects that need to be taken into account and understanding each of them takes time. If one doesn't take that time, one might miss something.<br />
<br />
With that set of caveats out of the way I'll discuss <a href="http://arxiv.org/abs/1409.4491" target="_blank">this interesting paper from a few days ago</a>.<br />
<br />
<strong>Digitising pdfs, the new way to do cosmology</strong><br />
<strong><br /></strong>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvdGRe3v_fddNbggYzXb3i6j5zWAep57ZQCmLqTEQZFh2L5Ml_yOAHHSsRhXNY0Efu2pUdI-YymE5657INe68LV8b25HHCOv7OoXSI_0dlNiAoniVS_MsuaxQkLaPV2sru04XrKMT1RMY/s1600/Planck353Ghz.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="153" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvdGRe3v_fddNbggYzXb3i6j5zWAep57ZQCmLqTEQZFh2L5Ml_yOAHHSsRhXNY0Efu2pUdI-YymE5657INe68LV8b25HHCOv7OoXSI_0dlNiAoniVS_MsuaxQkLaPV2sru04XrKMT1RMY/s1600/Planck353Ghz.gif" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The B-mode polarisation in the CMB as seen by Planck. Are the features the same as in the BICEP2 plot? That's the crucial question. Well, that and whether this actually corresponds to the CMB as seen by Planck, given that it was digitised from a slide presentation in 2013. Still, not long now for the real thing (i.e. a paper from which to digitise images!)</td></tr>
</tbody></table>
<br />
Neither Planck nor BICEP2 has released their B-mode polarisation data (i.e. no file was released giving the B-mode polarisation signal associated with each line of sight analysed on the sky). Instead, they've released images of the signal on the sky, mostly in pdf format, with a colour bar indicating the signal.<br />
<br />
The sneaky thing various groups have been doing, while waiting for actual data, is to digitise these images. That is, to use the colour scale in the image and convert this to a set of signal amplitudes at the various lines of sight being analysed. In fact, even BICEP2 did this, to a Planck image, in their first manuscript. Today another group has analysed a digitised version of Planck's maps, as well as BICEP2's map.<br />
<br />
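As an aside, the digitisation step is conceptually simple, even if fiddly in practice. Here is a rough sketch of the idea (my own illustration, not the code any of these groups actually used): for each pixel, find the nearest colour in the published colour bar and assign it the corresponding signal value. The colormap and value range below are made-up stand-ins; in reality they would be read off the published figure.<br />
<br />
<pre>import numpy as np
from matplotlib import colormaps

def digitise(rgb_image, cmap_name="viridis", vmin=-0.3, vmax=0.3, n_levels=256):
    """Recover approximate values from a colour-mapped image by nearest-colour
    lookup against the colour bar. rgb_image is an (H, W, 3) array in [0, 1].
    The colormap and value range here are assumptions for illustration."""
    levels = np.linspace(vmin, vmax, n_levels)
    palette = colormaps[cmap_name](np.linspace(0, 1, n_levels))[:, :3]
    pixels = rgb_image.reshape(-1, 3)
    # Squared distance from every pixel to every colour-bar entry.
    dists = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=-1)
    return levels[dists.argmin(axis=1)].reshape(rgb_image.shape[:2])

# Round-trip sanity check: encode a fake map with the colormap, then recover
# the values from the colours alone.
true_map = np.random.default_rng(0).normal(0, 0.1, (50, 50)).clip(-0.3, 0.3)
encoded = colormaps["viridis"]((true_map + 0.3) / 0.6)[:, :, :3]
print("max recovery error:", np.abs(digitise(encoded) - true_map).max())
</pre>
<br />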
What this group did is conceptually similar to what Planck and BICEP2 are (so the rumours say) doing behind the scenes. That is, to essentially look at the two maps and measure how similar they are.<br />
<br />
I won't go into any additional details regarding how the digitising was done. It is described in the paper. The main obstacles come from a bunch of arrows on the BICEP2 image that need to be removed and replaced with estimates of the signal, and from removing the small and large scale fluctuations from the Planck image (because BICEP2 did this to their image and one needs to compare like for like). This process is a little messy and we shouldn't forget that the Planck map being used is <a href="https://www.youtube.com/watch?v=GkKtB2JwDE4" target="_blank">the same one from 2013</a> that has been used in the past and only ever appeared in a slide during a conference talk! However, without the data itself, it's the best people can do, so why not? It's better than nothing (or so I think).<br />
<br />
<strong>What they saw</strong><br />
<br />
With these digitised images they performed a number of tests. The first test basically amounts to counting the numbers of hot spots in the image that pass a certain hotness threshold and subtracting the number of cold spots colder than the equivalent threshold (the "genus statistic"). One can compare this result as a function of the threshold to what is expected from Gaussian statistics. BICEP2 (or, at least, the digitised data from BICEP2's images) appears consistent with Gaussianity under this test. The Planck data does too. At least, this is true after removing the large scales and the small scales from the image. It is worth noting that without this removal, Planck's data seems highly non-Gaussian by this test, not surprising for dust.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixCBj0BtUgzEPUOZlgNY6WTEH_BGdvXFgtive_sk_OY0onyBhnyxClT8O-CFeBdLvWqtOLZMPmIRxGg1vwInLrBmkX91FeCMgei6i7WVFivvBvCciCgwobOXBmwPk9ZuvXmXWdq6fdxw8/s1600/isBICEP2Gaussian.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="302" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixCBj0BtUgzEPUOZlgNY6WTEH_BGdvXFgtive_sk_OY0onyBhnyxClT8O-CFeBdLvWqtOLZMPmIRxGg1vwInLrBmkX91FeCMgei6i7WVFivvBvCciCgwobOXBmwPk9ZuvXmXWdq6fdxw8/s1600/isBICEP2Gaussian.gif" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The dotted line is the expectation for something that has a Gaussian distribution. The solid line is the BICEP2 data. It seems BICEP2 passes this Gaussianity test. I wonder if this rules out any inflationary models that predict a freaky strong amount of tensor non-Gaussianity? Someone should write a paper!</td></tr>
</tbody></table>
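If you want a feel for what this kind of test involves, here is a toy version (a sketch of my own, not the authors' code): smooth a Gaussian random field, threshold it at a multiple of its standard deviation, and count connected hot regions minus connected cold regions. For a genuinely Gaussian map the difference scatters around zero by symmetry.<br />
<br />
<pre>import numpy as np
from scipy import ndimage

def spot_count_statistic(field, nu):
    """Connected regions above +nu*sigma minus connected regions below
    -nu*sigma: a simplified, flat-sky stand-in for the genus statistic."""
    sigma = field.std()
    _, n_hot = ndimage.label(field > nu * sigma)
    _, n_cold = ndimage.label(field < -nu * sigma)
    return n_hot - n_cold

# A smoothed Gaussian random field should give values scattered around zero.
rng = np.random.default_rng(42)
gaussian_map = ndimage.gaussian_filter(rng.normal(size=(256, 256)), sigma=4)
for nu in (0.5, 1.0, 1.5):
    print(f"threshold {nu} sigma: hot minus cold regions =",
          spot_count_statistic(gaussian_map, nu))
</pre>
<br />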
They then compare the amplitude of the genus statistic for each experiment. Here they find that BICEP2's value is larger than Planck's. The interpretation of this is that the fluctuations BICEP2 see are more prevalent on small scales and less prevalent on large scales, compared to Planck. This is actually what one would expect if Planck was seeing noise+dust (i.e. a flatter spectrum) and BICEP2 was seeing the effects of primordial gravitational waves (i.e., a spectrum that, over the considered scales, is growing larger towards smaller scales). However, as they point out, this isn't new. One can already see this from plots of the angular power spectrum in BICEP2's own paper (i.e. the fluctuations are larger on smaller scales). Also, <a href="http://arxiv.org/abs/1405.5857" target="_blank">in an earlier paper</a> it was found that primordial gravitational waves are a marginally better fit to BICEP2 alone than dust is, if the amplitude of each is allowed to be free.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQJuDfvvCYKxaxkEBEQ266VJd3LcYyBvNw6H3SpKDbTXHxczO8eHOakO09b-jJa4i9YwkZ0W0OS9bw3o1pv2l0yPMbhVY7i1P0byakO4rRjtRp049mYqUztnuRnEl1wP54xlVeLUiDc_8/s1600/CrossCorrBicPlanck.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="154" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQJuDfvvCYKxaxkEBEQ266VJd3LcYyBvNw6H3SpKDbTXHxczO8eHOakO09b-jJa4i9YwkZ0W0OS9bw3o1pv2l0yPMbhVY7i1P0byakO4rRjtRp049mYqUztnuRnEl1wP54xlVeLUiDc_8/s1600/CrossCorrBicPlanck.gif" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The cross-correlation of the previous two images. The blue spots are where both images were blue, the red spots are where both were red, the green is where one image was blue and the other red. The white haze is where either plots was not particularly blue or red (i.e. no positive or negative correlation). I can definitely see more red/blue than green, I think. Is this enough correlation to explain all of BICEP2's measurement? Is this consistent with randomness?</td></tr>
</tbody></table>
The next test they did, that I'll discuss, is a cross-correlation of the two data sets. This essentially amounts to statistically examining whether Planck's data is showing positive and negative B-modes along the same line of sight as BICEP2's. A large cross-correlation would indicate that when Planck is positive, so is BICEP2 and when Planck negative, so is BICEP2. A value close to zero would indicate that there is no relation, when Planck is positive, BICEP2 is just as likely to be positive as negative. A negative value would be incredibly surprising and would indicate that when Planck is seeing a positive signal, BICEP2 is more often than not seeing negative (and vice versa).<br />
<br />
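In code, the basic quantity is just a normalised pixel-by-pixel product of the two maps (a toy illustration of my own, not the estimator used in the paper): +1 means the maps rise and fall together, 0 means no relation, and a negative value means anti-correlation. A shared dust pattern pushes the correlation up; independent noise pushes it towards zero.<br />
<br />
<pre>import numpy as np

def cross_correlation(map_a, map_b):
    """Normalised cross-correlation of two maps on the same pixelisation:
    +1 perfectly correlated, 0 unrelated, -1 perfectly anti-correlated."""
    a = map_a - map_a.mean()
    b = map_b - map_b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

rng = np.random.default_rng(0)
dust = rng.normal(size=(100, 100))      # stand-in for a common dust pattern
noise_a = rng.normal(size=(100, 100))   # noise of experiment A
noise_b = rng.normal(size=(100, 100))   # independent noise of experiment B

planck_like = dust + noise_a
for dust_fraction in (0.0, 0.5, 1.0):
    # The "BICEP2-like" map contains some fraction of the same dust pattern.
    bicep_like = dust_fraction * dust + noise_b
    corr = cross_correlation(planck_like, bicep_like)
    print(f"shared dust fraction {dust_fraction}: correlation = {corr:.2f}")
</pre>
<br />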
They do see a small, positive, cross-correlation. Now, remember, that what Planck is seeing is likely some combination of dust and noise. Their noise couldn't possibly correlate with BICEP2's (completely different instruments at different locations). Therefore, if there is some correlation, it will be coming from the dust. This positive correlation therefore indicates that at least <em>some </em> of BICEP2's signal is probably coming from dust. The crucial question is <em>how much?</em><br />
<br />
The answer in the paper is <em>probably not all</em>. They estimate the amount of correlation between Planck and BICEP2 that would be needed to fully account for BICEP2's signal and it is more than what they observe. This, also, isn't really particularly new. In fact, BICEP2 did a similar analysis in their original submission, using the same conference talk Planck data, and came to a similar conclusion.<br />
<br />
Anyway, after accounting for this correlation, and estimating the remaining signal in BICEP2, the obtained value for "\(r\)" (which essentially measures the amplitude of primordial gravitational waves) is \(0.1 \pm 0.04\). This is not the "\(5\sigma\)" initially claimed by BICEP2 (i.e. \(r\simeq 0.2\)), but, if everything else that led to this value can be trusted, it is still non-zero evidence for primordial gravitational waves. Curiously, this smaller value for \(r\) is actually much easier to align with Planck's temperature data and inflation (for example it would alleviate what <a href="http://trenchesofdiscovery.blogspot.co.uk/2014/03/a-new-cosmological-coincidence-problem.html" target="_blank">I called a second "cosmological coincidence problem"</a>).<br />
<br />
<strong>Where now?</strong><br />
<br />
Now, we continue waiting. This paper hasn't really said anything that hasn't already been said or wasn't already known. It has just said and shown these known things in different ways. Any day now we are to expect Planck's paper revealing the <em>non-conference-talk</em> maps of the high frequency polarisation signal along BICEP2's line of sight. These will just be images though, not raw data. The word on the street/corridor is that a fully written draft exists and has clearance to be submitted and nobody I've spoken to knows why it hasn't been. The sort of phrases I've heard about what to expect from this is that "it will clarify a lot of things", but "it won't be conclusive". The safe bet is that it will show that Planck <em>has</em> seen some dust along this line of sight and some noise and that some of BICEP2's signal is almost certainly dust, but that, for now, precisely how much isn't <em>certain</em>.<br />
<br />
When mentioning things like \(r=0.1 \pm 0.04\) to Planck people in the past they've essentially shrugged their shoulders and said something like "yeah, that's probably possible"; however, one should keep in mind that a \(2.5\sigma\) deviation of noise alone in BICEP2 would "probably be fine", so that doesn't really say much.<br />
<br />
What we really crave is a cross-correlation analysis, similar in spirit to the one in the paper discussed above, but using the actual data. With the data not being public, only BICEP2 and Planck can do this, and they are. Results from this are expected "before the end of the year" (though which year is unclear).<br />
<br />
What we really, really crave is more data, at more frequencies, with BICEP2 or better level of precision. This will also come in time.<br />
<br />
Twitter: <a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a>Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com8tag:blogger.com,1999:blog-1513704378254120283.post-21282725846567298602014-08-26T19:00:00.000-07:002014-08-27T03:11:51.027-07:00The Cold Spot is not particularly cold<i>(and it probably isn't explained by a supervoid; although it is still anomalous)</i><br />
<br />
In the <a href="http://trenchesofdiscovery.blogspot.co.uk/2011/10/smoking-cmb-evidence-of-big-bang.html" target="_blank">cosmic microwave background</a> (CMB) there is a thing that cosmologists call "The Cold Spot". However, I'm going to try to argue that its name is perhaps a little, well, wrong. This is because it isn't actually very cold. Although, it is definitely notably spotty.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhY72aVyZnq0tiffd4vX_2KGRHvBuCTEgMgdbFELVnn-6eI9tpBjayrjRi2ysMALcHZYHjRMzSEIwdLbMY8dI-iPR0HlEJwTtsKIp5QZE_HMRlMVt8kScNLKWEWU60_jgq8uQt17Ii4T_c/s1600/220px-ColdSpot.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhY72aVyZnq0tiffd4vX_2KGRHvBuCTEgMgdbFELVnn-6eI9tpBjayrjRi2ysMALcHZYHjRMzSEIwdLbMY8dI-iPR0HlEJwTtsKIp5QZE_HMRlMVt8kScNLKWEWU60_jgq8uQt17Ii4T_c/s1600/220px-ColdSpot.jpg" height="208" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">That's the cold spot. It even has <a href="http://en.wikipedia.org/wiki/CMB_cold_spot" target="_blank">its own Wikipedia page</a> (which really does need updated).</td></tr>
</tbody></table>
<br />
<b>Why care about a cold spot?</b><br />
<b><br /></b>
This spot has become a thing to cosmologists because it appears to be somewhat <em>anomalous</em>. What this means is that a spot just like this has a very low probability of occurring in a universe where the standard cosmological model is correct. Just how anomalous it is and how interesting we should find it is a subject for debate and not something I'll go into much today. There <em>are</em> a number of anomalies in the CMB, but there is <em>also</em> a lot of statistical information in the CMB, so freak events are expected to occur if you look at the data in enough different ways. This means that the anomalies could be honest-to-God signs of wonderful new physical effects, or they could just be statistical flukes. Determining which is true is very difficult because of how hard it is to quantify how many ways in which the entire cosmology community have examined their data.<br />
<br />
However, if the anomalies are signs of new physics, then we should expect two things to happen. Firstly, some candidate for the new physics should come up, which can create the observed effect <em>and</em> reproduce <em>all</em> of the <em>much greater</em> number of other measurements that fit the standard cosmological model well. If this happens, then we would look for additional ways in which the universe described by this new model differs from the standard one, and look for those effects. Secondly, as we take more data, we would expect the unlikeliness of the anomaly to increase. That is, it should become more and more anomalous.<br />
<br />
In this entry, I'm not going to be making any judgement on whether the cold spot is a statistical fluke or evidence of new physics. What I want to do is explain why, although it still is anomalous, and is definitely a spot, the cold spot isn't very cold. Then, briefly, I'll explain why, if it is evidence of new physics, that new physics isn't a supervoid.<br />
<br />
<b>So, what is the cold spot, and why is it anomalous?</b><br />
<b></b><br />
<a name='more'></a>If one wants to find isolated spots/patches in any image, it helps to reduce the effects of noise in the image by <a href="http://en.wikipedia.org/wiki/Filter_(signal_processing)" target="_blank">filtering it</a>. The idea behind this is that the filter will have a certain characteristic width; any features in the image that are notable over a size comparable to that width will remain notable after the filtering, while other features arising due to noise will be reduced.<br />
<br />
<em>The</em> cold spot was found in maps of the CMB when they were filtered with a "spherical <a href="http://en.wikipedia.org/wiki/Mexican_hat_wavelet" target="_blank">Mexican hat wavelet</a>" (<a href="http://arxiv.org/abs/astro-ph/0105111" target="_blank">SMHW</a>) filter. The motivation for using this filter is that it has a central region with a positive value and an outer region with a negative value, which makes it especially good at isolating patches of a chosen size. A strictly positive filter will remove fluctuations in an image that occur on smaller scales than the filter; however, any larger scale features will remain and could hide features that occur on the scale of the filter. By having this compensating negative region, the SMHW filter also filters out the larger scales. This happens because any larger scale feature will have the same magnitude in the central region and the outer region, and thus its total filtered signal will be close to zero. Crucially, an isolated patch will only contribute in the centre, and thus an image filtered with a compensated filter like the SMHW will isolate patches of a certain size more clearly than a non-compensated filter would.<br />
<br />
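To see why a compensated filter is so good at this, here is a flat-sky toy version of my own (an ordinary 2D Mexican hat rather than the spherical wavelet used on the real CMB): a map containing small-scale noise, a large-scale gradient and one cold spot of roughly the filter's size. After filtering, the spot stands out far above the noise, while the gradient is cancelled away.<br />
<br />
<pre>import numpy as np
from scipy.signal import fftconvolve

def mexican_hat_kernel(size, scale):
    """Flat-sky Mexican hat: positive core, negative surrounding ring,
    (numerically) zero mean, so large-scale features cancel out."""
    y, x = np.indices((size, size)) - size // 2
    r2 = (x**2 + y**2) / scale**2
    kernel = (1.0 - r2 / 2.0) * np.exp(-r2 / 2.0)
    return kernel - kernel.mean()

rng = np.random.default_rng(1)
n = 256
yy, xx = np.indices((n, n))
small_scale_noise = rng.normal(0.0, 1.0, (n, n))
large_scale_gradient = np.linspace(-5.0, 5.0, n)[None, :] * np.ones((n, 1))
cold_spot = -4.0 * np.exp(-((xx - 128)**2 + (yy - 128)**2) / (2.0 * 10.0**2))
sky = small_scale_noise + large_scale_gradient + cold_spot

filtered = fftconvolve(sky, mexican_hat_kernel(81, 10), mode="same")
print(f"filtered value at the spot centre : {filtered[128, 128]:.0f}")
print(f"typical filtered value elsewhere  : {filtered[40:90, 40:90].std():.0f}")
</pre>
<br />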
The cold spot is the coldest spot that exists in the map of the CMB, once that map has been filtered with the SMHW filter. It is interesting and anomalous because, if one makes simulated CMB maps of "typical" universes, the coldest filtered spot rarely has as cold a filtered signal as ours does. In fact, it seems like fewer than one in three hundred such coldest filtered spots would be that cold.<br />
<br />
This <em>seems</em> to indicate that something which isn't captured by our standard model has caused this spot.<br />
<br />
<b>What do these typical maps look like then?</b><br />
<b><br /></b>
While it is true that it is rare for a simulated map of the CMB to have a <em>filtered</em> cold spot that is as cold as our own, it is worth asking what the typical coldest filtered spots actually look like (i.e. <em>before </em>they're filtered). This will help to determine whether the shape of our spot is typical and it is just colder than usual, or, whether its shape is also somehow anomalous. <br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaXdNMsXItY-KMsoJUmN7P37AI66gjYll5zayz_lDX5oXSMLcGqLcPlJfmGXZHvNMMDBvJUex-Nx8yIGKKG6DpvAG2ICE9iJbooOnvYeHPURA3hHrH9a0gcCwH5gjYQ8W8mIO9QHzFm6Y/s1600/coldspotprof.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaXdNMsXItY-KMsoJUmN7P37AI66gjYll5zayz_lDX5oXSMLcGqLcPlJfmGXZHvNMMDBvJUex-Nx8yIGKKG6DpvAG2ICE9iJbooOnvYeHPURA3hHrH9a0gcCwH5gjYQ8W8mIO9QHzFm6Y/s1600/coldspotprof.png" height="201" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The limits of expected temperature profiles around cold spots. Red dashed is our cold spot's profile. It doesn't really look anomalously cold, does it? But it does have an anomalous profile!</td></tr>
</tbody></table>
I've shown exactly this in the figure above. It depicts the 1 and 2 "sigma" bands of the angular profile of the coldest filtered cold spot in simulated CMB maps. That is, at each angle, \(\sim 68\%\) of the coldest filtered spots in each simulated map had an unfiltered temperature within the inner band and \(\sim 95\%\) of the coldest spots had an unfiltered temperature within the outer band. The red, dashed curve is the unfiltered profile for the actual coldest filtered spot in the real CMB map (i.e. "The Cold Spot").<br />
<br />
This figure was produced for <a href="http://arxiv.org/abs/1408.4720" target="_blank">a recent paper I wrote</a> with Seshadri Nadathur, Mikko Lavinto and Syksy Rasanen (all based in Helsinki). When I first saw this plot, I thought we must have made a mistake. As you can see, our "anomalous" cold spot lies entirely within the bands. It seems to be entirely typical of a coldest filtered cold spot.<br />
<br />
Huh? So why claim that it is anomalous?<br />
<br />
Well, the point is that this type of profile is not how the anomalousness of the cold spot was first determined. The initial measure of its anomalousness used the <em>filtered</em> signal. Now, I'd like to point you to an interesting feature of the red curve. Although it is always within the bands, it <em>starts</em> in the lower half of the bands and <em>ends</em> in the upper half.<br />
<br />
This is crucial and is the source of the anomalousness of the cold spot. In fact, very few of the simulated coldest filtered spots will have this behaviour.<br />
<br />
<b>So, where is the anomaly?</b><br />
<b><br /></b>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqbgz_e-3q6sa_XTXAtFkXCScbiRIWiDgSNO359qvJj95pyhpJo6azUE4qZGfHaidV6YOB6oXboNxRvZVfqbVI9Hs49Uz5rGuUxDcJBF1h8iaZ7mtQZWPR6AKqKzfHEK4BgDAYB-vxfkc/s1600/whereisanomaly.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqbgz_e-3q6sa_XTXAtFkXCScbiRIWiDgSNO359qvJj95pyhpJo6azUE4qZGfHaidV6YOB6oXboNxRvZVfqbVI9Hs49Uz5rGuUxDcJBF1h8iaZ7mtQZWPR6AKqKzfHEK4BgDAYB-vxfkc/s1600/whereisanomaly.png" height="202" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The "cumulative filtered signal" (see text for description). Note that the red dashed line just keeps on decreasing, whereas the bottom bit of the bands comes back up. Cold spots don't want to have hot rings. Our's does.</td></tr>
</tbody></table>
<b><br /></b>
Now, remember the curious feature of the SMHW filter, which is that it has a positive inner region and a negative outer region. It just so happens that our cold spot fits that filter shape quite well, or at least, much better than a typical coldest filtered spot. It gains signal in <em>both</em> regions of the filter, which is what makes its total filtered signal so anomalously cold. Importantly, the simulated spots that are colder in the centre are also wider, and thus still quite cold in the compensating region of the filter; whereas those that are narrower, or have hot rings, aren't very cold in the centre.<br />
<br />
The unique thing about our cold spot is therefore that it has a <em>common </em>cold central region, with an <em>uncommon</em> hot ring around it.<br />
<br />
This is shown in the second figure above. This is the cumulative filtered signal as a function of the angle. This might be a slightly confusing figure, but I think it is incredibly illustrative once you understand what it is showing so it's worth trying to follow. To filter a point on an image you take the surroundings of the point and weight these surrounding regions by the amplitude of the filter in each region. The SMHW filter changes as a function of the angle a surrounding region is away from the point being filtered. The final filtered value at the point is the sum of all those weighted surrounding regions. The figure above shows the cumulative contribution to that final filtered signal, as a function of the angle. Essentially, you add up all the contributions to the full filtered signal which come from the region between the centre and the angle \(\theta\), and that provides the value in the figure.<br />
<br />
Why is this plot useful? Because it shows how the different angles contribute to the final filtered signal. For the red curve (again, that's our, real world cold spot), there are two regions of substantial downward trend. It is in these two regions that the final filtered signal picks up its value. The bands are the same as the bands in the previous figure. That is, the inner band shows what \(68\%\) of simulated maps would do and the outer band shows what \(96 \%\) would do. We can see that the red curve also takes two downward jumps compared to these bands. The first occurs near the centre, but it isn't until the second jump that the real world curve becomes anomalous and ends up outside the bands.<br />
<br />
If you look at this figure for a very long time you can also see that the bands widen at intermediate angles and then shrink precisely at the point where our universe becomes anomalous. This is somewhat interesting. It shows that the simulated maps that have cold values in the centre of their coldest spots actually lose signal in the outer region (i.e. the bottom of the band increases); whereas, those simulated maps that were comparatively warmer in the centre continue gaining signal at the larger angles (i.e. the top of the band decreases). However, crucially, our universe's line decreases in both regions.<br />
<br />
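For the concretely minded, here is the same idea in a few lines of toy code (a flat-sky sketch of my own, not the calculation in the paper): the cumulative filtered signal is just a running sum of filter-weighted map values out to increasing radius. A cold core surrounded by a hot ring keeps picking up (negative) filtered signal in the filter's negative outer region, whereas a plain cold spot gives some of its signal back there.<br />
<br />
<pre>import numpy as np

def cumulative_filtered_signal(field, centre, filter_profile, radii):
    """Running total of filter-weighted field values within increasing radius
    of `centre`: a toy, flat-sky version of the curves in the figure above."""
    cy, cx = centre
    y, x = np.indices(field.shape)
    r = np.hypot(y - cy, x - cx)
    weighted = field * filter_profile(r)
    return np.array([weighted[r <= radius].sum() for radius in radii])

def mexican_hat(r, scale=10.0):
    """Compensated filter profile: positive core, negative outer ring."""
    u = r**2 / (2.0 * scale**2)
    return (1.0 - u) * np.exp(-u)

y, x = np.indices((201, 201))
r = np.hypot(y - 100, x - 100)
plain_cold_spot = -np.exp(-r**2 / 200.0)
cold_spot_with_hot_ring = plain_cold_spot + 0.5 * np.exp(-(r - 25.0)**2 / 50.0)

radii = np.arange(5, 80, 5)
for name, field in [("plain cold spot     ", plain_cold_spot),
                    ("cold spot + hot ring", cold_spot_with_hot_ring)]:
    curve = cumulative_filtered_signal(field, (100, 100), mexican_hat, radii)
    print(f"{name}: minimum {curve.min():.0f}, final value {curve[-1]:.0f}")
</pre>
<br />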
So, again, what is special about the "cold" spot, is not its coldness. That coldness is clearly not anomalous at all. What <em>is</em> special, is the spottiness of the cold spot and the fact that it is surrounded by a hot ring.<br />
<br />
<b>Should we care about the cold spot?</b><br />
<b><br /></b>
Yes, no, maybe.<br />
<br />
This result might make the anomalousness of the cold spot seem horribly arbitrary. Why the SMHW filter that just happens to fit <em>our</em> cold spot? Why should we care if our not-anomalously-cold cold spot has an anomalously hot ring around it? Well, maybe you shouldn't. But, remember that this particular filter was well motivated <em>a priori</em>. It has spotted our cold spot and called it anomalous because of the hot ring, rather than its coldness; however the filter was chosen because it is ideal for removing noise on both small <em>and</em> large scales. While other filters without the compensating edge wouldn't call our cold spot anomalous, they would also have much larger noise on large scales, which could mimic our cold spot. In other words, they're not as good at picking out true spots as the SMHW filter is.</div>
<br />
But, yes, it isn't quite as cut and dried as one might think without this knowledge. Our spot <em>isn't</em> particularly cold compared to simulated coldest spots, so it <em>is</em> the hot ring that makes our spot special. This has an important corollary: If you think that the cold spot <em>is</em> evidence of something new and you want to explain its origin, you can't just explain its coldness, you also need to explain that hot ring. In fact, if you just produced a cold spot that would have an equally large filtered signal, but without a hot ring, you've actually failed to explain the cold spot. Your spot wouldn't have a profile that looks like our cold spot at all.<br />
<br />
I'd actually go a bit further. To explain the cold spot you probably shouldn't even try to reproduce the full coldness at the centre. Some, or even most of that will be describable by already known physics, as the two figures in this entry show. Moreover, you don't need to just explain why a hot ring can exist. What you need to explain is why there is a hot ring <em>and</em> why it is precisely around that cold spot.<br />
<br />
<b>What about a supervoid?</b><br />
<b><br /></b>
The paper I took the two figures from above was titled "Can a supervoid explain the cold spot?" As is typical for an article with a question in the title, our answer was a resounding no. Although that question was the main theme of the paper, it wasn't what I found most interesting in it, which is why I've focussed on the stuff to do with the exact nature of our coldspot and what makes it anomalous. This is interesting/important, because if the cold spot is due to a new physical effect, this tells us something very specific about that new physics.<br />
<br />
However, we did also conclude that the cold spot can't be explained by a supervoid. Sesh has already written <a href="http://blankonthemap.blogspot.co.uk/2014/08/a-supervoid-cannot-explain-cold-spot.html" target="_blank">a blog article about this conclusion</a>, so you should go and read that, if you're interested. I'll summarise the argument super briefly here. The argument for why a supervoid <em>can</em> explain the cold spot, <a href="http://arxiv.org/abs/1405.1555" target="_blank">presented in this paper</a>, is that it is the result of a gravitational effect very similar to the ISW effect (which I discussed in these posts). Here is why that doesn't work:<br />
<ul>
<li>Most importantly, we just don't reproduce the result of the other group. They claim a particular temperature imprint on the CMB coming from a particular type of supervoid. When we calculate the expected signal for that supervoid, in two different ways, we get a much smaller result (consistent between our two methods). Incidentally, our result also matches what <a href="http://arxiv.org/abs/1408.4442" target="_blank">another, simultaneous, result</a> from another group (well, person) saw.</li>
<li>Using the results of our calculation we would expect that the probability of a supervoid existing that can explain the coldspot is utterly negligible in the standard cosmological model. Given that the coldspot itself is only a 1 in ~ 300 anomaly, this makes a supervoid a highly unlikely "explanation" of that anomaly.</li>
<li>There are other supervoids that have been detected that are just as big and deep as the one proposed to explain the cold spot. If there is some physical mechanism that allows supervoids to affect the CMB so strongly, there should be more than one anomalous coldspot.</li>
</ul>
<div>
<b>The end</b></div>
<div>
<b><br /></b></div>
<div>
So, if you find the cold spot anomalous, looking for its explanation in a supervoid is a dangerous path. If it is due to a supervoid, then the standard cosmological model is substantially wrong, and either the effects of voids on light are different from what General Relativity predicts, or very extreme voids are much, much more likely than previously expected. </div>
Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com0tag:blogger.com,1999:blog-1513704378254120283.post-53734803164504954722014-06-27T07:30:00.000-07:002014-08-19T08:51:19.167-07:00The human machine: obsolete components<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.bakertroop.com/blog/wp-content/uploads/2005/12/obsolete.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://www.bakertroop.com/blog/wp-content/uploads/2005/12/obsolete.png" height="158" width="200" /></a></div>
<br />
The previous post in this series can be found <a href="http://trenchesofdiscovery.blogspot.co.uk/2014/05/the-human-machine-replacing-damaged.html">here</a>.<br />
<br />
<div style="text-align: justify;">
In my last post in this series I described some of the ways in which gene therapy is beginning to help in the treatment of genetic disorders. A caveat of this (which was discussed further in the comments section of that post) is that currently available gene therapies do not remove the genetic disorder from the germline cells (<i>i.e. </i>sperm or eggs) of the patient and so do not protect that person's children against inheriting the disease. This could be a problem in the long run as it may allow genetic disorders to become more common within the population. The reason for this is that natural selection would normally remove these faulty genes from the gene pool as their carriers would be less likely to survive and reproduce. If we remove this selection pressure by treating carriers so that they no longer die young, then the faulty gene can spread more widely through the population. If something then happened to disrupt the supply of gene therapeutics - conflict, disaster,<i> etc.</i> - then a larger number of people would be adversely affected and could even die.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Although this is a significant problem to be considered, it is one that is fairly simply avoidable by screening or treating the germline cells of people undergoing gene therapy in order to remove the faulty genes from the gene pool. This is currently beyond our resources on a large scale, but will almost certainly become standard practice in the future.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
All of this got me thinking: are there any other genes that might be becoming more or less prevalent in the population as a result of medical science and/or civilisation in general? If so, can we prevent/encourage/direct this process and at what point do we draw the line between this and full-blown genetic engineering of human populations? This is the subject of this post, but before we get into this, I want to first give a little extra detail about how evolution works on a genetic scale.</div>
<div style="text-align: justify;">
<br />
<b>Imperfect copies</b><br />
<br />
Evolution by natural selection, as I'm sure you're aware, is simply the selection of traits within organisms based on the way in which those traits affect that organism's fitness. An organism with an advantageous trait is more likely to survive and reproduce and so that trait becomes more and more common within the population. Conversely, traits that disadvantage the organism are quickly lost through negative selection as the organism is less likely to reproduce. The strength of selection in each case is linked to how strongly positive or negative that trait is - <i>i.e.</i> a mutation that reduces an animal's strength by 5% might be lost only slowly from a population, whereas one that reduces it by 90% will probably not make it past one generation. In turn, the strength of that trait is determined by the precise genetic change that has occurred to generate it.<br />
<br />
<a name='more'></a><br />
<br />
Changes in inheritable traits are brought about by mutations in the DNA of an organism's genome (except in rare instances when <a href="http://en.wikipedia.org/wiki/Epigenetic">epigenetic</a> effects are dominant - see this<a href="http://trenchesofdiscovery.blogspot.co.uk/2013/04/the-human-machine-setting-dials.html"> earlier post </a>for more) that alter the sequence of genes or non-coding regions that regulate gene behaviour. Some of these mutations are deliberately introduced during the formation of sperm and egg cells (a process known as <a href="http://en.wikipedia.org/wiki/Genetic_recombination">recombination</a>), but most occur by accident. Accidental mutations like these usually happen when DNA gets damaged and then incorrectly repaired, meaning the sequence you end up with is different to the one that you started with, or when the DNA replication machinery makes a mistake during cell division. If these mutations accumulate within the same cells this can lead to cancer, but if it occurs in your germline cells then it will be passed on to every last cell in your children's bodies. Estimating the rate of mutations in humans has proved a bit tricky, but our current <a href="http://www.genetics.org/content/156/1/297.abstract">best guess</a> is that any given base pair (the 'letters' that make up the DNA code) will on average accumulate ~2.5x10<sup>-8</sup> mutations per generation -<i> i.e.</i> on average it will mutate once in every 40 million generations. That's pretty infrequent<span style="font-family: inherit;">, but when you consider that the human genome contains ~7x10<sup>9</sup> base pairs it means that we actually each possess ~175 mutations that our parents didn't. Given that mutations can come in a variety of forms - bases can be deleted, inserted, or replaced - it's very likely that almost all of these mutations are entirely unique to the carrier and their offspring, which is something to make you feel more special!</span></div>
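<div style="text-align: justify;">
Those back-of-the-envelope numbers are easy to check; the mutation rate and genome size below are just the rough figures quoted above, not precise measurements.</div>
<pre># Rough figures quoted in the text, not precise measurements.
mutation_rate = 2.5e-8   # mutations per base pair per generation
genome_size = 7e9        # base pairs in the (diploid) genome

print("generations for a given site to mutate once:", 1 / mutation_rate)    # ~40 million
print("new mutations carried by each of us:", mutation_rate * genome_size)  # ~175
</pre>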
<br />
<div style="text-align: justify;">
These mutations exert their effects by how they alter the sequences of proteins encoded by the affected genes. Proteins are made up of a string of amino acids, the order and number of which is determined by the DNA sequence of the encoding gene. Mutating the gene will cause the amino acid sequence of the encoded protein to change, which will then have functional consequences. Each protein has a specific set of functions that it needs to do - when you alter its amino acid sequence you will alter its ability to do these jobs, either for better or for worse. Mutations that improve protein function will confer a selective advantage to the organism, whereas those that disrupt it will convey a selective disadvantage. Over time, proteins accumulate progressively larger and larger numbers of advantageous mutations until they achieve the best possible combination of amino acids. We can exemplify this process by imagining a computer program that randomly inserts 'mutations' into a sentence and then selects the results based on how close they are to the phrase "<i>to be or not to be</i>". Eventually the program will home in on the target phrase as deleterious mutations are lost while advantageous ones are retained.
</div>
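<div style="text-align: justify;">
Here is a minimal version of that program (my own sketch of the classic "weasel"-style simulation, not something from the original post): each generation, make imperfectly copied offspring of the current phrase and keep whichever copy best matches the target.</div>
<pre>import random

TARGET = "to be or not to be"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(phrase):
    """Number of positions that match the target: the 'selection pressure'."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    """Copy the phrase, occasionally mis-copying a character."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

random.seed(2)
phrase = "".join(random.choice(ALPHABET) for _ in TARGET)  # random starting phrase
generation = 0
while phrase != TARGET:
    # Produce mutated offspring and let selection keep the fittest
    # (the parent is included, so fitness never goes backwards).
    offspring = [mutate(phrase) for _ in range(100)] + [phrase]
    phrase = max(offspring, key=fitness)
    generation += 1
    if generation % 25 == 0:
        print(generation, repr(phrase))
print("reached the target after", generation, "generations")
</pre>
<div style="text-align: justify;">
Loosen the fitness function and the phrase is free to drift within whatever tolerance you allow; remove it entirely, as happens to a gene that no longer matters, and the phrase simply wanders away from the target. That is the situation described in the sections below.</div>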
<span style="font-family: inherit;"><span style="line-height: 115%;"><br /></span></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2HgZAi7Wbb5Irq6DxG56-V404_4LV4KsJ7F-qF9p0N8b0Pd5Mi2ZsltfKfq7CAjZX45v5L5r3FAAFmZ7dY8Jj2_WSGQgAf9tEyX4x2NKlwe-ZnqDLKc0Lz07Pafyrc0MuLCJdsT6rtOo/s1600/Presentation1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2HgZAi7Wbb5Irq6DxG56-V404_4LV4KsJ7F-qF9p0N8b0Pd5Mi2ZsltfKfq7CAjZX45v5L5r3FAAFmZ7dY8Jj2_WSGQgAf9tEyX4x2NKlwe-ZnqDLKc0Lz07Pafyrc0MuLCJdsT6rtOo/s1600/Presentation1.jpg" height="150" width="400" /></a></div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: center;">
<div style="text-align: justify;">
<b style="text-align: left;">Selection is blind to detail</b></div>
<div style="text-align: left;">
<br /></div>
</div>
<div style="text-align: justify;">
So far, I've probably just been telling you stuff you already know. However, an important implication of the molecular theory of natural selection is that the precise sequence of genes is never fixed, they just change within a tolerance range determined by selective pressures. What I mean by this is as long as a mutated gene fulfils its role as well as the original, it can replace it over time. To replicate this in our "<i>to be or not to be</i>" scenario we just loosen the selective constraints so that we no longer select by similarity to the target phrase, but instead by how similar it sounds when read aloud. In this case, the original, functional phrase is lost over time despite being perfectly good. Mutations that have no impact on how a gene affects the fitness of the organism are known as <a href="http://en.wikipedia.org/wiki/Neutral_mutation">neutral mutations</a>.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihZ5MMnp56uqvNyGOPwW-JcHY3t6JsitJZNWjU3t7k07EG3kUN_1kpI6AKlY2mwVskAFCWoizYCvOcQ2ZnN9hmitTfc4Z4xLExmiTaI4QtWHe3bEzibpsuHQ1U4F_CKOKDGAuc0mPIypk/s1600/Slide2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihZ5MMnp56uqvNyGOPwW-JcHY3t6JsitJZNWjU3t7k07EG3kUN_1kpI6AKlY2mwVskAFCWoizYCvOcQ2ZnN9hmitTfc4Z4xLExmiTaI4QtWHe3bEzibpsuHQ1U4F_CKOKDGAuc0mPIypk/s1600/Slide2.jpg" height="300" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
The point that I'm driving at with all this is that genes are not in any way fixed, and will always change over time within their selective tolerance limits. This is known as <a href="http://en.wikipedia.org/wiki/Genetic_drift">genetic drift</a>, whereby many neutral mutations accumulate to change a gene drastically over time within different populations. This is best exemplified by looking at the same genes in different species - they tend to do the same jobs but can have significantly altered sequences. There are many examples of genes that have been removed from mice and replaced with their human counterparts without having any obvious detrimental effects.</div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
<b>Removing the protection of selection</b></div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
What genetic drift shows us is that natural selection does not need a mutation to be bad in order for it to be lost; it just needs to have no impact on the organism's fitness. The worrying implication of this is that genes can be lost from genomes simply by no longer serving a purpose, even if they are not actively detrimental. If the function that a gene fulfils becomes redundant then the window for mutational tolerance that selection imposes will become, in effect, infinite because every mutation in that gene will now be a neutral mutation. In our scheme, this would be like having no criteria for selecting the mutated phrases, and over many generations the original becomes lost completely. The mangled, barely recognisable genes that emerge from this are known as <a href="http://en.wikipedia.org/wiki/Pseudogene">pseudogenes</a>, and we have a lot of them! </div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYzRWoMoYoQpD4ZXu8l3Z6zZiaOJ9Bc0Ju3wH4po-kApqo6vTh-uITO4GE4ll6flsM0nhhGXtRDDdM-T1KDyg-SNJ3jnMfsNuxFLYF9NLNgfXDHIV02ghINB1fvAFEa6zPV2jqsBYeaJ0/s1600/Slide1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYzRWoMoYoQpD4ZXu8l3Z6zZiaOJ9Bc0Ju3wH4po-kApqo6vTh-uITO4GE4ll6flsM0nhhGXtRDDdM-T1KDyg-SNJ3jnMfsNuxFLYF9NLNgfXDHIV02ghINB1fvAFEa6zPV2jqsBYeaJ0/s1600/Slide1.jpg" height="300" width="400" /></a></div>
<div style="text-align: center;">
<br />
<div style="text-align: justify;">
The reason this is worrying is that there are a surprising number of abilities that we take for granted and that enrich our lives greatly, but which are not actually necessary for our survival any more due to the protection afforded us by civilisation. An obvious example of this is our sense of smell. I think we'd probably all agree that being able to smell is a good thing that we would all prefer to be able to do, but actually if I lost my sense of smell, my risk of dying before having children would be pretty much unchanged. In the past I would have been more likely to die from eating rotten food or from failing to detect predators or prey, and less likely to find a mate; but none of that applies any more. Since our sense of smell has lost the protection that selection brings, it is at the mercy of genetic drift and we are losing it. Humans have a huge number of olfactory pseudogenes - gravestones for a sense of smell that was once there. We can trace this slow decline in our sense of smell back to a very <a href="http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0020005">specific point in evolutionary history</a> - apes (including humans) and Old World monkeys have a much larger number of olfactory pseudogenes relative to fully functioning olfactory genes when compared to New World monkeys. This difference corresponds to the point at which primates diverged into those with the ability to see in three colours and those that can't. It appears that the species that first evolved this ability of trichromatic vision suddenly found its sense of smell to be far less important than it was before, leaving it open to be chipped away by genetic drift. It seems almost inevitable to me that our olfactory genes will continue to degrade further and further until we are eventually left with no sense of smell whatsoever. How long this will take, I don't know, but it is coming.<br />
<br />
<b>What's next?</b><br />
<br />
Smell is a fairly clear-cut example of genetic drift undermining human capabilities, but there are a startling number of other similar cases. The closest parallel to our sense of smell is our sense of taste. The point of taste is to be able to detect what's in food - is it poisonous, sugary, salty, <i>etc.</i>? In a world where we know the nutritional content of food and rarely have to worry about being poisoned, such an ability becomes redundant and so deteriorates. A depletion in our taste abilities, similar to that observed in olfaction, has also taken place recently in evolutionary history - several of the genes encoding our <a href="http://en.wikipedia.org/wiki/Taste_receptor#Bitter">bitter taste receptors</a> have degraded into pseudogenes. Perhaps it's only a matter of time before the rest follow suit.</div>
<div style="text-align: justify;">
<br />
Many other traits with the potential to be lost over time may be the unwitting victims of our advancing medical knowledge. In a world where we can restore lens function in the eye by laser eye surgery, what is the advantage of genes that prevent lenses from weakening? If we are able to reset broken bones and not die from bacterial infections in the process, where is the need for genes maintaining full bone density? Why do we need genes regulating correct heart rhythm when we have pacemakers to do it for us? As medical science becomes more and more sophisticated, we risk relying more and more heavily on our technology and less on our genes.<br />
<br />
The idea of a slow decline in human health and/or ability as natural selection loosens its grip has been around for over a hundred years and is known as <a href="http://en.wikipedia.org/wiki/Dysgenics">dysgenics</a>. It has inspired many science fiction writers to imagine dystopian futures populated by our subhuman descendants. The most alarming of these tend to describe worlds inhabited by morons after human intelligence took a nose-dive, such as in the excellent film <a href="http://en.wikipedia.org/wiki/Idiocracy" style="text-align: left;"><i>Idiocracy</i></a>. Although a world full of idiots is not evolutionarily feasible (intelligence would remain a selective advantage in such a world as a clever person could exploit and outwit all the morons), it could be possible that some elements of our intelligence may become diminished over time. What we call 'human nature' - our inherent ways of thinking, feeling <i>etc.</i> - has evolved under the selective pressure to protect ourselves and those related to us. We have instincts for danger, a concept of fear, and a distrust of the unfamiliar because these things helped our ancestors to survive. If our world continues to become increasingly safe and sanitised (and our medical abilities increasingly effective) these too may become obsolete. Our capacity for altruism evolved because if we protect and help those around us we <a href="http://en.wikipedia.org/wiki/Selfish_gene_theory">optimise the likelihood for survival of others with our shared genes </a>(including those for altruism). In today's multicultural, dynamic society many of those we help are not likely to be closely related to us, and maybe don't need our help anyway, so perhaps altruism has also become evolutionarily redundant. It's not all bad news, though: our tendency towards racism and tribalism also evolved from a need to protect our own genes, so hopefully they will degrade as well.<br />
<br />
<b>What can we do?</b><br />
<br />
As I mentioned earlier, our best bet for preventing the spread of genetic disorders in the wake of successful gene therapies is to screen patients' germline cells for their genetic defects. This is fairly straightforward when there is just one gene you're interested in and you know who to screen. It seems unlikely that we could screen the entire population for potential genetic disorders in every gene in our genomes. And even if we did, what constitutes the 'correct' sequence for each gene, since we all have marginally different versions anyway? Realistically the only way to maintain genetic integrity in the long term would be a programme of active genetic modification and/or human cloning that periodically 'resets' the genomic clock to the sequences we have today or from some future time before our abilities degrade too far. This would be an extreme step and is rife with ethical and philosophical questions. Not least is the issue of where to stop. Why just reset our genomes to our current standards? Why not replace some of our defective olfactory or taste pseudogenes with functional equivalents?<br />
<br />
Luckily, no matter what we decide to do, we at least now have a definitive record of the sequence of the human genome and so can track the changes as they occur. As far as future geneticists are concerned we are pretty much at time zero for the genetic clock, since this is where their complete records will begin. Perhaps in 10,000 years' time they will be using our genomes as the template from which they can rebuild many of our lost abilities, perhaps leaving out some of the more undesirable ones. Hopefully this will allow us to remain recognisably human, although our definition of what constitutes 'human' may also have changed. Perhaps they will be happier with their new physiology and look back with derision at ignorant bloggers who foretold disaster. Either way, we are going to change, but at least we now have the understanding and ability to track the changes and, if we want to, reverse them.</div>
</div>
<div style="text-align: justify;">
<span style="text-align: center;"><span style="color: #cc0000;"><br /></span></span>
<span style="text-align: center;"><span style="color: #cc0000;"><br /></span></span></div>
James Felcehttp://www.blogger.com/profile/14031758835739415241noreply@blogger.com0tag:blogger.com,1999:blog-1513704378254120283.post-56493234698852281802014-05-05T13:37:00.000-07:002014-06-27T07:35:38.551-07:00The human machine: replacing damaged components<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidbNqqvUZYR4Qg1b8WiAtHdLKC9YaJ749DIPkGMw95db-3G2wcI39rbj3yodYsw4QMDkJEdkKUYpQ_wqnKeKHIyW4SpSUJE0jVOR2FEn0RxNrOxyp2s17Av87nZC6c5fMX_JBohii1suw/s1600/plugs12.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidbNqqvUZYR4Qg1b8WiAtHdLKC9YaJ749DIPkGMw95db-3G2wcI39rbj3yodYsw4QMDkJEdkKUYpQ_wqnKeKHIyW4SpSUJE0jVOR2FEn0RxNrOxyp2s17Av87nZC6c5fMX_JBohii1suw/s1600/plugs12.gif" /></a></div>
<br />
The previous post in this series can be found <a href="http://trenchesofdiscovery.blogspot.co.uk/2014/03/the-human-machine-finely-tuned-sensors.html">here</a>.<br />
<br />
<br />
<div style="text-align: justify;">
The major theme of my 'human machine' series of posts has been that we are, as the name suggests, machines, explicable in basic mechanical terms. Sure, we are incredibly sophisticated biological machines, but machines nonetheless. So, like any machine, there is theoretically nothing stopping us from being able to play about with our fundamental components to suit our own ends. This is the oft-feared spectre of 'genetic modification' that has been trotted out in countless works of science fiction, inextricably linked to concepts of eugenics and Frankenstein-style abominations. Clearly genetic modification of both humans and other organisms is closely tied to issues of ethics and biosafety, and must obviously continue to be thoroughly debated and assessed at all stages, but in principle there is no mechanistic difference between human-driven genetic modification and the mutations that arise spontaneously in nature. The benefit of human-driven modification, however, is that it has foresight and purpose, unlike the randomness of nature. As long as that purpose is for a common good and is morally defensible, then in my eyes such intervention is a good thing.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
One fairly obvious beneficial outcome of genetic modification is in the curing of various genetic disorders. Many human diseases are the result of defective genes that can manifest symptoms at varying times of life. Some genetic disorders are the result of mutations that cause a defect in a product protein, others result from the complete loss of a gene, and some are caused by abnormal levels of gene activity - either too much or too little. A potential means to cure such disorders is to correct the problematic gene within all of the affected tissue. The most efficient means to do that would be to correct it very early in development, since if you corrected it in the initial <a href="http://en.wikipedia.org/wiki/Embryo">embryo</a> then it would be retained in all of the cells that subsequently develop from that embryo. This is currently way beyond our technical capabilities for several reasons. Firstly, we don't routinely screen embryos for genetic abnormalities and so don't know which ones might need treatment. Secondly, the margin for error in this kind of gene therapy is incredibly narrow as you have to ensure that every single cell that the person has for the rest of their life will not be adversely affected by what you do to the embryonic cells in this early stage - we're not there yet. Thirdly, our genetic technology is not yet sophisticated enough to allow us to remove a damaged gene and replace it with a healthy one in an already growing embryo - the best we can do is stick in the healthy gene alongside the defective one and hope it does the job. There is certainly no fundamental reason why our technology could not one day reach the stage where this kind of procedure is feasible, but we are a long way off yet.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
So, for the time being what can we do? Well instead of treating the body at the embryonic stage, the next best approach is to treat specifically the affected cells later on in life. This involves identifying the problematic gene and then using a delivery method to insert the correct gene into whatever tissues manifest the disease, preferably permanently. This is broadly known as <a href="http://en.wikipedia.org/wiki/Gene_therapy">gene therapy</a>, and is one of the most promising current fields of 'personalised' medicine. </div>
<div style="text-align: justify;">
<br />
<a name='more'></a><br /></div>
<div style="text-align: justify;">
<b>From humble beginnings</b></div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The idea of gene therapy has been around for quite a while now, but the clinical side of the field got off to a tragic start in 1999 with the death of 18-year-old Jesse Gelsinger during a trial for gene therapy to treat his <a href="http://en.wikipedia.org/wiki/Ornithine_transcarbamylase_deficiency">ornithine transcarbamylase deficiency</a>. The idea behind the treatment was fairly simple: the disorder was the result of a missing gene encoding an enzyme in ammonia metabolism, so Jesse was given a virus engineered to return the gene to his liver cells. Unfortunately the specific virus used, a form of <a href="http://en.wikipedia.org/wiki/Adenoviridae">adenovirus</a>, had the severe side effect of causing a massive immune response that fatally damaged Jesse's already struggling liver. Just four days after receiving treatment, Jesse died in the full glare of the media spotlight, and the field of gene therapy hit what was to be the first of a number of severe stumbling blocks.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
This case highlighted the difficulties in developing safe delivery systems for gene therapy. Viruses are inherently good at getting foreign genes into cells - it's what they've evolved to do - but they come with associated dangers. What's more, even if you manage to get the correct gene into the correct cells without any problems, there can still be dramatic unforeseen consequences. This was brought to prominent attention in 2003, when gene therapy for <a href="http://en.wikipedia.org/wiki/Severe_combined_immunodeficiency">severe combined immunodeficiency</a> (SCID) caused <a href="http://en.wikipedia.org/wiki/Leukaemia">leukaemia</a> in a group of French children. SCID is a disorder of immune cells that essentially leaves the patient with no effective immune system. The gene therapy in this case was designed to insert a working copy of a defective gene, <a href="http://en.wikipedia.org/wiki/IL2RG"><i>IL2RG</i></a>, into the children's immune cells, thereby rendering them functional and restoring immune activity. Unfortunately, when the <i>IL2RG</i> gene was inserted into the genomes of these cells, it got positioned next to another gene, <a href="http://en.wikipedia.org/wiki/LMO2"><i>LMO2</i></a>, and activated it. <i>LMO2</i> is involved in the development of immune cells and is not normally active in fully developed cells. Its activation by <i>IL2RG</i> caused the cells to begin dividing in an uncontrolled manner, thereby causing leukaemia. It was a sobering reminder to the field that current gene therapy techniques can be a bit of a shotgun approach.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<b>To promising results</b></div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Whilst cases such as these are both tragic on a personal scale and damaging on a scientific level, they are nonetheless informative, and so the field has continued to edge forward with ever-improving approaches. The French children, whilst suffering from leukaemia, did indeed have their immune function restored, and so in that sense the treatment was a partial success. We are now at the stage where exciting, apparently safe gene therapy is becoming a clinical reality. The most recent success story was published earlier this year in a <a href="http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)62117-0/abstract">Lancet</a> article from Robert MacLaren's group in Oxford. This study aimed to treat patients with <a href="http://en.wikipedia.org/wiki/Choroideremia">choroideremia</a>, a degenerative eye disease that generally causes total blindness by middle age. Choroideremia is caused by a defect in the protein REP1, encoded by the gene <i>CHM</i>. By using a virus engineered to specifically deliver the corrected gene to the retinal cells without eliciting a strong immune response, the trial was able to significantly improve the light sensitivity of all of the treated patients. Some were even able to read letters on a standard eye chart, and showed significant improvements in the structure of the affected tissues.<br />
<br />
Degenerative eye disorders are a rich hunting ground for gene therapists because they are usually caused by problems with a single gene, and only affect one specific tissue - the eye. It is unsurprising, then, that another recent success story has come in the form of a treatment for <a href="http://en.wikipedia.org/wiki/Leber%27s_congenital_amaurosis">Leber's congenital amaurosis</a>, which also results in blindness. In some cases this is caused by a defect in the <i>RPE65 </i>gene, which leads to a deficiency in vitamin A within cells of the retina. <a href="http://www.ncbi.nlm.nih.gov/pubmed/22323828">Insertion of normal <i>RPE65</i> </a>into the retina has successfully increased light sensitivity, and sometimes vision clarity, in a number of patients in clinical trials.<br />
<br />
Both of the above trials have used recombinant 'adeno-associated viruses' to deliver the target gene. The advantage of this virus is that the gene doesn't actually get incorporated into the host genome and so there is no chance of the same kind of off-target effects experienced by the French SCID children. The downside is that this isn't a permanent fix. Because the DNA exists within the target cells but not in their genome, it can be lost over subsequent rounds of cell division and so repeated treatment might be required. That's not to say the effect is short-lived (it can last years), but it isn't permanent. A better solution would be to have a way of integrating the gene into the genome without affecting other genes.<br />
<br />
This is difficult because the way to do it is to use a type of virus called a <a href="http://en.wikipedia.org/wiki/Retrovirus">retrovirus</a>, which fully inserts its genome into that of the target cell, but unfortunately has a tendency to do this at exactly the points where other genes are present. This is because viruses want to replicate quickly, and they will have a better chance of doing that if their genes are in a highly active area of the genome. It has therefore taken a lot of research to come up with a retroviral vector that is safe to use in human cells, but we are finally making progress. Ironically, this progress has come in the form of a modified relative of <a href="http://en.wikipedia.org/wiki/Hiv">HIV</a>, thereby turning one of the modern world's biggest medical problems into a potential wonder cure. That said, the use of these retroviruses in humans is still tricky, and so at the moment we are generally limited to removing cells from patients, treating them, and then putting them back. This has proved effective in the treatment of <a href="http://en.wikipedia.org/wiki/Metachromatic_leukodystrophy">metachromatic leukodystrophy</a>, which causes severe nerve damage resulting in progressive motor and cognitive impairment. <a href="http://www.ncbi.nlm.nih.gov/pubmed/23845948">Replacement of the defective <i>ARSA </i></a>gene in stem cells removed from affected patients allowed the gene to be stably introduced into their cerebrospinal systems when the cells were put back. Although only three patients were involved in this study, all of them showed either a halt in disease progression or a failure to manifest the disease in the first place. Promising stuff.<br />
<br />
<b>Questions of inheritance</b><br />
<br />
The use of gene therapy to treat debilitating conditions such as those described above is undoubtedly a good thing, but it does come with an important additional concern. The current treatments that we have available do not actually <i>cure</i> the genetic disorder; they simply mask it. Insofar as the patient is concerned this doesn't make a blind bit of difference, but where their children are concerned it does. Because the treatment does not correct the genes within the patient's <a href="http://en.wikipedia.org/wiki/Germline">germline</a> cells (<i>i.e.</i> either sperm or ova), they are still likely to pass the defect on to their children. This has the worrying implication that over time such disorders might become more and more common within the population as there is no longer any negative selection pressure against them (<i>i.e.</i> people no longer die before they can pass them on). You could say that this is fine since we essentially have a cure for them, but it is far from desirable to have a large proportion of the population requiring gene therapy in order to survive. Aside from the logistical and safety implications of this, what would happen in the event of periodic breaks in the supply chain of medicine? Something like a major war might claim far more lives if we were so heavily dependent on a constant stream of therapeutic gene treatments.<br />
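<br />
To put a rough number on how quickly this could happen, here is a standard population-genetics estimate (a textbook result, not something from the trials above): a harmful recessive gene variant held at mutation-selection balance sits at a frequency of roughly<br />
<br />
\[\hat{q} \approx \sqrt{\mu / s},\]<br />
<br />
where \(\mu\) is the rate at which new copies arise by mutation and \(s\) measures how strongly selection acts against it. If therapy effectively sets \(s\) to zero, the variant is no longer being removed, and its frequency then creeps upwards by roughly \(\mu\) per generation - slow on human timescales, but relentless.<br />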
<br />
That said, it is equally unthinkable that we would deny use of successful therapies to those who need them, or that we should stop striving to further eradicate such diseases, as doing either would be little different from killing those people ourselves. Instead, we will need a careful system of screening for people undergoing such treatment to try to minimise the potential for inheritance of their disorder by their children. Eventually it may be possible to treat the germline cells of patients to completely eradicate the faulty genes from their family, and many believe that this is our best bet for overcoming genetic disorders in the long term. Whatever happens, this kind of medicine is only going to become more and more prominent and widespread, and I for one welcome that.<br />
<br />
The next post in this series can be found <a href="http://trenchesofdiscovery.blogspot.co.uk/2014/06/the-human-machine-obsolete-componenets.html">here</a>.<br />
<br /></div>
<b>References</b><br />
<br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=The+Lancet&rft_id=info%3Adoi%2F10.1016%2FS0140-6736%2813%2962117-0&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Retinal+gene+therapy+in+patients+with+choroideremia%3A+initial+findings+from+a+phase+1%2F2+clinical+trial&rft.issn=01406736&rft.date=2014&rft.volume=383&rft.issue=9923&rft.spage=1129&rft.epage=1137&rft.artnum=http%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0140673613621170&rft.au=MacLaren%2C+R.&rft.au=Groppe%2C+M.&rft.au=Barnard%2C+A.&rft.au=Cottriall%2C+C.&rft.au=Tolmachova%2C+T.&rft.au=Seymour%2C+L.&rft.au=Clark%2C+K.&rft.au=During%2C+M.&rft.au=Cremers%2C+F.&rft.au=Black%2C+G.&rft.au=Lotery%2C+A.&rft.au=Downes%2C+S.&rft.au=Webster%2C+A.&rft.au=Seabra%2C+M.&rfe_dat=bpr3.included=1;bpr3.tags=Research+%2F+Scholarship">MacLaren, R., Groppe, M., Barnard, A., Cottriall, C., Tolmachova, T., Seymour, L., Clark, K., During, M., Cremers, F., Black, G., Lotery, A., Downes, S., Webster, A., & Seabra, M. (2014). Retinal gene therapy in patients with choroideremia: initial findings from a phase 1/2 clinical trial <span style="font-style: italic;">The Lancet, 383</span> (9923), 1129-1137 DOI: <a href="http://dx.doi.org/10.1016/S0140-6736(13)62117-0" rev="review">10.1016/S0140-6736(13)62117-0</a></span>
<br />
<br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Science+translational+medicine&rft_id=info%3Apmid%2F22323828&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=AAV2+gene+therapy+readministration+in+three+adults+with+congenital+blindness.&rft.issn=1946-6234&rft.date=2012&rft.volume=4&rft.issue=120&rft.spage=&rft.epage=&rft.artnum=&rft.au=Bennett+J&rft.au=Ashtari+M&rft.au=Wellman+J&rft.au=Marshall+KA&rft.au=Cyckowski+LL&rft.au=Chung+DC&rft.au=McCague+S&rft.au=Pierce+EA&rft.au=Chen+Y&rft.au=Bennicelli+JL&rft.au=Zhu+X&rft.au=Ying+GS&rft.au=Sun+J&rft.au=Wright+JF&rft.au=Auricchio+A&rft.au=Simonelli+F&rft.au=Shindler+KS&rft.au=Mingozzi+F&rft.au=High+KA&rft.au=Maguire+AM&rfe_dat=bpr3.included=1;bpr3.tags=Research+%2F+Scholarship">Bennett J, Ashtari M, Wellman J, Marshall KA, Cyckowski LL, Chung DC, McCague S, Pierce EA, Chen Y, Bennicelli JL, Zhu X, Ying GS, Sun J, Wright JF, Auricchio A, Simonelli F, Shindler KS, Mingozzi F, High KA, & Maguire AM (2012). AAV2 gene therapy readministration in three adults with congenital blindness. <span style="font-style: italic;">Science translational medicine, 4</span> (120) PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/22323828" rev="review">22323828</a></span><br />
<br />
<br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Science+%28New+York%2C+N.Y.%29&rft_id=info%3Apmid%2F23845948&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Lentiviral+hematopoietic+stem+cell+gene+therapy+benefits+metachromatic+leukodystrophy.&rft.issn=0036-8075&rft.date=2013&rft.volume=341&rft.issue=6148&rft.spage=1233158&rft.epage=&rft.artnum=&rft.au=Biffi+A&rft.au=Montini+E&rft.au=Lorioli+L&rft.au=Cesani+M&rft.au=Fumagalli+F&rft.au=Plati+T&rft.au=Baldoli+C&rft.au=Martino+S&rft.au=Calabria+A&rft.au=Canale+S&rft.au=Benedicenti+F&rft.au=Vallanti+G&rft.au=Biasco+L&rft.au=Leo+S&rft.au=Kabbara+N&rft.au=Zanetti+G&rft.au=Rizzo+WB&rft.au=Mehta+NA&rft.au=Cicalese+MP&rft.au=Casiraghi+M&rft.au=Boelens+JJ&rft.au=Del+Carro+U&rft.au=Dow+DJ&rft.au=Schmidt+M&rft.au=Assanelli+A&rft.au=Neduva+V&rft.au=Di+Serio+C&rft.au=Stupka+E&rft.au=Gardner+J&rft.au=von+Kalle+C&rft.au=Bordignon+C&rft.au=Ciceri+F&rft.au=Rovelli+A&rft.au=Roncarolo+MG&rft.au=Aiuti+A&rft.au=Sessa+M&rft.au=Naldini+L&rfe_dat=bpr3.included=1;bpr3.tags=Medicine%2CCancer%2C+Hematology">Biffi A, Montini E, Lorioli L, Cesani M, Fumagalli F, Plati T, Baldoli C, Martino S, Calabria A, Canale S, Benedicenti F, Vallanti G, Biasco L, Leo S, Kabbara N, Zanetti G, Rizzo WB, Mehta NA, Cicalese MP, Casiraghi M, Boelens JJ, Del Carro U, Dow DJ, Schmidt M, Assanelli A, Neduva V, Di Serio C, Stupka E, Gardner J, von Kalle C, Bordignon C, Ciceri F, Rovelli A, Roncarolo MG, Aiuti A, Sessa M, & Naldini L (2013). Lentiviral hematopoietic stem cell gene therapy benefits metachromatic leukodystrophy. <span style="font-style: italic;">Science (New York, N.Y.), 341</span> (6148) PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/23845948" rev="review">23845948</a></span>James Felcehttp://www.blogger.com/profile/14031758835739415241noreply@blogger.com2tag:blogger.com,1999:blog-1513704378254120283.post-11894433858110319122014-03-27T15:50:00.003-07:002014-03-28T01:58:33.184-07:00A new cosmological coincidence problem?One of the consequences of the <a href="http://bicepkeck.org/faq.html" target="_blank">BICEP2 data from last week</a>, should it hold up to scrutiny, and be seen by other experiments (I hope it holds up to scrutiny and is seen by other experiments), is that there is a significant lack of "power" in the temperature anisotropies on large angular scales.<br />
<br />
What that sentence means is that when you look at the CMB in very large patches on the sky (about the size of the moon and bigger) <a href="http://arxiv.org/abs/1403.5231" target="_blank">its temperature fluctuates from patch to patch less than we would expect</a>.<br />
<br />
This was already somewhat the case before the BICEP2 discovery, but BICEP2 made it much more significant. The reason for this will hopefully turn into a post of its own one day, but, essentially, the primordial gravitational waves that BICEP2 has hopefully discovered would themselves have seeded temperature anisotropies on these large angular scales. Previously, we could just assume that the primordial gravitational waves had a really small amplitude and thus didn't affect the temperature much at all. Now, however, it seems like they might be quite large and therefore, this apparent lack of power becomes much more pertinent.<br />
<br />
That's all fine and is something that any model of inflation that hopes to explain the origin of these gravitational waves <i>will </i>need to explain, despite what many cosmologists already writing papers on the ArXiv seem to want to believe (links withheld). As a side, ever-so-slightly-frustrated, note, the only papers <a href="http://arxiv.org/abs/1403.5231" target="_blank">I've seen that have</a> actually <a href="http://arxiv.org/abs/1403.5922" target="_blank">analysed the data</a>, rather than repeating old claims, have confirmed this problem that was clear from, at the latest, <a href="https://twitter.com/just_shaun/status/446247797713408000" target="_blank">the day after the announcement</a>.<br />
<br />
But why does it imply a "cosmological coincidence problem"? And why is it a <i>new </i>coincidence problem? <a href="http://www.scholarpedia.org/article/Cosmological_constant#Coincidence_Problem" target="_blank">What's the old one?</a><br />
<br />
<a name='more'></a><b>The old cosmological coincidence problem</b><br />
<b><br /></b>
The energy density attributable to a cosmological constant is, well, constant. The energy density of matter and radiation drops as the universe expands. Right now, today, the energy densities of matter and of "dark energy"/the cosmological constant appear to be similar. If we extrapolate into the distant future, almost all of the energy density of the universe will be dark energy and in the past almost none of it was.<br />
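<br />
To put rough numbers on this (the density fractions \(\Omega_\Lambda \approx 0.7\) and \(\Omega_m \approx 0.3\) below are just the usual round values, quoted here for illustration): the matter density dilutes as the universe expands while the cosmological constant does not,<br />
<br />
\[\rho_\Lambda = \mathrm{const}, \qquad \rho_m \propto a^{-3} \qquad \Rightarrow \qquad \frac{\rho_\Lambda}{\rho_m} \propto a^3,\]<br />
<br />
where \(a\) is the scale factor (equal to 1 today). The ratio is of order one right now (roughly \(0.7/0.3\)), but it was around \(10^{-9}\) when the CMB formed (\(a \approx 10^{-3}\)), and it will grow without limit in the future. Order-one values therefore pick out a narrow window around the present epoch.<br />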
<br />
The first cosmological coincidence problem is that we live and observe at precisely the time of this transition. That is supposed to seem a bit odd. When I'm in the right mind, I agree. There are a number of possible explanations for it, most of them anthropic (though some aren't - e.g. if dark energy is related to structure formation).<br />
<br />
This is a problem well-known amongst cosmologists and something often pondered about.<br />
<br />
<b>The new cosmological coincidence problem(s?)</b><br />
<b><br /></b>
However BICEP2 suggests a new one. The universe is expanding. But light also travels at a finite speed. Therefore, as time goes on <i>more</i> of the universe becomes visible to us (at least so far) as we see farther and farther away. The result of this is that the largest angular scales we can currently see (the patches on the sky bigger than the moon) have only "recently" become visible. If we were around when the CMB formed, we would observe a much smaller fraction of the currently observable universe.<br />
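<br />
In equation form (this is just the standard textbook definition, nothing specific to BICEP2): the comoving distance from which light has had time to reach us since the Big Bang is<br />
<br />
\[\chi_\mathrm{hor}(t) = \int_0^t \frac{c\,\mathrm{d}t'}{a(t')},\]<br />
<br />
and this integral only ever grows. Comoving scales comparable to today's \(\chi_\mathrm{hor}\) were therefore simply not yet visible at earlier times; when the CMB formed, the equivalent distance was only a small fraction of its present value.<br />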
<br />
Maybe you're starting to see where the new coincidence problem comes from. The behaviour of the universe on all the angular scales <i>smaller</i> than the moon appears to be consistent and well described by one model, whereas the fluctuations on the largest angular scales appear to follow a different model.<br />
<br />
Why do we observe this transition today?<br />
<br />
If we were around billions of years ago we wouldn't even know that this funny behaviour had occurred, because these very large distance scales would still be "outside our horizon", we simply wouldn't be able to see them because light couldn't have brought us knowledge of them yet.<br />
<br />
<blockquote class="tr_bq">
So, the new cosmological coincidence problem is that this strange behaviour is becoming visible to the universe at precisely the same time as we are here observing it. Why now (if it has to happen at all) and not much later or much earlier?</blockquote>
<br />
Is a potential explanation again anthropic? Did the universe need to have some minimum size in order for intelligent life to have enough time/space to evolve and we're now seeing the edge of our homogeneous, observable, patch? Whereas, in other, smaller (and more common?), patches there is no life to see this effect happen much earlier?<br />
<br />
As a final comment, I can't help but think that there is then an obvious <i>third</i> coincidence problem that arises when you combine both of the others. If it is a strange coincidence that we are around just as dark energy comes to dominate the universe <i>and</i> it is a strange coincidence that we are around just as this funny feature in the primordial fluctuations of the universe becomes visible, then it is also a strange coincidence that dark energy comes to dominate at exactly the same time as the primordial fluctuations change their shape.<br />
<br />
Is it all anthropic? Is the same physical mechanism that is responsible for dark energy also responsible for these large scale features? Have I just lost any chance of getting a permanent job in serious cosmology? Time will tell (at least I didn't put it on the ArXiv!)<br />
<br />
Twitter: <a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a>Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com8tag:blogger.com,1999:blog-1513704378254120283.post-48384274977911103902014-03-24T10:07:00.000-07:002014-05-05T13:42:39.371-07:00The human machine: finely-tuned sensors<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhps0Naga2NOPVrG5gK78kgxzntI9h9etgp6d4IJe5CYy6QlnDbozXh1Jwwr2jJk_HdS5m-SpI4zprjeJHDRIRNg8uegWBfbpiUmzaOpP1c24gYqYfAil18O96n0t7CB-4LIR0GuRtA_ZU/s1600/AGM50%5B2%5D.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhps0Naga2NOPVrG5gK78kgxzntI9h9etgp6d4IJe5CYy6QlnDbozXh1Jwwr2jJk_HdS5m-SpI4zprjeJHDRIRNg8uegWBfbpiUmzaOpP1c24gYqYfAil18O96n0t7CB-4LIR0GuRtA_ZU/s1600/AGM50%5B2%5D.jpg" height="250" width="320" /></a></div>
<br />
The previous post in this series can be found <a href="http://trenchesofdiscovery.blogspot.co.uk/2014/02/the-human-machine-picoscale-engineering.html">here</a>.<br />
<br />
<div style="text-align: justify;">
All good machines need sensors, and we are no different. Everyone is familiar with the five classic senses of sight, smell, touch, taste, and hearing, but we often forget just how amazingly finely tuned these senses are, and many people have little appreciation of just how complex the biology behind each sense is. In this week's post, I hope to give you an understanding of how one of our senses, smell, functions, and how, in light of recent evidence, it is far more sensitive than we previously thought.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<b>Microscopic sensors</b><br />
<b><br /></b>
The olfactory system is an extremely complex one, but it is built up from fairly simple base units. The sense of smell is of course located in the nose, but more specifically it is a patch of tissue approximately 3 square centimetres in size at the roof of the nasal cavity that is responsible for all of the olfactory ability in humans. This is known as the <a href="http://en.wikipedia.org/wiki/Olfactory_epithelium">olfactory epithelium</a> and contains a range of cell types, the most important of which is the <a href="http://en.wikipedia.org/wiki/Olfactory_sensory_neuron">olfactory receptor neuron</a>. There are roughly 40 million of these cells packed into this tiny space and their job is to bind odorant molecules and trigger neuronal signals up to the brain to let it know which odorants they've detected. They achieve this using a subset of a huge family of receptors that <a href="http://trenchesofdiscovery.blogspot.co.uk/2012/10/the-human-machine-communication.html">I've written about before</a>, the <a href="http://en.wikipedia.org/wiki/GPCR">G protein-coupled receptors </a>(GPCRs). These receptors are proteins that sit in the membranes of cells and recognise various ligands (<i>i.e.</i> molecules for which they have a specific affinity) and relay that information into the cell. There are over 800 GPCRs in the human genome and they participate in a broad range of processes, from neurotransmission to inflammation, but the king of the GPCRs has to be the olfactory family, which make up over 50% of all the GPCRs in our genome.<br />
<br />
<a name='more'></a><br />
<br />
There is nothing inherently different about how the olfactory GPCRs function relative to the other GPCRs: a ligand binds to the receptor on the outside of the protein, which causes it to change shape, which is in turn detected by other proteins inside the cell and so causes a response from the cell itself (if you're interested in learning more about this process, I suggest you read my earlier post <a href="http://trenchesofdiscovery.blogspot.co.uk/2012/10/the-human-machine-communication.html">here</a>). In the case of olfactory GPCRs, this response is the firing of the olfactory neuron via an <a href="http://en.wikipedia.org/wiki/Action_potential">action potential</a>, which I have also written about before, <a href="http://trenchesofdiscovery.blogspot.co.uk/2012/12/the-human-machine-circuits-and-wires.html">here</a>. Interestingly, this process also occurs with another group of sensory GPCRs - those found in your <a href="http://en.wikipedia.org/wiki/Retina">retina</a> that detect light. The difference is that the visual GPCRs absorb a <a href="http://en.wikipedia.org/wiki/Photon">photon</a> in order to change shape rather than binding a molecular ligand, but they are otherwise equivalent processes. So I guess there's little reason why we shouldn't say that we 'smell' light, or 'see' odours!<br />
<br />
Although we have over 400 olfactory GPCRs, each olfactory neuron expresses only one type at its surface. This means that each neuron can detect only a small subset of the odorants that the system as a whole is able to detect. This is important, as all the brain knows is that an individual neuron has fired, not what caused it. If each neuron expressed a whole range of olfactory GPCRs, the brain wouldn't know which one was responsible for activating it.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT_Mqk-qdKiuq820VCJubM2uFbuI6151F2AoEJt6fVxMQaO_Z1MIQCnZ231HZrxsZgaMdmoblm-Vm-6awTbfXjn2VYbuUNLVM0YVmxjRLVoK1f3T0foJgb-dz10iyDF6AxpUN8mhOFPt0/s1600/ki2011219f1.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT_Mqk-qdKiuq820VCJubM2uFbuI6151F2AoEJt6fVxMQaO_Z1MIQCnZ231HZrxsZgaMdmoblm-Vm-6awTbfXjn2VYbuUNLVM0YVmxjRLVoK1f3T0foJgb-dz10iyDF6AxpUN8mhOFPt0/s1600/ki2011219f1.jpg" height="295" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #444444;">Each olfactory neuron has only one type of olfactory receptor (represented here by different colours). These come together to form the olfactory bulb, in which signals from the different neurons get amplified and sent to the brain. Figure from Bomback & Raff (2011), Kidney International 80 (8).</span></td></tr>
</tbody></table>
<br /></div>
<div style="text-align: justify;">
Interestingly, even though each neuron has only one type of receptor, it can recognise multiple similar odorant molecules. This is because the olfactory receptors have 'ligand promiscuity', which means that any individual receptor can bind to a range of ligands, and any individual ligand can be bound by a variety of receptors. This is possible because olfactory GPCRs bind their ligands more loosely than most GPCRs, and so can afford to be less specific in terms of their recognition. This seems to have been important in the evolution of a varied sense of smell, as being sensitive to a broader range of odorants is generally advantageous, but you can only have so many receptors and so this is a nice way of having increased breadth within a limited receptor profile. This is also why some things smell identical despite being different molecules - they bind the same receptors.<br />
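<br />
A toy example might make this combinatorial trick more concrete. The receptor and odorant names below are entirely made up for illustration; the point is only that the "identity" of a smell is the pattern of receptor types that fire, so two molecules activating the same pattern are indistinguishable.<br />
<pre>
# Toy model of combinatorial coding with promiscuous receptors: each receptor
# type binds a set of odorants, and the brain only "sees" which receptor
# types fired. The names here are invented purely for illustration.
RECEPTORS = {
    "R1": {"limonene", "citral"},
    "R2": {"citral", "vanillin"},
    "R3": {"vanillin", "musk"},
}

def activation_pattern(odorant):
    """Return the receptor types activated by a single odorant."""
    return sorted(name for name, ligands in RECEPTORS.items()
                  if odorant in ligands)

print(activation_pattern("citral"))    # ['R1', 'R2']
print(activation_pattern("vanillin"))  # ['R2', 'R3']
# Any two odorants producing identical patterns would smell the same.
</pre>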
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1-g1T5hH9B4bmeE8ph1s8z9QBExFd_YI8bcd1K3lbNX5cLnS_z0_pUf9HilYsD2ZUg48OvRhBkBOIoNOM55FbM6dstw93vq4fusxvFKr9sQZlUpRYt7BGimClBg6j9xvXapRljvNZFk4/s1600/OlfactoryReceptorCodes.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1-g1T5hH9B4bmeE8ph1s8z9QBExFd_YI8bcd1K3lbNX5cLnS_z0_pUf9HilYsD2ZUg48OvRhBkBOIoNOM55FbM6dstw93vq4fusxvFKr9sQZlUpRYt7BGimClBg6j9xvXapRljvNZFk4/s1600/OlfactoryReceptorCodes.gif" height="227" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #444444;">The promiscuity of olfactory receptors. Multiple odorants can be recognised by one receptor, while one odorant can bind to a range of receptors. Figure from the 2004 Nobel Prize in Medicine presentation by<span style="font-family: Times, Times New Roman, serif; font-size: x-small;"> <span style="line-height: 24px;">Richard Axel and Linda Buck.</span></span></span></td></tr>
</tbody></table>
<br /></div>
<div style="text-align: justify;">
<b>A finely tuned system</b></div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The reason I chose to focus on smell in this post is that <a href="http://www.sciencemag.org/content/343/6177/1370">a study</a> published in <i>Science</i> just this week has suggested that the range of odours that humans can distinguish is much, much broader than we previously thought. We humans have a habit of doing ourselves down - comparing our sense of smell to that of dogs or other animals and concluding that we are poor at best. This is an unduly glass-half-empty approach, since even though we may not all be bloodhounds, we are still pretty potent when it comes to olfaction. Until this week scientists had been working on the assumption that humans can distinguish between around 10,000 odours, although this was based on some fairly loose calculations made back in the 1920s when they didn't even really understand what olfaction was. 10,000 sounds pretty good to me, though it was revealed this week that we actually can discriminate between far more odours than that. And when I say 'far more' I mean a hell of a lot more - more like 1 trillion individual odours! </div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The study that showed this is elegant in its simplicity and, helpfully, doesn't require any biological knowledge to understand. The researchers who undertook this just took 128 odorants and combined them in unique combinations of 10, 20, or 30 different compounds before getting people to try to distinguish them by smell - pretty simple! They found that people were generally pretty good at distinguishing odours (note that we're defining 'odour' here as the combination of individual odorant molecules in a given smell) if they varied by more than 50% of their basal composition. If you crunch the numbers for this, the total number of possible combinations of these 128 odorants that could be distinguished is an average of 1 trillion. This varies significantly between individuals, though, as their top smeller had the potential to distinguish a whopping thousand trillion odours, whilst the poor soul with the weakest sense of smell could only distinguish a pitiful 80 million. Nonetheless, we're well above the 10,000 that we were giving ourselves before this study!</div>
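<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
If you want a feel for just how big the space of possible mixtures is, the raw counting is easy to do yourself. A word of caution: the trillion figure in the paper comes from a more careful analysis of that 50% discrimination threshold, whereas the little script below (my own back-of-the-envelope illustration, not taken from the paper) only counts how many distinct mixtures can be built in the first place, which is a far larger number.</div>
<pre>
# Count the distinct mixtures of 10, 20 or 30 components that can be drawn
# from a palette of 128 odorants. These raw counts vastly exceed the roughly
# 1 trillion *distinguishable* odours estimated by the study, because
# mixtures sharing more than about half their components smell alike.
from math import comb  # Python 3.8+

for k in (10, 20, 30):
    print(f"mixtures of {k} components: {float(comb(128, k)):.2e}")
</pre>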
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
This study might seem like fluff science at first glance, but it actually has important implications for our understanding of how sensory signals are combined and distinguished as our brains are constantly bombarded by signals from our environment. Our ability to discriminate between such a broad range of possible odours (which, I would say, is probably broader than the actual range that commonly exists in nature) is probably the product of several levels of complexity within the olfactory system. There is the ligand promiscuity of the olfactory receptors that I mentioned earlier. This can then be combined with the variable expression of different olfactory receptors on individual olfactory neurons so that different neurons become activated to varying extents by different odours. Then there's how the brain processes the complex array of neuronal signals being transmitted to it from the nose, adding yet another layer of complexity to the system. Once all of this is considered, it seems naive to think that we could only distinguish 10,000 odours; we are far too complex machines for that!</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Just another example of how the human machine is constantly surprising us with its beautiful complexity, and another reason to stop and smell the flowers!<br />
<br />
The next post in this series can be found <a href="http://trenchesofdiscovery.blogspot.co.uk/2014/05/the-human-machine-replacing-damaged.html">here</a>.<br />
<br />
<b>Reference</b></div>
<br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Science+%28New+York%2C+N.Y.%29&rft_id=info%3Apmid%2F24653035&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Humans+can+discriminate+more+than+1+trillion+olfactory+stimuli.&rft.issn=0036-8075&rft.date=2014&rft.volume=343&rft.issue=6177&rft.spage=1370&rft.epage=2&rft.artnum=&rft.au=Bushdid+C&rft.au=Magnasco+MO&rft.au=Vosshall+LB&rft.au=Keller+A&rfe_dat=bpr3.included=1;bpr3.tags=Research+%2F+Scholarship">Bushdid C, Magnasco MO, Vosshall LB, & Keller A (2014). Humans can discriminate more than 1 trillion olfactory stimuli. <span style="font-style: italic;">Science (New York, N.Y.), 343</span> (6177), 1370-2 PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/24653035" rev="review">24653035</a></span>James Felcehttp://www.blogger.com/profile/14031758835739415241noreply@blogger.com0tag:blogger.com,1999:blog-1513704378254120283.post-28475690435472527552014-03-19T06:14:00.002-07:002014-05-28T05:31:00.146-07:00Preliminary: Cosmological impacts of BICEP2 + PlanckIf anybody is interested, I'm currently <a href="https://twitter.com/just_shaun/status/446247797713408000" target="_blank">drip</a>-<a href="https://twitter.com/just_shaun/status/446264089455509505" target="_blank">tweeting</a> some of the constraints one can obtain from considering Planck and BICEP2 data together. BICEP2 did do a bit of this in their paper, but they only considered specific scenarios. They were also often a bit coy about the implications of the combined analysis. I'll try not to be ;-).<br />
<br />
The results should only be seen as indicative, these aren't published, and never will be in this form (<a href="http://www.mla.org/style/handbook_faq/cite_a_tweet" target="_blank">maybe they could be cited if used in a paper though!</a>). They were provided to me by Sussex Uni's resident obtaining-cosmology-from-the-CMB expert <a href="http://cosmologist.info/" target="_blank">Antony Lewis</a>, after a hurried Tuesday adding the BICEP2 data to the Planck cosmology pipeline (i.e. <a href="http://cosmologist.info/cosmomc/" target="_blank">CosmoMC</a>) and may contain mistakes.<br />
<br />
<a href="http://cosmocoffee.info/viewtopic.php?t=2302" target="_blank">Antony has himself also made some of these results public at the Cosmo Coffee website.</a><br />
<br />
Questions here, or on Twitter are most welcome. If you want to see specific cosmologies, I'll do my best to show them (if I have them), or ask Antony very nicely to provide them (no guarantees, of course).<br />
<br />
You can find my Twitter account here: <a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a>. Feel free to share!Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com2tag:blogger.com,1999:blog-1513704378254120283.post-90653411923840774242014-03-14T09:41:00.004-07:002014-03-19T06:16:09.286-07:00"A major discovery", BICEP2 and B-modes<span style="color: blue;"><i>[Added note (on Monday): Well, wow, the rumours were, if anything, understated. I'm happy to go on record that, unless a mistake has been made, this is the greatest </i>scientific<i> discovery of the 21st century, and may remain so even once the century is over. I (and others) will write many more detailed summaries of what was observed over time, but BICEP2 </i>have<i> announced a discovery of primordial B-modes, which is extremely strong evidence of cosmological inflation (if it turns out to be scale invariant, inflation is as true as most accepted science). Matt Strassler has <a href="http://profmattstrassler.com/2014/03/17/bicep2-new-evidence-of-cosmic-inflation/" target="_blank">a good hastily written summary here</a>. As does Liam McAllister <a href="http://motls.blogspot.co.uk/2014/03/bicep2-primordial-gravitational-waves.html" target="_blank">at Lubos Motl's blog, here</a>. Of course, this is just one experiment and maybe they've made a mistake, but the results look very robust at the moment.</i></span><br />
<i><span style="color: blue;"><br /></span></i>
<i><span style="color: blue;">Congratulations on being alive today readers! We just learned about how particles work at energies \(10^{13}\) times greater than even the LHC can probe, and about what was happening at a time much, much less than a nanosecond after the beginning of the Big Bang.]</span></i><br />
<i><span style="color: red;"><br /></span></i>
<i><span style="color: red;">............................................</span></i><br />
<i><span style="color: red;"><br /></span></i>
<i><span style="color: red;">[Added note (on Sunday): It seems highly probable that these rumours are essentially true. Although the precise details of the results aren't yet public, the BICEP2 PI, John Kovac, has sent a widely distributed email with the following information: Data and scientific papers with results from the BICEP2 experiment will go public and <a href="http://bicepkeck.org/" target="_blank">be viewable here</a> at 2:45pm GMT on Monday. At the same time a technical webcast will begin <a href="http://www.cfa.harvard.edu/news/news_conferences.html" target="_blank">at this address</a>.</span></i><br />
<i><span style="color: red;"><br /></span></i>
<i><span style="color: red;">It's going to be an exciting day!]</span></i><br />
<br />
.......................................................<br />
<br />
The cosmology rumour mill exploded today. <a href="http://www.spaceref.com/news/viewpr.html?pid=42751&fb_action_ids=10201901890250543&fb_action_types=og.recommends&fb_source=other_multiline&action_object_map=%7B%2210201901890250543%22%3A597015947052729%7D&action_type_map=%7B%2210201901890250543%22%3A%22og.recommends%22%7D&action_ref_map=%5B%5D" target="_blank">Harvard Astrophysics have issued a press release stating that, on Monday, they will announce a "major discovery"</a>.<br />
<br />
This is the only hard evidence of <i>anything </i>interesting on the way, and it could be an announcement of anything that fits under the label of "astrophysics". This is important to keep in mind. However, for one reason or another (that is hard to nail down), cosmologists are suggesting that it is going to be about cosmology. The speculation is that it will be about the <a href="http://www.cfa.harvard.edu/CMB/bicep2/" target="_blank">BICEP2</a> experiment, which has been measuring the polarisation in the CMB. More specifically, the speculation is that BICEP2 have seen primordial <a href="http://background.uchicago.edu/~whu/polar/webversion/node8.html" target="_blank">"B-mode" polarisation</a>.<br />
<br />
If this speculation is true, this would be a result immense in its significance.<br />
<br />
<i>Primordial </i>B-modes would be a smoking gun signal of <a href="http://en.wikipedia.org/wiki/Gravitational_wave" target="_blank">primordial gravitational waves</a>. This alone makes such a discovery important. Gravitational waves have not yet been observed, but are a prediction from general relativity. Therefore, such a discovery would be on the same level of significance as the discovery of the Higgs particle. We were almost certain they would be there, but it is good to finally see them.<br />
<br />
However, the potential significance of such a result goes further because these <i>primordial</i> gravitational waves would need a source. The theory of cosmological inflation would/could be such a source. Inflation is a compelling theory, not without some problems, for how the universe evolved in its very earliest stages. If it occurred when the universe had a large enough temperature, it would generate primordial gravitational waves large enough to tickle the CMB and make these B-modes visible in the polarisation. As yet, inflation has passed quite a few observational tests, but nothing has been seen that could be described as smoking gun evidence. A spectrum of primordial gravitational waves would very nearly be such a smoking gun. If the spectrum were scale invariant (i.e. if the gravitational waves had the same amplitude on all distance scales), that would be a smoking gun for inflation, and accolades, Nobel Prizes, etc., would flow accordingly.<br />
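<br />
For the technically minded, the standard way of describing this (textbook conventions, not details taken from the rumours) is through a primordial tensor power spectrum<br />
<br />
\[\mathcal{P}_t(k) = A_t \left(\frac{k}{k_*}\right)^{n_t},\]<br />
<br />
where \(k\) is the wavenumber, \(k_*\) a reference scale, and scale invariance corresponds to \(n_t = 0\). The overall amplitude is usually quoted as the tensor-to-scalar ratio \(r = A_t/A_s\), and the simplest single-field slow-roll models of inflation predict the consistency relation \(n_t \approx -r/8\), which a measured spectrum could in principle test.<br />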
<br />
<i>All </i>of this is just speculation, but some of it does seem to be coming from reputable sources. And some of my colleagues have been talking about tip-offs from people who wish to remain anonymous, so I figured I'd collect all the speculation I know of here in a post (let me know if I've missed anything):<br />
<br />
<ul>
<li><a href="http://excursionset.com/blog/2014/3/15/the-smoking-gnu" target="_blank">Richard Easther, on the rumour and its implications</a></li>
<li><a href="http://cosmobruce.wordpress.com/2014/03/14/108/" target="_blank">Bruce Bassett, on the probability that the rumours are true</a></li>
<li><a href="http://www.theguardian.com/science/2014/mar/14/gravitational-waves-big-bang-universe-bicep?CMP=twt_gu" target="_blank">The Guardian (the first major news source to pick up on this) with comments from various prominent cosmologists</a></li>
<li><a href="http://resonaances.blogspot.co.uk/2014/03/plot-for-weekend-flexing-biceps.html" target="_blank">Jester/Resonaances on the context (i.e. earlier constraints on primordial B modes)</a></li>
<li><a href="http://motls.blogspot.co.uk/2014/03/rumor-inflation-related-primordial-b.html#more" target="_blank">Lubos Motl, amongst other things, explains what B mode polarisation actually is</a></li>
<li><a href="http://blog.vixra.org/2014/03/15/primordial-gravitational-waves/" target="_blank">Philip Gibbs, at viXra log</a></li>
<li><a href="http://telescoper.wordpress.com/2014/03/15/some-b-mode-background/" target="_blank">Peter Coles, amongst other things, on why gravitational waves mean there should be B-modes in the CMB polarisation</a></li>
<li><a href="http://blankonthemap.blogspot.co.uk/2014/03/b-modes-rumours-and-inflation.html" target="_blank">Sesh Nadathur on why, amongst other things, the rumoured measurement would appear to be in tension with results from Planck and WMAP</a></li>
</ul>
<div>
<br /></div>
<ul>
<li><a href="http://www.preposterousuniverse.com/blog/2014/03/16/gravitational-waves-in-the-cosmic-microwave-background/" target="_blank">Sean Carroll has written a very thorough overview of the implications for cosmology (if the rumours are true).</a></li>
</ul>
<div>
<br />
The PI of BICEP2,<a href="http://astronomy.fas.harvard.edu/people/john-m-kovac" target="_blank"> John Kovac</a>, gave a talk at <a href="http://www.ctc.cam.ac.uk/activities/cosmo2013/" target="_blank">the annual COSMO conference last year</a> that had some pretty ambitious claims for how sensitive BICEP2 and similar experiments were going to be, so... well... we'll know on Monday. It should also be noted that, although the existence of these gravitational waves is a prediction of inflation, their amplitude is a free parameter and an amplitude this big is potentially a little surprising (for me, lower temperature inflation models just seem more compelling, others might disagree).<br />
<br />
Twitter: <a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a><br />
<br />
<i><a href="http://www.sms.cam.ac.uk/media/1549387" target="_blank">[Edit: The video of John Kovac's talk can be found here]</a></i></div>
Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com5tag:blogger.com,1999:blog-1513704378254120283.post-48172364260749890082014-03-07T05:37:00.000-08:002014-03-27T16:49:59.376-07:00Quantum mechanics and the Planck-spectrum<i>[The following is a guest post from <a href="http://www.ita.uni-heidelberg.de/~spirou/" target="_blank">Bjoern Malte Schaefer</a>. Bjoern is one of the curators of the <a href="http://cosmologyquestionoftheweek.blogspot.co.uk/" target="_blank">Cosmology Question of the Week</a> blog, which is worth checking out. This post is a historical look at some of the early parts in the history of quantum mechanics, in particular, the black-body spectrum. Questions are welcome and I'll make sure he sees any of them. Image captions (and hyper-links, in this case) are, as usual, by me, because guest posters don't ever seem to provide their own.]</i><br />
<strong><br /></strong>
<strong>Two unusual systems</strong><br />
<br />
<a href="http://en.wikipedia.org/wiki/Quantum_mechanics" target="_blank">Quantum mechanics</a> surprises with the statement that the <a href="http://en.wikipedia.org/wiki/Microscopic_scale" target="_blank">microscopic world</a> works very differently from the <a href="http://en.wikipedia.org/wiki/Macroscopic_scale" target="_blank">macroscopic world</a>. Therefore, it took a while until quantum mechanics was formally established as the theory of the microworld. In particular, despite the fact that two of the natural systems on which theories of quantum mechanics could initially be tested were very simple, even from the point of view of the physicists of the time, one needed to introduce a number of novel concepts for their description. These two physical systems were the <a href="http://en.wikipedia.org/wiki/Hydrogen_atom" target="_blank">hydrogen atom</a> and the <a href="http://en.wikipedia.org/wiki/Emission_spectrum" target="_blank">spectrum</a> of a <a href="http://en.wikipedia.org/wiki/Thermal_radiation" target="_blank">thermal radiation source</a>. The hydrogen atom was the lightest of all atoms with the <a href="http://en.wikipedia.org/wiki/Balmer%27s_formula" target="_blank">most simply structured spectrum</a>. It exhibited many regularities involving rational numbers relating its discrete energy levels. It could only be <a href="http://en.wikipedia.org/wiki/Ionisation" target="_blank">ionised</a> once implying that it had only a single <a href="http://en.wikipedia.org/wiki/Electron" target="_blank">electron</a> and from these reasons it was the obvious test case for any theory of mechanics in the quantum regime. <a href="http://en.wikipedia.org/wiki/Werner_Heisenberg" target="_blank">Werner Heisenberg</a> was the first to be successful in solving this quantum mechanical analogue of <a href="http://en.wikipedia.org/wiki/Kepler_problem" target="_blank">the Kepler-problem</a>, i.e. the equation of motion of a charge moving in a <a href="http://en.wikipedia.org/wiki/Coulomb%27s_law" target="_blank">Coulomb-potential</a>, paving the way for a systematic understanding of <a href="http://en.wikipedia.org/wiki/Atomic_spectrum" target="_blank">atomic spectra</a>, their fine structure, the theory of chemical bonds, interactions of atoms with fields and ultimately <a href="http://en.wikipedia.org/wiki/Quantum_electrodynamics" target="_blank">quantum electrodynamics</a>.<br />
<br />
The <a href="http://en.wikipedia.org/wiki/Planck_spectrum" target="_blank">Planck-spectrum</a> was equally puzzling: It is the distribution of photon energies emitted from a body at thermal equilibrium and does not, in particular, require any further specification of the body apart that it should be black, meaning ideally emitting and absorbing radiation irrespective of wave length: From this point of view it is really the simplest macroscopic body one could imagine because its internal structure does not matter. In contrast to the hydrogen atom it is described with a continuous spectrum. In fact, there are at least two beautiful examples of Planck-spectra in Nature: the thermal spectrum of the Sun and <a href="http://trenchesofdiscovery.blogspot.co.uk/2011/10/smoking-cmb-evidence-of-big-bang.html" target="_blank">the cosmic microwave background</a>. The solution to the Planck-spectrum involves quantum mechanics, <a href="http://en.wikipedia.org/wiki/Particle_statistics#Quantum_statistics" target="_blank">quantum statistics</a> and <a href="http://en.wikipedia.org/wiki/Special_relativity" target="_blank">relativity</a>, and unites three of the four the great constants of Nature: the <a href="http://en.wikipedia.org/wiki/Planck%27s_constant" target="_blank">Planck-quantum h</a>, the <a href="http://en.wikipedia.org/wiki/Boltzmann_constant" target="_blank">Boltzmann-constant \(k_B</a>\) and <a href="http://en.wikipedia.org/wiki/The_speed_of_light_in_vacuum" target="_blank">the speed of light c</a>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPH6EtMau6yuuDiH_80hS_Mkevx0BZJM3OYn6-T-OOvNDAvdpMR56E8CTQGtM5k8pDTJgs1J9FvVhphWHmSsX-fPlej3AU_vXzLMIpKGoAFXg3eqTV1uoS1_3q9caMYbZ5gzhdqaMiRDc/s1600/EffectiveTemperature_300dpi_e.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPH6EtMau6yuuDiH_80hS_Mkevx0BZJM3OYn6-T-OOvNDAvdpMR56E8CTQGtM5k8pDTJgs1J9FvVhphWHmSsX-fPlej3AU_vXzLMIpKGoAFXg3eqTV1uoS1_3q9caMYbZ5gzhdqaMiRDc/s1600/EffectiveTemperature_300dpi_e.png" height="234" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The spectrum (basically intensity against wavelength or frequency) of the light from the sun (in yellow) and a blackbody with the same temperature (grey). I'm actually surprised by how similar they are.</td></tr>
</tbody></table>
<br />
<br />
<strong>Limits of the Planck-spectrum</strong><br />
<br />
Although criticised at the time by many physicists as merely phenomenological, the high-energy part of the Planck-spectrum is relatively straightforward to understand, as had been realised by <a href="http://en.wikipedia.org/wiki/Wilhelm_Wien" target="_blank">Wilhelm Wien</a>: Starting with the result that photons as relativistic particles carry energies proportional to their frequency as well as momenta inversely proportional to their wavelength (the constant of proportionality in both cases being the Planck-constant h), imposing <a href="http://en.wikipedia.org/wiki/Isotropy" target="_blank">isotropy</a> of the photon momenta and assuming a thermal <a href="http://en.wikipedia.org/wiki/Maxwell-Boltzmann_distribution" target="_blank">distribution of energies according to Boltzmann</a> leads directly to <a href="http://en.wikipedia.org/wiki/Wien%27s_distribution_law" target="_blank">Wien's result</a>, which is an excellent fit at high photon energies but shows discrepancies at low photon energies, implying that at low photon energies the system exhibits quantum behaviour of some type.<br />
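<br />
In formulas, Wien's law for the spectrum reads \(B_\nu(T) \approx \frac{2h\nu^3}{c^2}\exp(-h\nu/k_B T)\), valid for \(h\nu \gg k_B T\): the Boltzmann factor, with the photon energy \(h\nu\) playing the role of the particle energy, is clearly visible.<br />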
<br />
<a name='more'></a><br />
<a href="http://en.wikipedia.org/wiki/Wien%27s_displacement_law" target="_blank">Wien had a second result</a>: It was known experimentally that the location of the maximum of the spectrum in terms of photon energy is proportional to the temperature of the radiating body. While intuitively this makes a lot of sense (because hotter bodies emit more energetic radiation) and followed from Wien's calculation, Wien could not quite make sense of the numerical pre-factor which of course differs because of the unknown functional shape of the spectrum at low energies. And in fact <a href="http://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law" target="_blank">Wien had a third result</a>: The total energy radiated depends on the temperature taken to the fourth power. This result follows as well from Wien's formula for the spectrum, and one could guess the natural constants involved by dimensional analysis, but the prefactor is again off by a little, underlining that there is something fundamental amiss at low energies.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgcyETqxAYKy1CehorpSWy-24Zg3XLIPWFvIq9iREzuPArYV7x-Nl10hbyHHqAx_6O3vYsQpq5ud_dHYSITMo9o7fTDKgOamu62vIT-NAnNCzO5o3aYq30_tGbqSKlEmqJfvCKcH5O7fE/s1600/wien_postcard.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgcyETqxAYKy1CehorpSWy-24Zg3XLIPWFvIq9iREzuPArYV7x-Nl10hbyHHqAx_6O3vYsQpq5ud_dHYSITMo9o7fTDKgOamu62vIT-NAnNCzO5o3aYq30_tGbqSKlEmqJfvCKcH5O7fE/s1600/wien_postcard.jpg" height="320" width="226" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Wilhelm Wien looking like a pretty typical turn of the 20th century man posing for a photo.</td></tr>
</tbody></table>
<br />
<strong>Statistics at low energies</strong><br />
<br />
While the solution to the hydrogen atom lies in the <a href="http://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation#Hydrogen_atom" target="_blank">correct equation of motion for the electron</a>, with all the formal overhead needed in quantum mechanics, the solution to the Planck-spectrum needed the insight that quantum mechanical particles require a different type of statistics, one which asymptotically recovers classical statistics at high energies. Clearly, in the case of non-interacting photons (electrodynamics is a <a href="http://en.wikipedia.org/wiki/Linear_system" target="_blank">linear theory</a>!), an equation of motion cannot provide the solution.<br />
<br />
A textbook would now state that photons as quantum mechanical particles are <a href="http://en.wikipedia.org/wiki/Identical_particles" target="_blank">identical and indistinguishable</a>, which to me was very confusing when I first read it, but the two adjectives refer to very specific properties of photons. Certainly they are identical in the sense that they're excitations of the electromagnetic field: They share physical properties such as <a href="http://en.wikipedia.org/wiki/Polarisation_(waves)" target="_blank">polarisation</a>, energy and momentum and always travel at the same speed. Indistinguishable means something different: You can *not* follow the trajectory of a quantum mechanical particle in the same way as you could with a classical particle. All one can do is localise particles at a given instant, and localise them again at a later time, but there is no way of telling which particle from the first localisation has moved to which position at the second localisation, and in fact there is interference between both paths.<br />
<br />
This effect is relevant if the typical particle separation is small compared to a length scale set by quantum mechanics, namely the wavelength of a photon whose energy corresponds to the thermal energy. Highly energetic photons are rare and separated by large distances, so quantum mechanical interference does not play a role and the photons behave classically, in contrast to low-energy photons, of which there are plenty, and which are separated by small distances. For these tightly packed photons quantum interference matters and indistinguishability becomes important.<br />
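<br />
This quantum length scale is simply \(\lambda_T \sim hc/(k_B T)\), the wavelength of a photon carrying the thermal energy \(k_B T\); for the surface of the Sun, at roughly 5800 K, it comes out at a couple of microns.<br />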
<br />
Constructing statistics from indistinguishable particles is now <a href="http://en.wikipedia.org/wiki/Identical_particles#Statistical_effects_of_indistinguishability" target="_blank">a bit like drawing from an urn with replacement,</a> while considering all draws of a certain number of photons as equivalent. This alteration of statistics is relevant at energies small compared to the thermal energy of the system, while at high energies the system behaves purely classically, giving rise to Wien's results.<br />
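<br />
The outcome of this counting is the Bose-Einstein occupation number \[ n(\nu) = \frac{1}{\exp(h\nu/k_B T) - 1}, \] which reduces to the classical Boltzmann factor \(\exp(-h\nu/k_B T)\) for \(h\nu \gg k_B T\) (Wien's regime) and becomes much larger than it for \(h\nu \ll k_B T\), where the photons are tightly packed.<br />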
<br />
Additionally, the new statistics yields explanations for very puzzling numbers that Wien could not make sense of: They appear as values of the <a href="http://en.wikipedia.org/wiki/Riemann_zeta_function" target="_blank">Riemann zeta-function</a> (<a href="http://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF" target="_blank">a very enigmatic function with fascinating properties</a>), which are, in contrast to the rational numbers occurring in the hydrogen problem, irrational.<br />
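<br />
For example, integrating the spectrum to obtain the total energy density produces \(\int_0^\infty x^3\,\mathrm{d}x/(\exp(x)-1) = 6\,\zeta(4) = \pi^4/15\), while the total photon number involves \(2\,\zeta(3)\); these are precisely the irrational numbers hiding in the prefactors that puzzled Wien.<br />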
<br />
<strong>Pauli's principle and the Hanbury Brown-Twiss-experiment</strong><br />
<br />
In what way exactly do photons now interfere? Statistically, photons prefer to be "bunched", which is a consequence of a new symmetry discovered by <a href="http://en.wikipedia.org/wiki/Wolfgang_Pauli" target="_blank">Wolfgang Pauli</a> and *not* of their dynamics (after all, they are still non-interacting as electrodynamics is linear). Quantum mechanical systems react in a specific way if one exchanges particles that make up the system, which in fact is very relevant in between successive localisation steps as discussed before. Nature is quite capricious when it comes to this point, as noticed by Pauli, as there are only two types of particles. The first family, called <a href="http://en.wikipedia.org/wiki/Boson" target="_blank">bosons</a>, exhibits <a href="http://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_statistics" target="_blank">constructive interference between realisations with interchanged particles</a>. Photons belong to this family and their bunching is explained by the fact that it is overall more likely, due to constructive interference, to find them in identical states. (The second family is called <a href="http://en.wikipedia.org/wiki/Fermion" target="_blank">fermions</a>, which show <a href="http://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics" target="_blank">destructive interference between realisations with interchanged particles</a>, and one example of this group is the electron.) It is worth noting that Max Planck himself solved the problem with purely thermodynamical stability arguments, without providing a statistical description, which is due to <a href="http://en.wikipedia.org/wiki/Satyendra_Nath_Bose" target="_blank">Satyendra Nath Bose</a> and <a href="http://www.lbi.org/wp-content/uploads/2012/09/Einstein_portrait-e1346962517640.jpg" target="_blank">Albert Einstein</a>. <br />
<br />
The bunching of photons has in fact been observed in an ingenious experiment by <a href="http://en.wikipedia.org/wiki/Hanbury_Brown_and_Twiss_effect" target="_blank">Robert Hanbury Brown and Richard Q. Twiss</a>, who showed that after observing a photon from a thermal source it is statistically more likely to observe a second photon with similar properties. This not only means that there can be arbitrarily many photons in a single statistical state, but further that photons like being in the same states due to constructive interference if interchanged with their partners (and not because of interaction!).<br />
<br />
<strong>Summary</strong><br />
<strong><br /></strong>
The Planck-spectrum and the hydrogen atom were central to the formulation of quantum mechanics. The solution to the Planck-spectrum involved a new type of statistics which was required by the indistinguishability of quantum mechanical particles and Pauli's exchange symmetry. And it's a nice example that quantum mechanics can become relevant in unusual places: Imagine, about 40% of the energy of the Sun is carried by photons from the quantum mechanical part of the spectrum, which is a nice thought on a bright sunny day!Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com3tag:blogger.com,1999:blog-1513704378254120283.post-12069977463237350822014-02-03T06:15:00.000-08:002014-02-03T06:18:20.229-08:00The human machine: picoscale engineering<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqKS-NCSH17z74XjnX1L07-eSVuyZawwzQ-39PVOoHtKmEBXep-5waZySXCkhAjTDyG3UCS1hTnjJTd6JJeY4MR7jm66k6YFDZypEL7gMjemlWoja0OQWRBA3NTF1mVr12_mjOQLg3ErA/s1600/4410_signs.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqKS-NCSH17z74XjnX1L07-eSVuyZawwzQ-39PVOoHtKmEBXep-5waZySXCkhAjTDyG3UCS1hTnjJTd6JJeY4MR7jm66k6YFDZypEL7gMjemlWoja0OQWRBA3NTF1mVr12_mjOQLg3ErA/s1600/4410_signs.jpg" height="107" width="320" /></a></div>
<br />
<br />
The previous post in this series can be found <a href="http://trenchesofdiscovery.blogspot.co.uk/2013/10/the-human-machine-non-standard.html">here</a>.<br />
<br />
<div style="text-align: justify;">
Over the course of my 'human machine' series of posts I've tried to convey the intricacy and beauty of our biological engineering, and demonstrate that we are incredibly well-engineered machines whose complexity and originality go all the way down to the atomic level. In this week's post, I will be exemplifying this with one of the best cases that I can think of: how we transport oxygen around our bodies. I feel that this is a great story to tell because it is one that most people might think they know well, but that is actually far more complex and subtle than it may appear, and that demonstrates how our lives are highly dependent on perfectly evolved processes working on the subatomic scale.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<b>"It will have blood, they say."</b></div>
<div style="text-align: justify;">
<b><br /></b></div>
<div style="text-align: justify;">
I'm sure that anyone reading this blog is fully aware that we need oxygen to survive (although if you want a more detailed explanation of exactly why then I direct your attention to a previous post of mine available <a href="http://trenchesofdiscovery.blogspot.co.uk/2012/05/human-machine-biological-batteries-and.html">here</a>), and anyone remembering their primary school biology will know that oxygen is transported around the body by the circulatory system, <i>i.e.</i> the blood. Most of the cells within your blood are the famous <i>red</i> blood cells (to distinguish them from the immune cells - the <i>white</i> blood cells), which are, unsurprisingly, responsible for blood's distinctive colour - earning them the respect of horror movie aficionados everywhere. You have roughly 20-30 trillion red blood cells in you as you read this, each of which is about 7 microns (<i>i.e. </i>7 millionths of a metre) in diameter. They shoot around your body, taking roughly 20 seconds to make one circulation, and have just one job: to take oxygen from the lungs (where there's lots of it) to the tissues (where there's not). So specific are they to this job that they don't even bother having a <a href="http://en.wikipedia.org/wiki/Cell_nucleus">nucleus</a>, thereby removing all possibility of them doing anything else. </div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj17LhwJYcX8ttPI2TDbux0rIVdi3QR65eSbkxoVGXVgTcgiaRLp527Mj8kSlfWE-dt32iFjuyY9XMmT9VD7FQ0IK4t4h0r6BHtHagpUKxPL3R4vuUPCVnxY7bddE9ZIL3MGSHXTaLTmuk/s1600/red-blood-cells1000x1000.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj17LhwJYcX8ttPI2TDbux0rIVdi3QR65eSbkxoVGXVgTcgiaRLp527Mj8kSlfWE-dt32iFjuyY9XMmT9VD7FQ0IK4t4h0r6BHtHagpUKxPL3R4vuUPCVnxY7bddE9ZIL3MGSHXTaLTmuk/s1600/red-blood-cells1000x1000.jpg" height="320" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span class="Apple-style-span" style="color: #444444;">Human red blood cells - you make 2 million every second!</span></td></tr>
</tbody></table>
<div style="text-align: justify;">
<br />
<a name='more'></a><br /></div>
<div style="text-align: justify;">
What allows red blood cells to do their job so wonderfully is the same thing that gives them their bright red colour: haemoglobin. Haemoglobin is one of the most abundant proteins in the human body - each red blood cell is packed with around 250 million copies of it - meaning you have about 7.5 billion trillion copies in your whole body! The role of haemoglobin is fairly simple: it increases the solubility of oxygen compared to just letting the oxygen dissolve in the water that makes up most of your blood. It does this using four distinct oxygen-binding centres within its structure - each called a <a href="http://en.wikipedia.org/wiki/Haem">haem</a> (or 'heme' if you're from the colonies) group. Each haem centre contains a captured atom of iron, which is capable of forming interactions with molecular oxygen, thereby allowing four molecules of oxygen to be captured by one haemoglobin. So far, so good.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<b>Problems with strength</b></div>
<div style="text-align: justify;">
<b><br /></b></div>
<div style="text-align: justify;">
There is, however, a fundamental problem with a haemoglobin molecule that simply binds oxygen with a single, unchanging strength. If haem binds oxygen really well then it will be great at capturing it in the lungs where there's plenty of it, but then will still hang onto it in the tissues where it needs to release it. Conversely, if its interaction with oxygen is weak enough to allow it to let go in the tissues then it won't be terribly efficient at picking it up in the lungs. Ideally, you need a haemoglobin that binds oxygen really strongly in the lungs, but then changes to be much weaker in the tissues so it can dump it out where it's needed.<br />
<br />
To a chemist this may sound a bit fanciful - the interaction strengths of one species with another are determined by the myriad factors that decide their charge, electron orbital structure, size, <i>etc.</i>, and they can't just be changed to suit our needs. This is indeed true if you just take free haem without the rest of the haemoglobin protein and watch it bind to oxygen - the strength and affinity of the interaction is consistent no matter how much oxygen is around (under standard conditions, of course).<br />
<br />
In haemoglobin, however, an odd effect can be seen as the concentration of oxygen changes: the more oxygen that haemoglobin binds, the stronger this interaction becomes. Similarly, as it starts to lose its oxygen at low oxygen concentrations, it does so faster and faster as oxygen levels drop. This process is vital for our functioning as large animals that consume a lot of oxygen, as without it we either wouldn't be able to get enough oxygen out of the air, or wouldn't be able to release it into our tissues. Either way, we'd be screwed.<br />
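<br />
To put a rough number on this behaviour: biochemists often summarise an oxygen-binding curve with the empirical Hill equation, \[ Y = \frac{(pO_2)^n}{(pO_2)^n + (P_{50})^n}, \] where \(Y\) is the fraction of haem sites that are occupied, \(pO_2\) is the oxygen pressure, \(P_{50}\) is the pressure at which half the sites are occupied, and the Hill coefficient \(n\) measures the cooperativity. A protein whose sites bind oxygen independently has \(n = 1\); for haemoglobin \(n\) comes out at roughly 2.8-3, which is the quantitative signature of the behaviour just described.<br />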
<br />
So, how does it work? Well, it all comes down to the fact that each haemoglobin has four oxygen-binding haem groups rather than just the one. Each haem is attached to a single protein chain (either alpha or beta); these chains come together in pairs containing one alpha and one beta, and then two such pairs come together to make the whole haemoglobin. When oxygen binds to one haem group, it causes all the other haems in the complex to slightly increase their affinity for oxygen. So, the more oxygen molecules that get bound, the greater the affinity of the remaining oxygen-free haems will be for interaction with oxygen. This is a process called <a href="http://en.wikipedia.org/wiki/Cooperativity">positive cooperativity</a>, and is a feature common to many multi-domain proteins within your body.<br />
<br />
<b>Issues of shape and size</b></div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
What I like in particular about the cooperativity of haemoglobin is that it directly relies on the principle of quantum mechanics describing how <a href="http://en.wikipedia.org/wiki/Electron">electrons</a> have wave-like behaviour rather than behaving purely as particles. Clearly, all biology is based on this since it's a fundamental property of the universe that governs all chemical reactions, bond stabilities <i>etc.</i>, but this is a rare instance in which Newtonian physics could not reasonably approximate the outcome of a biochemical process. This is because the cooperativity of haemoglobin depends on the ability of the iron ions within the haem groups to change their shape upon binding oxygen. Specifically, they change the shape of their electron orbitals. If you're not familiar with the behaviour of electrons, they exist in discrete energy levels (known as <a href="http://en.wikipedia.org/wiki/Atomic_orbital">orbitals</a>) surrounding the nucleus of atoms, and their distribution within these orbitals depends on exactly how many there are present in the atom. Different orbitals have different shapes, and so the atom essentially can have different sizes dependent on how many electrons it has at any given time.<br />
<br />
In haemoglobin, the irons in haem are in what's called the 'high spin' state when they're not bound to oxygen. In this state, the electrons arrange themselves in such a way that the radius of the iron is 92 picometres (<i>i.e. </i>92 trillionths of a metre), with the shape shown in the figure below. When an oxygen comes along and binds to it, however, iron's electrons reorganise into the 'low spin' state, which has a far more streamlined radius of just 75 picometres.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHSPyrPwvqyDJwDaKc19caE8oWNS8rpjotRiLoq0KYjdNHw5HrVFc-9-xGO_7f9khMH-mKfrm_PhlLmz9T7vGXTeheIoogM3FcGDe7pFkWpQ-VE786QSrAclG9abskN9edqzoouQIfr9o/s1600/TOC28_1.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHSPyrPwvqyDJwDaKc19caE8oWNS8rpjotRiLoq0KYjdNHw5HrVFc-9-xGO_7f9khMH-mKfrm_PhlLmz9T7vGXTeheIoogM3FcGDe7pFkWpQ-VE786QSrAclG9abskN9edqzoouQIfr9o/s1600/TOC28_1.gif" height="163" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #444444;">The high (left) and low (right) spin states of iron in haemoglobin. The tiny change of 17 trillionths of a metre is responsible for your being able to breathe.</span></td></tr>
</tbody></table>
The change in the shape of the iron might not seem particularly significant on its own, but it has profound implications for the rest of the haemoglobin molecule. This is because the iron within the haem group sits inside a system of flat rings that form four bonds with it to hold it in place. Iron is capable of forming six bonds at the same time, so it also forms one with a specific amino acid within haemoglobin (called the proximal <a href="http://en.wikipedia.org/wiki/Histidine">histidine</a>), and then the sixth is left free for oxygen. The problem is that in the high spin (<i>i.e.</i> without oxygen) state the iron is too wide to fit snugly into the centre of the ring, and instead has to sit 60 picometres outside of the ring. This is not favourable because it can't form the strongest possible bonds with the surrounding ring, and it also forces the proximal histidine to be pulled slightly out of its optimal position. It's a bit like there's a spring pulling on the iron, trying to force it into the centre of the ring, but it can't because the iron is too big.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7HdEtdxV8g9VaGVXqvZ-itJ-yDWhZJfyiJPmxsnZzIu7nd7tXgTEP2O4E7ug8jiVtIuuZ_cULHA-r0v4z1Y2S6AeCyajgku6OW_yc4lAn8yFzlG_MOdwmCybFwd2b_G0KCwzHZTHnAlg/s1600/200px-Heme_b.svg.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7HdEtdxV8g9VaGVXqvZ-itJ-yDWhZJfyiJPmxsnZzIu7nd7tXgTEP2O4E7ug8jiVtIuuZ_cULHA-r0v4z1Y2S6AeCyajgku6OW_yc4lAn8yFzlG_MOdwmCybFwd2b_G0KCwzHZTHnAlg/s1600/200px-Heme_b.svg.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #444444;">Iron (Fe) within the ring system of a haem group.</span></td></tr>
</tbody></table>
Once the iron has bound to oxygen, however, it is small enough to enter the ring thanks to the quirks of quantum mechanics that cause its electron orbitals to rearrange. The movement of iron into the plane of the haem allows the proximal histidine to relax into its more optimal position, thereby releasing the strain on the 'spring' pulling on the iron. In effect, the whole haem has adopted a lower energy arrangement, which is why this is known as the 'relaxed' state of haemoglobin.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3S5TOuvoR0ll0oq7-8fjqRaDT5ND5ZjJVWh1oEcyCssPUqeMmb7T5EtUFOBjR5stX0HKpI_JBD6QBFL-f_-ZIhmFTLKclOlmQQ2DRf68mm-g2tkAuL3_oduoKrUwEM5besIUumu1u8Bg/s1600/HemePlane.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3S5TOuvoR0ll0oq7-8fjqRaDT5ND5ZjJVWh1oEcyCssPUqeMmb7T5EtUFOBjR5stX0HKpI_JBD6QBFL-f_-ZIhmFTLKclOlmQQ2DRf68mm-g2tkAuL3_oduoKrUwEM5besIUumu1u8Bg/s1600/HemePlane.gif" height="320" width="303" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="color: #444444;">Movement of the haem iron from outside the ring (top) to inside it (bottom) on binding oxygen.</span></td></tr>
</tbody></table>
<b><br /></b>
<b>"Give me somewhere to stand and I can move the Earth."</b><br />
<b><br /></b>
So how does the 'relaxation' of one haem group allow for positive cooperativity? Well, any schoolboy who's studied his Archimedes can tell you that the way to amplify a force is by using a lever. This is basically what happens in haemoglobin: the movement of the proximal histidine causes one of the helices within that subunit of haemoglobin to pivot around a fixed point. This means that the ends of the helix move much further than the proximal histidine ever did, and so the effect is amplified. The movement of this helix kicks off a whole chain of changes that cause the two alpha/beta pairs to rotate relative to each other, such that the conformation of the whole haemoglobin molecule is changed. This has the effect of pulling more strongly on the irons in the oxygen-free haems (<i>i.e.</i> putting more strain on their 'springs') so that if they do happen to bind more oxygen they will do so more strongly, as giving up the oxygen would mean having to fight that much harder against the pull of the spring. Thus, for every oxygen bound, the next one will bind just that little bit tighter; and similarly as the haemoglobin loses its oxygen, each one will come off that bit more easily than the last.</div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://upload.wikimedia.org/wikipedia/commons/b/ba/Hemoglobin_t-r_state_ani.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/b/ba/Hemoglobin_t-r_state_ani.gif" height="240" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span class="Apple-style-span" style="color: #444444;">The two states of haemoglobin (courtesy of Wikipedia). The two alpha/beta subunits rotate relative to one another to alter the affinities of the coordinated haem groups.</span></td></tr>
</tbody></table>
<br />
<div style="text-align: justify;">
Haemoglobin is a great example of how even the most basic jobs in our bodies are engineered down to the finest details - changes of trillionths of a metre are harnessed and amplified to adapt the human machine beautifully to survive as best it can. It's also a nice reminder that we are very much children of quantum mechanics, even if we forget that a lot of the time. </div>
James Felcehttp://www.blogger.com/profile/14031758835739415241noreply@blogger.com0tag:blogger.com,1999:blog-1513704378254120283.post-70417159184105697742014-01-22T02:48:00.000-08:002014-02-06T03:26:39.035-08:00Particle Fever<iframe allowfullscreen="true" allowtransparency="true" frameborder="0" height="290" mozallowfullscreen="true" scrolling="no" src="http://movies.yahoo.com/video/39-particle-fever-39-trailer-221017086.html?format=embed&player_autoplay=false" webkitallowfullscreen="true" width="508"></iframe><br />
<br />
The video above is a trailer of an upcoming documentary about CERN and the discovery of the Higgs particle. This documentary looks wonderful and important. CERN has triumphed again at outreach and is simply leagues ahead of basically everyone else in science when it comes to this sort of thing. If anyone is surprised or wonders how CERN is able to get such a relatively large sum of science funding (though only relative to other science funding) then don't be. This sort of thing matters and makes a difference. People care about CERN because they know about CERN and they know about CERN because documentaries like this are made, made well, marketed well and received well.<br />
<br />
The documentary itself will be released March 5, in New York, and hopefully will be viewable in most major locations, eventually, after that.<br />
<br />
My only gripe is that it is coming 18 months <em>after</em> the Higgs discovery. I know that part of the motivation for this is that people want to make sure the science is definitely true before disseminating it; otherwise things can become confusing for the less engaged viewer. However, in July 2012 those guys were reasonably sure that they'd found <em>something</em>. This research is owned as much by the public as it is by the researchers. CERN did do a great job on that day by holding press conferences, announcing the discovery live, with live web-streams, and with public-level discussions, in the moment, of what the implications were. And, of course, this is all great, and I love CERN for it. But <em>maybe</em> it can be done even better.<br />
<br />
Here's (potentially) how...<br />
<br />
This documentary will probably be reasonably widely viewed. It looks like it is potentially headed for some major awards and it is being reviewed very favourably by a bunch of major newspapers and film critics.<br />
<br />
Imagine if the film had been released, and widely viewed, <em>immediately</em> <i>prior</i> to the discovery's announcement, and the climax of the film was all the researchers, scientists, students, engineers, and everyone involved in this experiment waiting, full of anticipation, not knowing the result. The viewer now has a reasonable understanding of what the researchers were looking for and how they were hoping to find it. Now <i>everyone</i> is waiting, full of anticipation, not knowing the result. Then, we cut to the actual, live, not-even-the-majority-of-the-scientists-know-the-result announcement of the detection. The general viewer will now share in this discovery that their taxes paid for (and whose future taxes will pay for future experiments) <em>in the moment</em>.<br />
<br />
That's not just great for science outreach, it is genuinely good theatre for everyone involved (even if there isn't a detection). But most importantly it allows this sharing of not just the result, but the acquisition of the result. The public feels like they were there, like they took part, like it is also <em>their</em> discovery. And, to bring back the bottom line, when funding is next being decided, they want to be able to contribute to, and participate in, more discoveries like this.<br />
<br />
Instead, people could tune in to the discovery, and see the researchers and scientists, etc, and <em>their</em> excitement, without being able to share in it.<br />
<br />
Having said all of that, 18 months isn't that long. So, when the documentary is released, go watch it, and remember that this stuff happened less than two years ago. This is the present.<br />
<br />
Twitter: <a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a>Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com0tag:blogger.com,1999:blog-1513704378254120283.post-76080486767885534652014-01-14T08:00:00.000-08:002014-02-06T11:17:47.017-08:00A few more comments on inflation and the multiverse<i>[This carries on from <a href="http://trenchesofdiscovery.blogspot.co.uk/2014/01/on-inflation-and-multiverse.html" target="_blank">a post yesterday</a> where I attempted to explain what inflation has to do with a multiverse]</i><br />
<strong><br /></strong>
<strong>Is that it?</strong><br />
<br />
You might be thinking: "OK, that's a toy-toy model about how a multiverse might come from an inflationary model. Cool. But are there any non-toy models?"<br />
<br />
As far as I'm aware, no. And this is where I definitely agree with Peter that, although it is certainly <em>possible</em> to generate a multiverse, it definitely isn't inevitable. In fact, if anyone reading this does know of any full models where a multiverse is generated, with a set of vacua with different energies, please let me know (even if it's just a complete toy model).<br />
<br />
In which case, you might now be wondering: why is there so much excitement amongst some cosmologists about multiverses? Why do some physicists want it so much? There are two reasons I can think of. The first is that the multiverse, coupled with an anthropic principle, can explain why the cosmological constant has the value it does. If the true model of inflation generated Big Bangs in <em>many</em> vacua (i.e. more than 10^130 vacua), then, although most of those vacua will have large vacuum energies, the Big Bangs that occur in such vacua can't support life. Therefore we would expect to find ourselves in a Big Bang bubble where the cosmological constant was small, but just big enough to be detected. And this is actually exactly what we see. <i style="font-weight: bold;">[Edit: <a href="http://trenchesofdiscovery.blogspot.co.uk/2014/01/a-few-more-comments-on-inflation-and.html?showComment=1389718827803#c5749075113109233714" target="_blank">As Sesh points out in a comment</a>, an additional assumption is required to conclude that the cosmological constant should be both small </i>and<b><i> measurable. This assumption is that the distribution of vacuum energies in the multiverse favours large energies. See the comment and replies for discussion. Thanks Sesh.]</i></b><br />
<br />
The second reason multiverses are popular is that there is a candidate for where this absurdly large number of possible minima could come from, and this is string theory. In fact, string theory predicts many more than 10^130 possible vacua.<br />
<br />
<strong>Summary</strong><br />
<br />
So, that's it. A multiverse needs two things: a way for multiple types of universe to be <em>possible</em>; and a way to make sure that these universes all actually come into existence. String theory suggests that there may indeed be multiple possible types of "universe" (i.e. sets of laws of physics), but it is eternal inflation that would cause many Big Bangs to occur and thus, potentially, to populate these "universes".<br />
<br />
<strong>Some parting words...</strong><br />
<br />
There are some (perhaps even many) scientists who hate the idea of a multiverse and demand that multiverses be stricken from science for being "unfalsifiable" or "unpredictive" (because we can't ever access the other Big Bangs).<br />
<br />
I don't understand this mentality.<br />
<br />
Forgetting about whether a multiverse is "scientific" or not, what if it is <em>true</em>? What if we do live in a universe that, it just so happens, is part of a multiverse? Would we not want whatever method we use to try to learn about our existence to be able to deal with it? If we want "science" to be something that examines reality, then (if we are in a multiverse) should it not be able to deal with a multiverse? We might not be able to directly measure other Big Bangs, but we <em>can</em> infer their probable existence by measuring other things. <b><i>[Edit(06/02): I just want to clarify that I'm not meaning to suggest here that </i>science<i> needs changed to be able to talk about untestable things, but instead that </i>scientists<i> are justified when trying hard to find ways to test this idea. And that there </i>are<i> ways to test it.]</i></b><br />
<br />
Suppose we all lived 500 years ago and wanted to know why the Earth is exactly the right distance from the sun to allow life to occur. What explanations could we come up with for why this is true?<br />
<br />
What is the real reason?<br />
<br />
Twitter: <a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a>Shaun Hotchkisshttp://www.blogger.com/profile/04832423210563130467noreply@blogger.com22tag:blogger.com,1999:blog-1513704378254120283.post-63098108150145143552014-01-13T10:40:00.001-08:002014-01-14T08:27:38.500-08:00On inflation and the multiverse<em><span style="color: #0b5394;">[Note: in the following, and in the title, I have used the word multiverse a lot. When I do I am exclusively referring to <a href="http://en.wikipedia.org/wiki/Multiverse#Level_II:_Universes_with_different_physical_constants" target="_blank">this type of multiverse</a>, which has, for example, <a href="http://prl.aps.org/abstract/PRL/v59/i22/p2607_1" target="_blank">been used to try to explain why the cosmological constant it so small</a>. If you have any questions then please do ask them.]</span></em><br />
<br />
About a week ago, <a href="http://en.wikipedia.org/wiki/Peter_Coles" target="_blank">Peter Coles</a>, another <a href="http://telescoper.wordpress.com/" target="_blank">cosmology blogger</a> (who also happens to be my boss' boss' boss - or something), <a href="http://telescoper.wordpress.com/2014/01/06/inflation-and-the-multiverse/#comments" target="_blank">wrote a post</a> expressing confusion about the association of inflation with <em>the multiverse</em>. His post was a reaction to a copy of a set of lectures <a href="http://arxiv.org/abs/1312.7340" target="_blank">posted on the arXiv</a> by <a href="http://en.wikipedia.org/wiki/Alan_Guth" target="_blank">Alan Guth</a>, one of the inventors of <a href="http://en.wikipedia.org/wiki/Inflation" target="_blank">inflation</a> (and discoverer of the name). Guth's lectures claimed, in title and abstract, that there is a very obvious link between inflation and a multiverse. Peter had some strong comments to make about this, including the assertion that at some points he's inclined to believe that any association between inflation and a multiverse is no different to a thought pattern of: quantum physics ---> woo ---> a multiverse!<br />
<br />
I have some sympathy for Peter's frustration when people over-sell their articles/papers, and I would agree that inflation does not <em>require</em> a multiverse to exist, nor does inflation itself necessarily make a multiverse seem particularly likely/obvious. However, it is also true that, in a certain context, inflation and a multiverse <em>are</em> related. Put simply, through "eternal inflation", inflation provides a mechanism to create many <a href="http://en.wikipedia.org/wiki/Big_Bang" target="_blank">Big Bangs</a>. To get the sort of multiverse this post is about, these different Big Bangs need to have different laws of physics, which is not generic. However it can occur if the laws of physics depend on how inflation ends, in a way which I will describe below.<br />
<br />
As with Peter though, I am unaware of any <em>complete</em> inflationary model that will generate a multiverse. We could both have a blindspot on this, but my understanding is that people expect (or hope?) that complete models of inflation derived from <em>string theory</em> are <em>likely</em> to generate a multiverse, for reasons that I will describe below.<br />
<br />
Before that, you're probably wondering what this inflation thing is...<br />
<br />
<strong>Inflation</strong><br />
<br />
The inflationary epoch is a (proposed - although the evidence for it is reasonably convincing) period in the past where the energy density of the universe was almost exactly constant and homogeneous (i.e. the same everywhere) and the expansion of the universe was accelerating. After this inflationary epoch ended, the expansion was decelerating (which isn't surprising given that gravity is normally attractive) and the universe gradually became less and less homogeneous, until it looked like it does today. We like inflation for all sorts of reasons, but for the purpose of this post, the preceding two sentences are all you need to know.<br />
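<br />
In the language of the Friedmann equations: an almost constant, homogeneous energy density means an almost constant Hubble rate \(H\), so the scale factor grows nearly exponentially, \(a(t) \propto e^{Ht}\). Accelerated expansion (\(\ddot{a} > 0\)) requires a sufficiently negative pressure, \(p < -\rho/3\) (in units where \(c=1\)), which is exactly what a potential-dominated field supplies, since for it \(p \approx -\rho\).<br />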
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWfSCuK24ZMlhocTPoqZmMNRhFSPS1nZQl_l2DzyXeLrCzyhY4EyxCtTXgsxbAc4QSyRq5EeBOrB5SEUszFi82mFkD3xjSNxasS5f6nkIbr_MU6T5RCvF0BkgF67z_o3AbkxElf3oEsTU/s1600/potential.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWfSCuK24ZMlhocTPoqZmMNRhFSPS1nZQl_l2DzyXeLrCzyhY4EyxCtTXgsxbAc4QSyRq5EeBOrB5SEUszFi82mFkD3xjSNxasS5f6nkIbr_MU6T5RCvF0BkgF67z_o3AbkxElf3oEsTU/s320/potential.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">This is the "potential energy density" stored by a hypothetical inflationary field, \(\phi\). The x-axis is the value of \(\phi\). The y-axis is the energy density. The hatched region is where the conditions for "eternal inflation" would be satisfied.</td></tr>
</tbody></table>
<br />
<a name='more'></a><br />
This whole paradigm is depicted in the figure above, which is showing the <em>potential energy density</em> stored by the inflationary field, \(\phi\), as a function of its value. When the field, \( \phi \), has a value less than \( \phi_e \) (i.e. its value is on the left in the figure) inflation is occurring, when \(\phi>\phi_e\) inflation will end. This occurs because the potential function is flat enough that this potential energy dominates kinetic energy, and therefore it also dominates the gravitational effects in the universe. And, when a constant potential energy dominates in the universe, the expansion of the universe accelerates. Then, when the field has a value greater than \(\phi_e\) the expansion stops accelerating, starts <em>decelerating,</em> and the universe begins to do the stuff we know of as the Hot Big Bang. The reason inflation is interesting is that \(\phi\) inevitably has small, quantum, fluctuations in its value. Thus inflation ends at slightly different times in different parts of the universe, and, also thus, the Hot Big Bang starts at slightly different times in different parts of the universe. As a result, there are very small fluctuations in the density of the post-inflationary universe - and it is these small fluctuations that then grow to become <a href="http://trenchesofdiscovery.blogspot.co.uk/2011/10/smoking-cmb-evidence-of-big-bang.html" target="_blank">temperature anisotropies</a>, <a href="http://en.wikipedia.org/wiki/Hoag%27s_Object" target="_blank">galaxies</a>, <a href="http://en.wikipedia.org/wiki/PSR_B1257%2B12" target="_blank">solar systems</a> and <a href="https://twitter.com/just_shaun" target="_blank">bloggers</a>. We can predict the statistical properties of these density perturbations because we can predict the statistical properties of the fluctuations in \(\phi\).<br />
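<br />
For the more technically inclined: "flat enough" can be made precise using the slow-roll parameter \(\epsilon \equiv \frac{M_{\rm pl}^2}{2}\left(\frac{V'}{V}\right)^2\), where \(V(\phi)\) is the potential plotted above, \(V'\) its slope and \(M_{\rm pl}\) the reduced Planck mass. Inflation proceeds while \(\epsilon \ll 1\) and ends roughly where \(\epsilon\) grows to order one, which is essentially what defines \(\phi_e\).<br />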
<br />
This is all fine and good and this inflationary paradigm now has a lot of observational weight behind it. I won't go into any discussion about whether it actually <em>is</em> the way our universe got started or not, except to mention that it definitely seems possible and is the leading paradigm amongst cosmologists today, even if there hasn't yet been a way to know conclusively if it's right or not.<br />
<br />
<strong>Eternal inflation</strong><br />
<br />
There is a curious feature that can arise in these models. As I explained above, in the inflating universe, \(\phi\) has different values at different points in the universe. Inflation will end where \(\phi > \phi_e \) and continue where \(\phi < \phi_e \). As time proceeds more and more of the universe leaves inflation and enters the Big Bang. Because the field is always rolling down that potential energy function, one would ordinarily expect that, after a sufficient length of time, the entire universe has stopped "inflating" and entered the Big Bang. However, if the rate at which \(\phi \) increases, and thus takes some volume of the universe out of inflation, is sufficiently slow, compared to the rate of accelerated expansion, it can actually occur that the total volume of the universe that is still inflating <em>also</em> continues to increase. If \(\phi \) can get close enough to the top of the hill in the above figure (i.e. enter the hashed region), then this is exactly what happens. When it does, although some of the universe undergoes a Big Bang, in other regions, inflation continues eternally.<br />
<br />
In detail, for this to occur <em>eternally,</em> the initial distribution of \(\phi\) needs to be wide enough that there is always some small part of it that started arbitrarily close to the very top of that hill. In reality, because of those intrinsic quantum fluctuations, \(\phi\) cannot get <em>arbitrarily</em> close to the hilltop. The question then becomes whether or not the quantum fluctuations are large enough that they will ever dominate over the tendency for the field to roll down the potential. The quantum fluctuations in the field relate to the total energy density of the universe (the height of the curve in the figure above), and the evolution downward of the field depends on this <em>and </em>the slope of the potential energy function. For any model that looks like the figure above, this eternal inflation condition will be satisfied near the top of the hill, because there the slope tends to zero. For models that arise in regions that aren't hill tops, whether inflation continues eternally, will depend on the nature of the model. So, this <em>eternal inflation</em> scenario is actually quite generic for inflationary models, even if not <em>all pervasive</em>.<br />
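<br />
To make the condition a little more concrete (in reduced Planck units, and assuming slow roll so that \(3H^2 \approx V\)): during one Hubble time the field rolls down classically by roughly \(\Delta\phi_{\rm cl} \sim \dot{\phi}/H \approx V'/(3H^2)\), while the typical quantum kick it receives is \(\delta\phi_{\rm q} \sim H/(2\pi)\). Eternal inflation sets in where the quantum kicks win, \(\delta\phi_{\rm q} \gtrsim \Delta\phi_{\rm cl}\), which up to numerical factors means \(V^3 \gtrsim V'^2\). Near a hilltop \(V' \to 0\), so this is always satisfied there, which is why the hatched region in the first figure sits around the maximum.<br />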
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiGELI43wVoZ-SI5rbFVU8XIwoN7dqswq3B9W1Wuq0A6TJmCKTb3R_n4dnR2K_5DsUYdzOxcpshE9l0Eu9Db-vtES2pA2Fmmg0G8tcypQ0uy7jkQV85FPyqlB2VLSZxNcB1OsNdWg9UwU/s1600/Bubbles.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiGELI43wVoZ-SI5rbFVU8XIwoN7dqswq3B9W1Wuq0A6TJmCKTb3R_n4dnR2K_5DsUYdzOxcpshE9l0Eu9Db-vtES2pA2Fmmg0G8tcypQ0uy7jkQV85FPyqlB2VLSZxNcB1OsNdWg9UwU/s320/Bubbles.jpg" width="315" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">This is meant to show, in two dimensions, what is happening during eternal inflation. The red area is still "inflating". The yellow/white areas have entered Big Bangs. These Big Bangs will never collide, because the red, inflating region between them will push them apart.</td></tr>
</tbody></table>
<br />
But, what does this mean? It means that, although some region of the universe will escape this eternal inflation scenario and \(\phi\) will evolve downwards to a value where inflation ends and a Big Bang occurs, in most of the rest of the universe \(\phi\) has a value such that inflation continues. This Big Bang that occurs is then essentially an isolated bubble, surrounded by more inflation. Then, as time proceeds, some other region of the inflating universe will eventually roll to where \(\phi>\phi_e\) and another Big Bang will occur there, but, again, most of the rest of the universe will continue inflating. And so, as time goes on, you always have <em>most</em> of the universe inflating, with bubble Big Bangs coming off of it. This is meant to be depicted in the figure immediately above. The two white blobs would be bubble Big Bangs, and the rest of the red area is still inflating. Note that the size of each bubble Big Bang will grow with time (faster than light); however the bubbles will never meet, because the volume of the inflating red area is increasing even faster (thus the growing bubbles are pushed apart).<br />
<br />
So what does <em>this</em> mean? Is it interesting? Well, in this scenario, not particularly. Every single one of these Big Bang bubbles will leave the eternally inflating patch in exactly the same way, which means that they all end up looking the same. It is true that the precise fluctuations in \(\phi\), from point-to-point, in each bubble universe, will be different, so the precise locations and history of the galaxies, solar systems and bloggers, in each bubble will be different, but the statistical properties of those fluctuations will be the same, and crucially, so will <em>all </em>the laws of physics.<br />
<br />
<strong>Where does the multiverse come from?</strong><br />
<br />
I haven't yet shown you a multiverse (at least not of the kind that I promised at the beginning). I've shown you a way that inflation, <em>reasonably generically</em>, will create lots and lots of Big Bangs, each of which is separated from all the others. However a multiverse would want those different Big Bangs to have different physical laws as well.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqrsNsecvZlnwAEv1kBiWzP96PltW17IWGj-uKRExqI2aZ-yMCULzpdg3oOBIY2KkebITpjwNsoQ4prmufxdwdUcqFuNU5OWApPOqJYuZsDgCC6z2HYtbPVzN1MvOIT7A2mFHhWVd6_ds/s1600/potential2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="234" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqrsNsecvZlnwAEv1kBiWzP96PltW17IWGj-uKRExqI2aZ-yMCULzpdg3oOBIY2KkebITpjwNsoQ4prmufxdwdUcqFuNU5OWApPOqJYuZsDgCC6z2HYtbPVzN1MvOIT7A2mFHhWVd6_ds/s320/potential2.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The beginnings of a multiverse. Eternal inflation could happen in both "a" and "b", and the resulting Big Bangs can occur in either A or B, meaning that the Big Bangs would have different vacuum energies and potentially even different laws of physics</td></tr>
</tbody></table>
<br />
<br />
To see how a multiverse can come from this needs a slightly different function for the potential energy density. Consider the potential function shown above. Now there are two minima, A and B. If I chose an initial value for the field, entirely at random, it could "roll" to either minimum/vacuum. Now, remember all the lessons about inflation from the previous sections of this post. So long as both of the regions "a" and "b", support eternal inflation, this potential will generate bubble Big Bangs that are sometimes occurring in vacuum A and sometimes in vacuum B.<br />
<br />
And here we have the beginnings of a multiverse. Firstly, the way I've drawn the potential, the two minima have different potential energies. It is precisely this "vacuum" energy that would be responsible, today, for the effects labelled as "dark energy". Therefore, the bubble Big Bangs that end up in vacuum A would measure a different dark energy density from those that end up in vacuum B. Moreover, suppose that some aspect of fundamental physics depends on the value of the field. This is not particularly far-fetched at all. For example, the masses of many of the fundamental particles in the universe almost certainly depend on the value of the Higgs field. If some fundamental parameters did depend on the value of the inflationary field, then the very nature of chemistry, atoms, biology, galaxies, bloggers, etc., in the bubbles that land in A would be completely different from those which land in B.<br />
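<br />
If you want to play with this yourself, here is a minimal Python sketch of the idea (a toy illustration: the double-well potential below is made up for this post and is not a realistic inflationary potential, and the "rolling" is just crude gradient descent). Different randomly chosen starting values end up in different minima, and the two minima sit at different heights, so the "vacuum energy" you'd measure depends on where the field landed.<br />
<br />
<pre>
import random

# Toy asymmetric double-well potential with two minima at different heights
# (made up for illustration; not a realistic inflationary potential).
def V(phi):
    return (phi**2 - 1.0)**2 + 0.2*phi + 0.5

def dV(phi):
    return 4.0*phi*(phi**2 - 1.0) + 0.2

def roll(phi, steps=5000, dt=0.01):
    """Crude 'rolling': overdamped gradient descent down the potential."""
    for _ in range(steps):
        phi -= dt*dV(phi)
    return phi

random.seed(1)
for _ in range(5):
    phi0 = random.uniform(-1.5, 1.5)   # random initial field value
    phi_end = roll(phi0)
    print(f"start {phi0:+.2f} -> vacuum at phi = {phi_end:+.2f}, "
          f"vacuum energy V(phi) = {V(phi_end):.3f}")
</pre>
The overdamped "rolling" here is a deliberate choice: a slowly rolling field in an expanding universe behaves much like a ball sliding down a slope with lots of friction, which is exactly what gradient descent mimics.<br />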
<br />
We're almost there, but this is still not quite the "multiverse" people are trying to motivate nowadays in fundamental physics. In that multiverse there are not just <em>two</em> types of bubbles; there are <em>many</em>. So how can that happen?<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXYJSWd0N35ZNv3z95hkx9yIaJDkf_y31W5tvMhMrfOOXJhVUyg20orHOy_3_99SiFbggFRym9CVyLiIGfOCuucQwktqMBH6k3n06z1HhOJVxU1hg9fFGkFD0RlPhIPNFM3HumpW2cDUI/s1600/multiverse.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="267" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXYJSWd0N35ZNv3z95hkx9yIaJDkf_y31W5tvMhMrfOOXJhVUyg20orHOy_3_99SiFbggFRym9CVyLiIGfOCuucQwktqMBH6k3n06z1HhOJVxU1hg9fFGkFD0RlPhIPNFM3HumpW2cDUI/s320/multiverse.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Now there are two field dimensions and eight potential vacuum energies/laws of physics. The hypothetical multiverse we're in would have many more field dimensions and <i>many</i> more potential vacua/minima. I would like to thank <a href="http://en.wikipedia.org/wiki/Stephen_Wolfram" target="_blank">Stephen Wolfram</a> for personally making this image for me after I entered the instructions at <a href="http://www.wolframalpha.com/input/?i=plot+%7C+%282-2+%28x%5E2%2By%5E2%29%2B%28x%5E2%2By%5E2%29%5E2%29+%283.5%2Bcos%2813+tan%5E%28-1%29%28x%2Fy%29%29+tanh%281%2F2+%28-x%5E2-y%5E2%29%29%2Bcos%283%2B9+tan%5E%28-1%29%28x%2Fy%29%29+tanh%281%2F2+%28-x%5E2-y%5E2%29%29%29+%7C+x+%3D+-1.25+to+1.25" target="_blank">this Wolfram Alpha link</a>.</td></tr>
</tbody></table>
<br />
To see that, we need more than one inflationary field. Suppose that, instead, we descended from a universe with just two field dimensions. In this universe, the potential energy function could look like the figure above, where \(x\) and \(y\) represent the values of the two inflationary fields. Now you can see many different possible minima/vacua. So, let's follow the logic of the earlier sections of this post. If this potential energy function is flat enough near the centre to support eternal inflation, then there would be an infinite number of Big Bangs bubbling off from the inflating patch. When these Big Bangs bubble off, they could roll to any one of the eight minima/vacua, and thus the vacuum energy and physical laws could take up to eight different values and behaviours. This particular toy-toy-model has only eight vacua, but even in two dimensions it is easy to imagine as many vacua as one wants, just by adding additional ripples to the potential.<br />
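<br />
Again, purely as a toy illustration (and definitely not the exact function plotted in the figure, which is the expression linked in the caption), here is a Python sketch of the same game with two fields: a made-up potential with a ring of eight minima at slightly different heights, and the same crude gradient-descent "rolling" from random starting points. Different starting points generally end up in different vacua, each with a slightly different vacuum energy.<br />
<br />
<pre>
import math, random

# Toy two-field potential with a ring of eight minima at slightly different
# heights (made up for illustration; not the function plotted in the figure).
def V(x, y):
    r2 = x*x + y*y
    theta = math.atan2(y, x)
    return (r2 - 1.0)**2 + 0.05*math.cos(8.0*theta) + 0.02*x

def grad(x, y, h=1e-5):
    """Numerical gradient of V; good enough for a crude rolling calculation."""
    gx = (V(x + h, y) - V(x - h, y)) / (2*h)
    gy = (V(x, y + h) - V(x, y - h)) / (2*h)
    return gx, gy

def roll(x, y, steps=20000, dt=0.005):
    """Overdamped 'rolling' of both fields downhill, as in slow roll."""
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - dt*gx, y - dt*gy
    return x, y

random.seed(2)
for _ in range(6):
    x0, y0 = random.uniform(-1.2, 1.2), random.uniform(-1.2, 1.2)
    x1, y1 = roll(x0, y0)
    angle = math.degrees(math.atan2(y1, x1))
    print(f"start ({x0:+.2f}, {y0:+.2f}) -> vacuum near angle {angle:6.1f} deg, "
          f"V = {V(x1, y1):+.4f}")
</pre>
(The numerical gradient is just to keep the sketch short; an analytic gradient would do the same job, and adding more ripples, or more fields, simply multiplies the number of vacua the fields can end up in.)<br />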
<br />
And this, generalised to many more fields, is what people imagine when talking about the "inflationary multiverse". That is, an eternally inflating patch with Big Bangs bubbling off from it, each descending into one of many different possible minima/vacua, each of which has a different vacuum energy and set of fundamental constants/laws.<br />
<br />
<i>[<a href="http://trenchesofdiscovery.blogspot.co.uk/2014/01/a-few-more-comments-on-inflation-and.html" target="_blank">There is a very small part two here</a>...]</i><br />
<br />
Twitter: <a href="https://twitter.com/just_shaun" target="_blank">@just_shaun</a>