Wednesday, March 25, 2015

The science of three-parent children

2015 has already been a significant year in the field of human medicine, as February saw the UK become the first country in the world to legalise the generation of so-called 'three-parent' children. This marks a milestone for preventative genetics and embryology and offers hope to many people around the UK and beyond who would otherwise be unable to have healthy children. The votes to bring this into law were fairly comfortably won by those in favour - 382 vs 128 in the House of Commons (the lower house) and 280 vs 48 in the House of Lords (the upper house) - but there have been a number of vocal opponents of the measure. In this post I hope to explain just what the process involves, and why it is considered necessary by the majority of British MPs.

A cellular energy crisis

Mitochondria, as you may recall from a previous post, are the powerhouses of our cells. They metabolise a range of molecules derived from food and use them to generate energy in the form of another molecule, ATP. You would not last long without them - just try holding your breath for a few minutes, since anaerobic respiration is all a cell without mitochondria would be able to manage. It is not surprising, therefore, that problems with mitochondrial function can be fairly nasty. Mitochondrial diseases are a range of genetic disorders in which the proper role of the mitochondria is disrupted due to mutations in one of the genes responsible for making mitochondrial proteins. These diseases never completely knock out mitochondrial function (since an embryo with such a disease could never survive to full development) but still cause severe symptoms in sufferers. Depending on the exact mutation, these can include blindness, deafness, diabetes, muscle weakness, cardiac problems, and problems with the central nervous system. Prognoses vary from one disorder to the next, but they invariably shorten lifespan, often severely. Sufferers of Leigh's disease, for example, rarely live past 7 years of age, and spend their short lives experiencing muscle weakness, lack of control over movement (particularly of the eyes), vomiting, diarrhoea, an inability to swallow, and heart problems, among others.

Tuesday, February 3, 2015

Combined constraints from BICEP2, Keck, Planck and WMAP on primordial gravitational waves

This week, the joint analysis of BICEP2 (+ BICEP2's successor Keck) and Planck has finally arrived. The result is more or less what was expected, which is that what BICEP2 saw last year in the B-mode polarisation signal of the CMB was not actually primordial gravitational waves (as had originally been hoped and claimed), but was unfortunately actually due to dust in the Milky Way. Such is life. Though we did of course have the best part of a year to come to grips with this reality.

Combined constraint on \(r\) from polarisation and temperature measurements (in blue). Freshly digitised in the spirit of modern cosmology. Gives \(r\lesssim 0.09\) at \(95\%\) confidence.

As a result of subtracting the dust component from BICEP2/Keck's signal (obtained by comparing the measurements from BICEP2/Keck and Planck), the final constraint on the tensor-to-scalar ratio \(r\) from the BICEP2/Keck measurement is \(r<0.12\) at \(95\%\) confidence. This \(r\) parameter essentially measures the amplitude of a primordial gravitational wave signal, so the net result is that the subtraction of dust takes BICEP2's high-significance measurement of non-zero \(r\) and converts it into simply an upper bound.
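To give a feel for how an upper bound like this arises, here is a minimal sketch of turning a likelihood for \(r\) into a \(95\%\) bound. The Gaussian shape, peak and width below are entirely made up for illustration; the real BICEP2/Keck \(\times\) Planck likelihood is non-Gaussian and comes from a full multi-frequency analysis.

```python
import numpy as np

# Illustrative only: a toy Gaussian likelihood for r, truncated at r >= 0.
# The peak and width are invented numbers, not the real BICEP2/Keck values.
r = np.linspace(0.0, 0.5, 5001)
r_peak, sigma = 0.02, 0.045          # hypothetical peak and width
like = np.exp(-0.5 * ((r - r_peak) / sigma) ** 2)

# Normalise over the physical region r >= 0, then find where the
# cumulative probability reaches 95%:
cdf = np.cumsum(like)
cdf /= cdf[-1]
r95 = r[np.searchsorted(cdf, 0.95)]
print(f"95% upper bound on r: {r95:.3f}")
```

Because the likelihood peaks so close to \(r=0\), the \(95\%\) point lands well above the peak: you quote an upper bound rather than a detection.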

I've seen some comments on blogs, in the media, on Twitter, etc. that there is still evidence of some sort of excess signal in BICEP2/Keck over and above the dust, but I can't see any evidence of that in any of their published results. The final likelihood for \(r\) (shown above in black) is consistent with \(r=0\) at less than \(1\sigma\) (i.e. \(r=0\) is less than one standard deviation away from the maximum likelihood value). In fact, it would seem that the measurement of the dust obtained by comparing BICEP2/Keck's measurements with Planck's has been so good that the B-mode constraint on \(r\) from BICEP2/Keck is now competitive with (or even slightly better than) the constraint arising from temperature measurements of the CMB. This was always going to happen at some point in the future, and it seems that this future has now arrived.

Wednesday, October 22, 2014

Why is Ebola so scary?

Unless you've been living under a reasonably sizable rock for the last few months, it can't have escaped your attention that the world has yet another terror to throw on the mountain of things we should be scared of: Ebola. The ongoing situation in Africa is the largest Ebola outbreak in history and has seen the disease spread beyond Africa for the first time. At the time of writing this, nearly 10,000 people have become infected, almost half of whom have died. This number is growing...rapidly.
Ebola cases and deaths in the 2014 outbreak.
In this post, I will describe what Ebola is, why it is so scary, and what chances we have of defeating it.

What is Ebola?

'Ebola' as a biological term actually refers to a group of five viruses within the Filoviridae family, of which four can cause the disease generally called Ebola, but more specifically known as Ebola virus disease. The recent outbreak has been caused by just one of these viruses, which used to be known as Zaire ebolavirus, but is now simply 'Ebola virus' given that it is the most common among humans, and Zaire no longer exists! It doesn't look a whole lot like most viruses, it has to be said - with long, tubular filaments waving around rather than the tight, spherical particles we're used to seeing for 'flu, HIV, and most others.

The Ebola virus.

Friday, September 19, 2014

Comparing Planck's noise and dust to BICEP2

In case anyone reading this doesn't recall, back in March an experiment known as BICEP2 made a detection of something known as B-mode polarisation in the cosmic microwave background (CMB). This was big news, mostly because this B-mode polarisation signal would be a characteristic signal of primordial gravitational waves. The detection of the effects of primordial gravitational waves would itself be a wonderful discovery, but this potential discovery went even further in the wonderfulness because the likely origin of primordial gravitational waves would be a process known as inflation which is postulated to have occurred in the very, very early universe.

The B-mode polarisation in the CMB as seen by BICEP2. Seen here for the first time in blog format without the arrows. Is it dust, or is it ripples in space-time? Don't let Occam's razor decide!

I said at the time, and would stand by this now, that if BICEP2 had detected the effects of primordial gravitational waves, then this would be the greatest discovery of the 21st century.

However, about a month after BICEP2's big announcement a large crack developed in the hope that they had detected the effects of primordial gravitational waves and obtained strong evidence for inflation. The problem is that light scattering off dust in the Milky Way Galaxy can also produce this B-mode polarisation signal. Of course BICEP2 knew this and had estimated the amplitude of such a signal, finding it much too small to explain their measurement. The crack was that it seemed they had potentially underestimated this dust signal. Or, more precisely, it was unclear how big the signal actually was. It might have been as big as the BICEP2 signal, or it might have been smaller.

Either way, the situation a few months ago was that the argument BICEP2 made for why this dust signal should be small was no longer convincing and more evidence was needed to determine whether the signal was due to dust, or primordial stuff.

Tuesday, August 26, 2014

The Cold Spot is not particularly cold

(and it probably isn't explained by a supervoid; although it is still anomalous)

In the cosmic microwave background (CMB) there is a thing that cosmologists call "The Cold Spot". However, I'm going to try to argue that its name is perhaps a little, well, wrong. This is because it isn't actually very cold. Although, it is definitely notably spotty.

That's the cold spot. It even has its own Wikipedia page (which really does need updating).

Why care about a cold spot?

This spot has become a thing to cosmologists because it appears to be somewhat anomalous. What this means is that a spot just like this has a very low probability of occurring in a universe where the standard cosmological model is correct. Just how anomalous it is and how interesting we should find it is a subject for debate and not something I'll go into much today. There are a number of anomalies in the CMB, but there is also a lot of statistical information in the CMB, so freak events are expected to occur if you look at the data in enough different ways. This means that the anomalies could be honest-to-God signs of wonderful new physical effects, or they could just be statistical flukes. Determining which is true is very difficult because of how hard it is to quantify the number of ways in which the entire cosmology community has examined its data.
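This "look-elsewhere" point is easy to demonstrate numerically. Here's a toy sketch (all numbers invented for illustration): even if each individual anomaly search has only about a 1% chance of throwing up a 2.5-sigma fluke, running a hundred independent searches makes finding at least one such fluke the likely outcome.

```python
import numpy as np

# Toy look-elsewhere demonstration: each "search" draws a standard normal
# test statistic; a |z| > 2.5 result would look like a ~2.5-sigma anomaly.
rng = np.random.default_rng(42)
n_universes, n_searches = 10000, 100   # 100 independent ways of looking at the data
z = rng.standard_normal((n_universes, n_searches))
fluke = (np.abs(z) > 2.5).any(axis=1)  # did any search find an "anomaly"?
print(f"Fraction of simulated universes with at least one 2.5-sigma fluke: "
      f"{fluke.mean():.2f}")
```

So a single moderately unlikely spot is not, by itself, strong evidence against the standard model; the hard part is counting how many searches were effectively performed.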

However, if the anomalies are signs of new physics, then we should expect two things to happen. Firstly, some candidate for the new physics should come up, one which can create the observed effect while remaining consistent with the much greater number of other measurements that the standard cosmological model fits well. If this happens, then we would look for additional ways in which the universe described by this new model differs from the standard one, and look for those effects. Secondly, as we take more data, we would expect the unlikeliness of the anomaly to increase. That is, it should become more and more anomalous.

In this entry, I'm not going to be making any judgement on whether the cold spot is a statistical fluke or evidence of new physics. What I want to do is explain why, although it still is anomalous, and is definitely a spot, the cold spot isn't very cold. Then, briefly, I'll explain why, if it is evidence of new physics, that new physics isn't a supervoid.

So, what is the cold spot, and why is it anomalous?

Friday, June 27, 2014

The human machine: obsolete components

The previous post in this series can be found here.

In my last post in this series I described some of the ways in which gene therapy is beginning to help in the treatment of genetic disorders. A caveat of this (which was discussed further in the comments section of that post) is that currently available gene therapies do not remove the genetic disorder from the germline cells (i.e. sperm or eggs) of the patient and so do not protect that person's children against inheriting the disease. This could be a problem in the long run as it may allow genetic disorders to become more common within the population. The reason for this is that natural selection would normally remove these faulty genes from the gene pool, as their carriers would be less likely to survive and reproduce. If we remove this selection pressure by treating carriers so that they no longer die young, then the faulty gene can spread more widely through the population. If something then happened to disrupt the supply of gene therapeutics - conflict, disaster, etc. - then a larger number of people would be adversely affected and could even die.
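A toy calculation illustrates the point. The sketch below (with invented numbers) tracks the frequency of a harmful gene variant in a simple haploid model, with fresh mutations arising each generation; setting the selection coefficient \(s\) to zero mimics a treatment that fully removes the survival disadvantage.

```python
def allele_frequency(p0, s, mu, generations):
    """Toy haploid model: selection coefficient s, per-generation mutation rate mu.
    All parameter values used below are illustrative, not measured."""
    p = p0
    for _ in range(generations):
        p = p * (1 - s) / (1 - s * p)  # carriers reproduce at relative rate (1 - s)
        p = p + mu * (1 - p)           # fresh copies of the faulty gene arise by mutation
    return p

with_selection = allele_frequency(p0=0.001, s=0.2, mu=1e-4, generations=100)
treated = allele_frequency(p0=0.001, s=0.0, mu=1e-4, generations=100)
print(f"Frequency after 100 generations: {with_selection:.5f} with selection, "
      f"{treated:.5f} with the selection pressure removed")
```

With selection acting, the variant's frequency settles at a low mutation-selection balance (roughly \(\mu/s\)); with the pressure removed, new mutations simply accumulate generation after generation.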

Although this is a significant problem to be considered, it is one that is fairly simply avoidable by screening or treating the germline cells of people undergoing gene therapy in order to remove the faulty genes from the gene pool. This is currently beyond our resources on a large scale, but will almost certainly become standard practice in the future.

All of this got me thinking: are there any other genes that might be becoming more or less prevalent in the population as a result of medical science and/or civilisation in general? If so, can we prevent/encourage/direct this process and at what point do we draw the line between this and full-blown genetic engineering of human populations? This is the subject of this post, but before we get into this, I want to first give a little extra detail about how evolution works on a genetic scale.

Imperfect copies

Evolution by natural selection, as I'm sure you're aware, is simply the selection of traits within organisms based on the way in which those traits affect an organism's fitness. An organism with an advantageous trait is more likely to survive and reproduce, and so that trait becomes more and more common within the population. Conversely, traits that disadvantage the organism are quickly lost through negative selection, as the organism is less likely to reproduce. The strength of selection in each case is linked to how strongly positive or negative that trait is - i.e. a mutation that reduces an animal's strength by 5% might be lost only slowly from a population, whereas one that reduces it by 90% will probably not make it past one generation. In turn, the strength of that trait is determined by the precise genetic change that has occurred to generate it.

Monday, May 5, 2014

The human machine: replacing damaged components

The previous post in this series can be found here.

The major theme of my 'human machine' series of posts has been that we are, as the name suggests, machines; explicable in basic mechanical terms. Sure, we are incredibly sophisticated biological machines, but machines nonetheless. So, like any machine, there is theoretically nothing stopping us from being able to play about with our fundamental components to suit our own ends. This is the oft-feared spectre of 'genetic modification' that has been trotted out in countless works of science fiction, inextricably linked to concepts of eugenics and Frankenstein-style abominations. Clearly genetic modification of both humans and other organisms is closely tied to issues of ethics and biosafety, and must obviously continue to be thoroughly debated and assessed at all stages, but in principle there is no mechanistic difference between human-driven genetic modification and the mutations that arise spontaneously in nature. The benefit of human-driven modification, however, is that it has foresight and purpose, unlike the randomness of nature. As long as that purpose is for a common good and is morally defensible, then in my eyes such intervention is a good thing.

One fairly obvious beneficial outcome of genetic modification is in the curing of various genetic disorders. Many human diseases are the result of defective genes that can manifest symptoms at varying times of life. Some genetic disorders are the result of mutations that cause a defect in a product protein, others are the complete loss of a gene, and some are caused by abnormal levels of gene activity - either too much or too little. A potential means to cure such disorders is to correct the problematic gene within all of the affected tissue. The most efficient way to do that would be to correct it very early in development, since if you corrected it in the initial embryo then it would be retained in all of the cells that subsequently develop from that embryo. This is currently way beyond our technical limitations for several reasons. Firstly, we don't routinely screen embryos for genetic abnormalities and so don't know which ones might need treatment. Secondly, the margin for error in this kind of gene therapy is incredibly narrow, as you have to ensure that every single cell that the person has for the rest of their life will not be adversely affected by what you do to the embryonic cells at this early stage - we're not there yet. Thirdly, our genetic technology is not yet sophisticated enough to allow us to remove a damaged gene and replace it with a healthy one in an already growing embryo - the best we can do is to stick the healthy gene in alongside the defective one and hope it does the job. There is certainly no fundamental reason why our technology could not one day reach the stage where this kind of procedure is feasible, but we are a long way off yet.

So, for the time being, what can we do? Well, instead of treating the body at the embryonic stage, the next best approach is to specifically treat the affected cells later on in life. This involves identifying the problematic gene and then using a delivery method to insert the correct gene into whatever tissues manifest the disease, preferably permanently. This is broadly known as gene therapy, and it is one of the most promising current fields of 'personalised' medicine.