Wednesday, September 9, 2015

The future is now! The rise of genome editing


It’s an exciting time to be a biologist! Every few years it seems like there is another significant technical breakthrough that allows biological research either to speed up exponentially, or to enter areas that were previously inaccessible. In just the last decade or so we’ve seen the publication and digitisation of the human genome (without which most current life sciences work would be either impossible or impractical), the development of super-resolution microscopy (allowing us for the first time to see live biological processes on a truly molecular scale), dramatic improvements in DNA sequencing (making it economical on a large scale), and the invention or refinement of a whole range of technologies (enzyme-conjugation systems, flow cytometry, fluorescence-activated cell sorting, etc.) that won’t mean much to anyone outside the field but that have revolutionised the way research is done. It’s been a long road, but it finally seems like the ambitions of researchers are starting to be matched by the available technology, whether it be computational, mechanical, chemical, or biological.

The latest development taking the biology world by storm is the enormous progress recently made in an area with incalculable potential in both academic and clinical contexts: genome editing. In this post I will try to explain these recent advances, why researchers are excited, and why you should be too!

What is genome editing?

Genome editing is pretty much what you’d expect from the name: editing the DNA sequence within the genome of a particular cell. This can involve adding DNA, removing DNA, swapping some DNA for other DNA, or moving DNA around within the genome. It is difficult to overstate how powerful a tool genome editing can be when it comes to biological research. Much of the work done in the molecular life sciences is about trying to work out how various molecules fit into the whole machine that is an organism – genome editing allows researchers to directly tinker with these molecules (typically proteins, which are of course encoded by their associated DNA sequences) and observe the effects. This could involve removing the gene encoding a given protein from an organism and seeing what defects arise. Alternatively, you could introduce a specific mutation in a gene to see if it has functional relevance, or fuse DNA encoding a fluorescent marker protein to the end of the gene for your protein of interest to see where that protein goes and what it’s up to. Genome editing elevates researchers from pure observers to direct manipulators of a system.

Wednesday, August 26, 2015

Hypothesis: The future of peer review?

If I could recreate the way research results are quality-checked and revealed to the world, I would probably change almost all of what is currently done. I think the isolated scientific paper is a product of the 20th century, being imposed on the 21st purely because of inertia. A better solution would be to give a "living paper" to each broad research project an individual researcher works on. This living paper could then be updated as results change or improve. In such a system I would probably have ~5 living papers so far in my career, instead of ~20 old-style papers. Or, better still, there could be a large wiki, edited, annotated, moderated and discussed by the scientific community as knowledge is gained.

Even if you wish to keep "the paper" as the way science is presented, I think that the journal system, while invaluable in the 20th century, also exists in the 21st century only due to inertia. Pre-print servers like the arXiv are already taking care of the distribution of papers, and peer review, which is responsible for the quality-check side of things, can (and might?) be organised collectively by the community on top of that. But why should we stick with peer review anyway? Could there be a better way?

Firstly, let me stress: peer review is definitely an incredibly effective way to advance knowledge accurately and rapidly. The best ideas are the ones that withstand scrutiny; the better an idea is, the more scrutiny it can withstand. Therefore, holding every idea up to as much scrutiny as possible is the best way to proceed. However, by peer review I simply mean criticism and discussion by the rest of the scientific community. I think the way peer review is currently done (at least, what people normally mean by "peer review") is very nearly worthless; when you factor in the time required to review and respond to reviews, as well as the money spent facilitating it all, I'd be tempted to claim that it has a negative impact on research overall. The real peer review is what happens in informal discussions: via emails, at conferences, over coffee, in the corridor, on Facebook, in other papers, etc. The main benefit of the current method of peer review is simply that the threat of peer review forces people to work harder to write good papers. If you removed that threat, without replacing it with something else, then over time people would get lazy and paper quality would degrade, probably quite a lot.

But that would only happen if the 20th century form of peer review were removed without being replaced by something from the 21st century. I wrote above that the real form of peer review happens through conversations at conferences, in emails, etc. The rapid access to papers that we now have makes this possible. In the early-to-mid 20th century, when the (expensive) telephone was the only way to communicate rapidly with anyone outside your own institute, word of mouth spread slowly. Therefore some a priori tick of approval was needed, confirming the quality of a paper before it was distributed; hence peer review. But now communication can and does happen much more rapidly. Today, if a paper in your field is good, people talk about it. It gets discussed in emails amongst collaborators, the discussion spreads into departmental journal clubs, and word about the quality of the paper is disseminated that way. It's worth emphasising that, at least in high energy physics and cosmology, this often happens long before the paper is technically "published" via slow, conventional peer review.

However, this information probably still doesn't disseminate as widely or as quickly as might be ideal, given the tools of the web today. What would be ideal is to find a way for the discussions that do happen to be immediately visible somewhere. For example, what if, instead of having an anonymous reviewer write a review that only the paper's authors and journal editor ever see, there was a facility for public review (either anonymous or not), visible at the same site where the paper lives, where the authors' replies are also visible, and where other interested people can add their views? The threat of peer review would still be there. If a paper was not written with care, people could point this out in a review, and that review would remain unless or until the paper was revised. Moreover, negative reviews that would hold up a paper could also be publicly seen. Then, if a reviewer makes unfair criticisms, or misunderstands a paper, the authors could make this clear and readers could judge who is correct. Or, even better, readers could add to the discussion and perhaps enlighten both the authors and the reviewer (with words that all other readers can see)!

Wednesday, July 1, 2015

Cosmological Backreaction

In the last few weeks a disagreement has surfaced on the arXiv. The disagreement concerns whether backreaction is important in cosmology.

To summarise my take on the whole thing, it seems to me that the two sides of this disagreement are, to a large extent, talking past each other. I don't doubt that there is genuine disagreement where definitions overlap, but, at least to my present understanding, much of the disagreement actually just lies in what should be considered "backreaction". There seems to be a secondary, though related, disagreement concerning whether one should start with observations and use them to methodically construct a model of the universe, or instead start with a model of the universe and then see whether it fits the data. The side that favours first constructing the model would say that a model without any backreaction is entirely self-consistent and fits the data well enough not to be concerned. To the other side this still doesn't prove that backreaction must be negligible.

But OK, what is cosmological backreaction?

Backreaction itself is quite a common term in the physical sciences.

In a surprisingly large proportion of calculations about nature, we analyse some sort of interesting object that exists within some larger external system, in a scenario where the behaviour of the object has no measurable influence on the overall system. Calculating predictions then essentially splits into two independent steps: first calculate what the background system is doing, and then calculate how the interesting object reacts to that. For example, when calculating a satellite's orbit we treat the Earth's gravitational field as a fixed background, because the satellite's pull on the Earth is utterly negligible.

However, this type of scenario isn't always accurate. When it isn't, the background system could be described as "backreacting" to the object's behaviour, and the two calculations can no longer be cleanly separated.
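
To make the two-step picture concrete, here is a toy numerical sketch (my own illustration, not taken from any of the papers in this debate): a light body orbiting a heavy one, computed once with the heavy body held fixed as a background, and once with both bodies free to move and react to each other.

    # Toy illustration of "background + object" versus a fully coupled calculation.
    # "No backreaction": the heavy body is a fixed background and only the light body moves.
    # "With backreaction": both bodies respond to each other's gravity.
    import numpy as np
    from scipy.integrate import solve_ivp

    G, M, m = 1.0, 1.0, 1e-3   # units where G = M = 1; m is the light "object"

    def fixed_background(t, y):
        # Light body moving in the field of a heavy body fixed at the origin.
        r, v = y[:2], y[2:]
        a = -G * M * r / np.linalg.norm(r)**3
        return np.concatenate([v, a])

    def with_backreaction(t, y):
        # Both bodies move; the heavy body "backreacts" to the light one.
        r1, v1, r2, v2 = y[:2], y[2:4], y[4:6], y[6:8]
        d = r2 - r1
        acc = G * d / np.linalg.norm(d)**3
        return np.concatenate([v1, m * acc, v2, -M * acc])

    y0_obj = [1.0, 0.0, 0.0, 1.0]             # light body on a roughly circular orbit
    y0_full = [0.0, 0.0, 0.0, 0.0] + y0_obj   # heavy body starts at rest at the origin
    t_span, t_eval = (0, 50), np.linspace(0, 50, 2000)

    sol_bg = solve_ivp(fixed_background, t_span, y0_obj, t_eval=t_eval, rtol=1e-9)
    sol_fl = solve_ivp(with_backreaction, t_span, y0_full, t_eval=t_eval, rtol=1e-9)

    # With m/M = 1e-3 the difference stays small (and shrinks as m/M does);
    # crank m up towards M and neglecting the backreaction becomes a bad approximation.
    drift = np.abs(sol_fl.y[4:6] - sol_bg.y[0:2]).max()
    print(f"max position difference of the light body: {drift:.3e}")

The analogous question in cosmology is whether treating the smooth expanding background as unaffected by the lumpy structure growing within it is a similarly good approximation.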

Wednesday, April 29, 2015

Mysterious news stories about supervoids

Early last week a news story broke about a supervoid. The supervoid was claimed to be a number of things, from an explanation for "the cold spot", to the biggest "structure" yet found in the universe, to just "mysterious".

Whether it is a structure or not depends entirely on how you define structure, so I won't weigh in on that. However, even if you do allow it to be a structure, it isn't the biggest structure yet found. It's hard to do a like-for-like comparison with other "superstructures", but there are regions of the universe where the density of observable matter is lower over a wider region, so by any definition I can think of, this structure has been beaten.

The cold spot is a region in the cosmic microwave background (CMB) with a temperature profile that is somewhat unexpected (due to a combination of a cold central spot and a hot ring around it). Whether this void could be the explanation of the cold spot has been addressed in this paper and this blog post by Sesh. It can't, not without a significant deviation from General Relativity (and a deviation big enough that it would be very strange for it not to have been seen elsewhere). It's worth stressing right now that it isn't the coldness of the cold spot that is itself anomalous. This is a subtle point, so just about anyone who says "the cold spot is too cold" can be forgiven for the mistake, but in reality the cold spot isn't too cold. In fact it has more or less exactly the coldness expected of the coldest spot in the CMB. What isn't expected is the hot ring around such a cold spot. Actually, it's worth stressing further that it isn't even the hot ring that is, by itself, anomalous; such a hot ring is also quite likely in the CMB. The anomalousness of the cold spot comes from the fact that both of these features are present, right next to each other. I explained this curiosity in this blog entry, but it is worth repeating.

Now I want to quickly address the claim that this supervoid is mysterious. The quantitative basis for the claim comes from the statements in the paper about the void that it is "at least a \(3.3 \sigma\) fluctuation" and that "\(p=0.007\) ... characterizing the cosmic rarity of the supervoid". However (and this is the crucial point), what these numbers quantify is the probability that something as extreme as this void could exist at a random point of the universe (or, more precisely, at a random point within the part of the universe seen by a particular observational survey). What these numbers do not quantify is the probability that the whole survey could have seen something this extreme. These are two separate statistical questions, and the relevant one for claiming mysteriousness is the second. I'll try to estimate this second probability.

I don't have any reason to doubt the numbers they quote for the probability that this void could exist at a random line of sight in the survey. If I use the quoted radius, density contrast and redshift of the void, I also calculate it to be a \(\sim 3\sigma\) fluctuation in the matter field. This can be done by first calculating the root-mean-square of the density (contrast) field of the universe when it is smoothed over a particular radius. This quantity, "\(\sigma_R\)", is commonly used in large scale structure. Then, the ratio of the void's density (contrast) to the \(\sigma_R\) value for the void's radius gives \(\sim 3.5\), so I trust that the more sophisticated analyses in the paper are correct, or at least aren't obtaining wildly wrong answers. If one assumes (probably validly) that the large scale density field of the universe has a Gaussian distribution, this can be translated into a probability that the observed fluctuation could occur at any random position in the universe.
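
For reference, this is the standard large-scale-structure definition being used here (with a spherical top-hat smoothing window; the paper uses more sophisticated machinery, but this is the essence):

\[
\sigma_R^2 \;=\; \frac{1}{2\pi^2}\int_0^\infty k^2\, P(k)\, W^2(kR)\,\mathrm{d}k, \qquad W(x) \;=\; \frac{3\,(\sin x - x\cos x)}{x^3},
\]

where \(P(k)\) is the matter power spectrum, and the rough significance of a void of density contrast \(\delta\) and radius \(R\) is then \(|\delta|/\sigma_R\).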

So, the crucial question that needs to be asked before calling this supervoid mysterious is whether the survey used to find it saw enough of the universe to expect to witness an event this rare. The size of the void on the sky is approximately \(10\) degrees (as quoted in their abstract). This means it covers an area of approximately \(100\) square degrees on the sky. The void was found using data from the WISE and 2MASS all-sky surveys. However, the whole sky isn't usable for robust analysis, due to foregrounds, the Galactic plane, etc. Thankfully for our purposes, the authors of the supervoid paper also wrote a paper about the catalogue of galaxies they used to find the supervoid, and in the abstract of that paper they estimate that their catalogue covers 21,200 square degrees of the sky.

What does this mean when we pull it all together? Well, the catalogue used to find this \(100\) square degree void covered 21,200 square degrees of the sky. Therefore, there were \(\sim 21200/100 \simeq 200\) independent \(100\) square degree patches of the sky seen by the survey. Using their own probability of \(p=0.007\) for this void existing at any particular line of sight, this gives a very approximate estimate of the expected number of under-dense regions of the universe at least as extreme as the "mysterious" supervoid. The answer is \(N \sim 200 \times 0.007 = 1.4\).
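
Here is the same back-of-the-envelope arithmetic as a small Python snippet (my own sketch, using the numbers quoted above; keeping the unrounded patch count gives \(\sim 1.5\) rather than \(1.4\), which changes nothing, and the last line is an extra Poisson step, assuming independent patches, for the chance of seeing at least one such void):

    import math

    void_area = 10.0 ** 2          # ~10 degrees across, treated as ~100 square degrees
    survey_area = 21200.0          # sky coverage of the WISE-2MASS catalogue (square degrees)
    p_per_patch = 0.007            # the paper's probability for a single line of sight

    n_patches = survey_area / void_area          # number of independent void-sized patches
    expected = n_patches * p_per_patch           # expected number of voids at least this extreme
    p_at_least_one = 1.0 - math.exp(-expected)   # Poisson estimate, assuming independent patches

    print(f"independent patches: ~{n_patches:.0f}")                       # ~212
    print(f"expected number of such voids: ~{expected:.1f}")              # ~1.5
    print(f"chance the survey sees at least one: ~{p_at_least_one:.0%}")  # ~77%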

So, not only is the supervoid not actually mysterious, it is in fact more or less exactly in line with naive expectations!


Wednesday, March 25, 2015

The science of three-parent children



2015 has already been a significant year in the field of human medicine, as February saw the UK become the first country in the world to legalise the creation of so-called 'three-parent' children. This marks a milestone for preventative genetics and embryology and offers hope to many people around the UK and beyond who would otherwise be unable to have healthy children. The votes to bring this into law were won fairly comfortably by those in favour (382 vs 128 in the House of Commons, the lower house, and 280 vs 48 in the House of Lords, the upper house), although there have been a number of vocal opponents of the measure. In this post I hope to explain just what the process involves, and why it is considered necessary by the majority of British MPs.

A cellular energy crisis


Mitochondria, as you may recall from a previous post, are the powerhouses of our cells. They metabolise a range of molecules derived from food and use them to generate energy in the form of another molecule, ATP. You would not last long without them: just try holding your breath for a few minutes, since anaerobic respiration is all a cell without mitochondria would be able to manage. It is not surprising, therefore, that problems with mitochondrial function can be fairly nasty. Mitochondrial diseases are a range of genetic disorders in which the proper role of the mitochondria is disrupted due to mutations in one of the genes responsible for making mitochondrial proteins. These diseases never completely knock out mitochondrial function (since an embryo with such a disease could never survive to full development) but still cause severe symptoms in sufferers. Depending on the exact mutation, these can include blindness, deafness, diabetes, muscle weakness, cardiac problems, and problems with the central nervous system. Prognoses vary from one disorder to the next, but they invariably shorten lifespan, often severely. Sufferers of Leigh's disease, for example, rarely live past 7 years of age, and spend their short lives experiencing muscle weakness, lack of control over movement (particularly of the eyes), vomiting, diarrhoea, an inability to swallow, and heart problems, among other symptoms.

Tuesday, February 3, 2015

Combined constraints from BICEP2, Keck, Planck and WMAP on primordial gravitational waves

This week, the joint analysis of BICEP2 (plus BICEP2's successor, the Keck Array) and Planck has finally arrived. The result is more or less what was expected: what BICEP2 saw last year in the B-mode polarisation signal of the CMB was not actually primordial gravitational waves (as had originally been hoped and claimed), but was unfortunately due to dust in the Milky Way. Such is life. Though we did of course have the best part of a year to come to grips with this reality.

[Figure: Combined constraint on \(r\) from polarisation and temperature measurements (in blue). Freshly digitised in the spirit of modern cosmology. Gives \(r\lesssim 0.09\) at \(95\%\) confidence.]

As a result of subtracting the dust component in BICEP2/Keck's signal (obtained by comparing the measurements from BICEP2/Keck and Planck), the final constraint on the "tensor to scalar ratio" (or \(r\)) from the BICEP2/Keck measurement is that \(r<0.12\) at \(95\%\) confidence. This \(r\) parameter essentially measures the amplitude of a primordial gravitational wave signal, so the net result is that the subtraction of dust takes BICEP2's high-significance measurement of non-zero \(r\) and converts it into simply an upper bound.
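
For anyone who wants the definition (this is the standard convention, not something spelled out above): \(r\) is the ratio of the amplitudes of the primordial tensor and scalar power spectra, evaluated at a chosen pivot scale \(k_*\),

\[
r \;\equiv\; \frac{\mathcal{P}_t(k_*)}{\mathcal{P}_s(k_*)},
\]

so \(r=0\) corresponds to no primordial gravitational waves at all.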

I've seen some comments on blogs, in the media, on Twitter, etc., that there is still evidence of some sort of excess signal in BICEP2/Keck over and above the dust, but I can't see any evidence of that in any of their published results. The final likelihood for \(r\) (shown above in black) is consistent with \(r=0\) at less than \(1\sigma\) (i.e. \(r=0\) is less than one standard deviation away from the maximum likelihood value). In fact, it would seem that the measurement of the dust obtained by comparing BICEP2/Keck's measurements with Planck's has been so good that the B-mode constraint on \(r\) from BICEP2/Keck is now competitive with (or even slightly better than) the constraint arising from temperature measurements of the CMB. This was always going to happen at some point in the future, and it seems that this future has now arrived.