
Wednesday, October 22, 2014

Why is Ebola so scary?


Unless you've been living under a reasonably sizable rock for the last few months, it can't have escaped your attention that the world has yet another terror to throw on the mountain of things we should be scared of: Ebola. The ongoing situation in Africa is the largest Ebola outbreak in history and has seen the disease spread beyond Africa for the first time. At the time of writing this, nearly 10,000 people have become infected, almost half of whom have died. This number is growing...rapidly.
Ebola cases and deaths in the 2014 outbreak.
In this post, I will describe what Ebola is, why it is so scary, and what chances we have of defeating it.

What is Ebola?

'Ebola' as a biological term actually refers to a genus of five viruses within the family Filoviridae, four of which can cause the disease generally called Ebola, but more specifically known as Ebola virus disease. The recent outbreak has been caused by just one of these viruses, which used to be known as Zaire ebolavirus but is now simply 'Ebola virus', given that it is the most common among humans, and Zaire no longer exists! It doesn't look a whole lot like most viruses, it has to be said - with long, tubular filaments waving around rather than the tight, spherical viruses we're used to seeing for 'flu, HIV, and most others.

The Ebola virus.

Friday, June 27, 2014

The human machine: obsolete components



The previous post in this series can be found here.

In my last post in this series I described some of the ways in which gene therapy is beginning to help in the treatment of genetic disorders. A caveat of this (which was discussed further in the comments section of that post) is that currently available gene therapies do not remove the genetic disorder from the germline cells (i.e. sperm or eggs) of the patient and so do not protect that person's children against inheriting the disease. This could be a problem in the long run as it may allow genetic disorders to become more common within the population. The reason for this is that natural selection would normally remove these faulty genes from the gene pool as their carriers would be less likely to survive and reproduce. If we remove this selection pressure by treating carriers so that they no longer die young, then the faulty gene can spread more widely through the population. If something then happened to disrupt the supply of gene therapeutics - conflict, disaster, etc. - then a larger number of people would be adversely affected and could even die.

Although this is a significant problem to be considered, it is one that is fairly simply avoidable by screening or treating the germline cells of people undergoing gene therapy in order to remove the faulty genes from the gene pool. This is currently beyond our resources on a large scale, but will almost certainly become standard practice in the future.

All of this got me thinking: are there any other genes that might be becoming more or less prevalent in the population as a result of medical science and/or civilisation in general? If so, can we prevent/encourage/direct this process and at what point do we draw the line between this and full-blown genetic engineering of human populations? This is the subject of this post, but before we get into this, I want to first give a little extra detail about how evolution works on a genetic scale.

Imperfect copies

Evolution by natural selection, as I'm sure you're aware, is simply the selection of traits within organisms based on the way in which those traits affect that organism's fitness. An organism with an advantageous trait is more likely to survive and reproduce and so that trait becomes more and more common within the population. Conversely, traits that disadvantage the organism are quickly lost through negative selection as the organism is less likely to reproduce. The strength of selection in each case is linked to how strongly positive or negative that trait is - i.e. a mutation that reduces an animal's strength by 5% might be lost only slowly from a population, whereas one that reduces it by 90% will probably not make it past one generation. In turn, the strength of that trait is determined by the precise genetic change that has occurred to generate it.
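The link between a trait's fitness cost and how quickly selection removes it can be sketched numerically. Below is a minimal toy simulation using a standard haploid selection model - this is purely illustrative and not from the post, and the numbers are made up:

```python
# Toy model of negative selection: each generation, the frequency q of a
# deleterious allele falls according to how strongly it reduces fitness.
# Haploid model for simplicity; carriers have fitness (1 - s) relative
# to non-carriers.

def generations_until_rare(q0, s, threshold=0.01, max_gen=100000):
    """Count generations until allele frequency q drops below threshold."""
    q, gen = q0, 0
    while q >= threshold and gen < max_gen:
        q = q * (1 - s) / (1 - s * q)  # standard haploid selection update
        gen += 1
    return gen

# A 5% fitness cost takes far longer to purge than a 90% cost:
slow = generations_until_rare(0.5, 0.05)
fast = generations_until_rare(0.5, 0.90)
print(slow, fast)
```

Running this, the strongly deleterious allele is essentially gone within a couple of generations, while the mildly deleterious one lingers for many tens of generations - exactly the asymmetry described above.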

Monday, May 5, 2014

The human machine: replacing damaged components


The previous post in this series can be found here.


The major theme of my 'human machine' series of posts has been that we are, as the name suggests, machines; explicable in basic mechanical terms. Sure, we are incredibly sophisticated biological machines, but machines nonetheless. So, like any machine, there is theoretically nothing stopping us from being able to play about with our fundamental components to suit our own ends. This is the oft-feared spectre of 'genetic modification' that has been trotted out in countless works of science fiction, inextricably linked to concepts of eugenics and Frankenstein-style abominations. Clearly genetic modification of both humans and other organisms is closely tied to issues of ethics and biosafety, and must obviously continue to be thoroughly debated and assessed at all stages, but in principle there is no mechanistic difference between human-driven genetic modification and the mutations that arise spontaneously in nature. The benefit of human-driven modification, however, is that it has foresight and purpose, unlike the randomness of nature. As long as that purpose is for a common good and is morally defensible, then in my eyes such intervention is a good thing.

One fairly obvious beneficial outcome of genetic modification is in the curing of various genetic disorders. Many human diseases are the result of defective genes that can manifest symptoms at varying times of life. Some genetic disorders are the result of mutations that cause a defect in a product protein, others are the complete loss of a gene, and some are caused by abnormal levels of gene activity - either too much or too little. A potential means to cure such disorders is to correct the problematic gene within all of the affected tissue. The most efficient means to do that would be to correct it very early in development, since if you corrected it in the initial embryo then it would be retained in all of the cells that subsequently develop from that embryo. This is currently way beyond our technical limitations for several reasons. Firstly, we don't routinely screen embryos for genetic abnormalities and so don't know which ones might need treatment. Secondly, the margin for error in this kind of gene therapy is incredibly narrow as you have to ensure that every single cell that the person has for the rest of their life will not be adversely affected by what you do to the embryonic cells in this early stage - we're not there yet. Thirdly, our genetic technology is not yet sophisticated enough to allow us to remove a damaged gene and replace it with a healthy one in an already growing embryo - the best we can do is stick in the healthy gene alongside the defective one and hope it does the job. There is certainly no fundamental reason why our technology could not one day reach the stage where this kind of procedure is feasible, but we are a long way off yet.

So, for the time being, what can we do? Well, instead of treating the body at the embryonic stage, the next best approach is to treat the affected cells specifically later on in life. This involves identifying the problematic gene and then using a delivery method to insert the correct gene into whatever tissues manifest the disease, preferably permanently. This is broadly known as gene therapy, and is one of the most promising current fields of 'personalised' medicine.

Monday, March 24, 2014

The human machine: finely-tuned sensors


The previous post in this series can be found here.

All good machines need sensors, and we are no different. Everyone is familiar with the five classic senses of sight, smell, touch, taste, and hearing, but we often forget just how amazingly finely tuned these senses are, and many people have little appreciation of just how complex the biology behind each sense is. In this week's post, I hope to give you an understanding of how one of our senses, smell, functions and how, in light of recent evidence, it is far more sensitive than we previously thought.

Microscopic sensors

The olfactory system is an extremely complex one, but it is built up from fairly simple base units. The sense of smell is of course located in the nose, but more specifically it is a patch of tissue approximately 3 square centimetres in size at the roof of the nasal cavity that is responsible for all of the olfactory ability in humans. This is known as the olfactory epithelium and contains a range of cell types, the most important of which is the olfactory receptor neuron. There are roughly 40 million of these cells packed into this tiny space and their job is to bind odorant molecules and trigger neuronal signals up to the brain to let it know which odorants they've detected. They achieve this using a subset of a huge family of receptors that I've written about before, the G protein-coupled receptors (GPCRs). These receptors are proteins that sit in the membranes of cells and recognise various ligands (i.e. molecules for which they have a specific affinity) and relay that information into the cell. There are over 800 GPCRs in the human genome and they participate in a broad range of processes, from neurotransmission to inflammation, but the king of the GPCRs has to be the olfactory family, which make up over 50% of all the GPCRs in our genome.

Monday, February 3, 2014

The human machine: picoscale engineering





The previous post in this series can be found here.

Over the course of my 'human machine' series of posts I've tried to convey the intricacy and beauty of our biological engineering, and demonstrate that we are incredibly well-engineered machines whose complexity and originality go all the way down to the atomic level. In this week's post, I will be exemplifying this with one of the best cases that I can think of; how we transport oxygen around our bodies. I feel that this is a great story to tell because it is one that most people might think that they know well, but that actually is far more complex and subtle than it may appear, and that demonstrates how our lives are highly dependent on perfectly evolved processes working on the subatomic scale.

"It will have blood, they say."

I'm sure that anyone reading this blog is fully aware that we need oxygen to survive (although if you want a more detailed explanation of exactly why then I direct your attention to a previous post of mine available here), and anyone remembering their primary school biology will know that oxygen is transported around the body by the circulatory system, i.e. the blood. Most of the cells within your blood are the famous red blood cells (to distinguish them from the immune cells - the white blood cells), which are, unsurprisingly, responsible for blood's distinctive colour - earning them the respect of horror movie aficionados everywhere. You have roughly 20-30 trillion red blood cells in you as you read this, each of which is about 7 microns (i.e. 7 millionths of a metre) in diameter. They shoot around your body, taking roughly 20 seconds to make one circulation, and have just one job: take oxygen from the lungs (where there's lots of it) to the tissues (where there's not). So specific are they to this job that they don't even bother having a nucleus, thereby removing all possibility of them doing anything else.


Human red blood cells - you make 2 million every second!

Monday, December 9, 2013

Cubism. Realism. Sciencism.

Post-impressionist carcinoma.

This week's post from me will be relatively brief as I am currently on honeymoon, and although Shaun somehow manages to find time on his travels to blog about his conference experiences I doubt my new wife would be too happy if I did likewise! So, in lieu of my intended Human Machine series post (which will be coming early in the new year), I wanted to bring your attention to something wonderful going on at the Cell Picture Show.

The Cell Picture Show has appeared in the Trenches before (here and here), but this recent instalment is to me even more original and interesting than anything they've had before. The image above has two halves, both of which are strikingly reminiscent of Van Gogh's famous and beautiful Starry Night. They look like the product of some brilliant mind with an extraordinary mastery of colour and texture. In fact, the image on the left is a cross section of mouse skin containing basal cell carcinoma, with various components stained different colours and imaged using a fluorescence microscope. The image on the right is an artist's rendering of the same image using different coloured fabrics stitched together. This is part of Cell's exhibition 'Art Under the Microscope', in which images of biological samples are recreated into works of art by professional artists. The image above is just one example, but there are many others of similar standard available in the exhibition. The technical prowess of the researchers who obtained the original images is impressive, as are the aesthetic abilities of the artists who recreated them.

A fire-like network of neural stem cells in the human brain (left) and their artistic equivalent (right).

What I find so exciting about this exhibition is that it is redefining what can be the subject of art, and what can be the source of artistic inspiration. Nature has been inspiring artists for thousands of years, why should that be restricted to what we can see with our own eyes? We are now at a level of technology that we can begin to unveil many of the secrets that nature had previously been hiding from art. This should be exciting to both scientists and artists. Artists can be excited by the wealth of new subject matter that is beginning to open up to them, and scientists can be excited by the prospect of art adding to the ever-growing popularity and appreciation of science. Perhaps one day there will be great debates in artistic circles about new avant-garde artists who paint their proteins in a controversial way, much like the debates between the surrealists and romanticists on how to depict more macroscopic areas of nature. 

Although 'Art Under the Microscope' is a small exhibition, it nonetheless marks a growing trend in the use of science as inspiration for art. I sincerely hope that this continues as it will enrich both science and art, and help to blur the boundaries between them. Art has been very successful in entering many conscious aspects of our daily lives, and we are the better for it. Science is still catching up in many regards, although to be fair it hasn't had all that long to make up the difference. If science were as exposed in the public consciousness and as everyday as art is, then I believe we would benefit similarly as we have from the ubiquity of art. The fusion of the two is a match made in heaven that is finally beginning to take hold. I look forward to the days when parks are adorned with pieces of science-based public art, and the spectrum of human endeavour is appreciated as a single entity rather than as the separated, delineated pigeon holes of 'art' and 'science' as discrete subjects.


Monday, November 25, 2013

How does one measure the mass of a neutrino, using cosmology?

I'm going to tell you how, soon, humanity might measure the masses of neutrinos just by observing past events in the universe. I like this topic because it is one of the few situations in fundamental physics where a measurement of the greater universe might detect something about fundamental particles and/or their interactions before we manage to measure it in a lab. Another example is the existence of dark matter; however, the mass of dark matter will almost certainly be first measured in a lab. Perhaps with neutrinos it will go in the other direction?

What is a neutrino?

I guess that before telling you how to measure a neutrino's mass, it might be pertinent to tell you what a neutrino is and how we can know it has mass before we've measured that mass. Well...

When an atomic nucleus decays, the decay products we see are other nuclei, electrons and/or positrons. These visible products always carry less energy and momentum than the amount that the initial nucleus had. This suggests strongly that some unknown other particle is also being created in the decay and that we just can't see it. This hypothetical particle was dubbed the neutrino and when theories were developed for the force responsible for nuclear decays, the neutrino became an important part of them. And, eventually, neutrinos were detected directly. It took a while because neutrinos interact incredibly weakly, which means you need either a lot of neutrinos or a lot of transparent stuff for the neutrino to interact with (or both) before you will see them.
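The missing-energy argument can be made concrete with neutron beta decay - the standard textbook example, not spelled out in the post:

```latex
% The observed products (proton and electron) carry less energy than the
% neutron released, and the electron's energy spectrum is continuous
% rather than a single line -- both point to a third, invisible particle
% sharing the energy:
n \;\to\; p + e^- + \bar{\nu}_e,
\qquad
E_n = E_p + E_{e^-} + E_{\bar{\nu}_e}.
```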

Initially, it was assumed that neutrinos are massless. They don't need to be massless, but for a long time there was no evidence that they did have mass, so the simplest assumption was that they didn't. There are three types of neutrinos: those emitted in interactions with electrons, those emitted in interactions with muons and those emitted in interactions with tau particles. If neutrinos were massless, then a neutrino emitted as an electron neutrino would always remain an electron neutrino. Similarly, a muon neutrino would always remain a muon neutrino. However, if neutrinos do have mass, then a neutrino emitted in an interaction with an electron will actually travel as a superposition of an electron neutrino, muon neutrino and tau neutrino. The net result is that this neutrino could be detected as a different type of neutrino. Therefore, a smoking-gun signature to look for when determining whether neutrinos have mass is this characteristic effect whereby one type of neutrino appears to oscillate into another type of neutrino.
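For two flavours, the standard oscillation formula makes the point explicit: the effect depends on the difference of squared masses, so seeing oscillations at all means at least one neutrino mass is non-zero. (This is the textbook two-flavour result, given here for illustration; the post is discussing the full three-flavour case.)

```latex
% Probability that a muon neutrino is detected as an electron neutrino
% after travelling a distance L with energy E (natural units):
P(\nu_\mu \to \nu_e) = \sin^2(2\theta)\,
\sin^2\!\left(\frac{\Delta m^2 \, L}{4E}\right),
\qquad
\Delta m^2 = m_2^2 - m_1^2,
% where theta is the mixing angle between the two flavours.
```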

This effect was then seen and seen and seen again. Neutrinos appear to have mass. From the perspective of particle physics this is a bit weird. Neutrinos must have really small masses and it is unclear why these masses are so small. Unfortunately, this mechanism of neutrino oscillations doesn't directly give the masses of the neutrinos. It can, however, be used to measure the differences between the squared masses of the neutrinos, thus setting lower bounds on their possible masses.
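As a rough sketch of what those lower bounds look like: if the lightest neutrino were exactly massless, the measured splittings would fix the other two masses from below. The numerical values here are the commonly quoted approximate measurements, used as assumptions of this sketch rather than figures from the post:

```python
import math

# Oscillations measure differences of squared masses, not the masses
# themselves.  Assuming the lightest neutrino is massless gives a
# lower bound on the other two.
dm2_sol = 7.5e-5   # eV^2, 'solar' splitting (approximate, assumed)
dm2_atm = 2.5e-3   # eV^2, 'atmospheric' splitting (approximate, assumed)

m2_min = math.sqrt(dm2_sol)   # lower bound on the middle mass
m3_min = math.sqrt(dm2_atm)   # lower bound on the heaviest mass
print(f"m2 >= {m2_min:.3f} eV, m3 >= {m3_min:.3f} eV")
```

So at least one neutrino must weigh in at roughly 0.05 eV or more - tiny compared to the electron's ~511,000 eV, which is exactly the "really small masses" puzzle mentioned above.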

What has this got to do with cosmology?

Monday, October 28, 2013

The human machine: non-standard components


The previous post in this series can be found here.

In a previous post I alluded to the origins of mitochondria, the tiny chemical power plants found within all our cells. These hard-working machines are responsible for aerobic respiration, which is the way in which the vast, vast majority of the energy you use is released from the chemical energy in the food you eat. The way in which they do this is very cool, involving currents of electrons and protons in a manner very similar to a standard battery. If you're interested in this then I direct you to my earlier post on this topic here, but in this post I will be discussing a rather odd thing about mitochondria: they're not in fact human...

What do I mean by this? Well, obviously they are, kind of, human since they're inside all of us, they're born with us and die with us, they don't wander off on their own to live an independent life elsewhere. Nonetheless, mitochondria are different to the rest of the machinery in our cells - they have their own genomes, they regulate their own replication, they make proteins in their own unique way - in fact they closely resemble lifeforms that we might consider to be evolutionary polar opposites of ourselves: bacteria. That sounds pretty odd, right, that there might be bacteria living inside our cells that somehow want to help us by churning out energy for us to use? Seems pretty implausible, but there is a mountain of evidence supporting it.

If it barks like a bacterium...

Firstly, mitochondria do, kind of, look like bacteria. They are about the right size to be bacteria (0.5-1 micron in length) and have internal structures similar to many bacteria. The main difference is that mitochondria possess two membranes and no cell wall, whereas most bacteria have one membrane and a robust cell wall. The inner membrane of mitochondria is also far more ruffled than most bacteria, creating a much larger surface area - this is highly important for reasons that I'll come to!



Spot the difference: mitochondria on top, bacteria on the bottom. 

Wednesday, August 14, 2013

Working hard or hardly working?



"Life grants nothing to us mortals without hard work." So said Horace, the Roman lyric poet, over two millennia ago and little has changed since. I am currently one to attest to that sentiment as I am in the middle of writing up my PhD thesis and have accordingly developed the peculiar mania that grips many students at this stage in their degree where non-thesis pursuits become shamefully wasteful or even patently corrosive of your time! So, I'm afraid that this week's post from me is just a brief one, and the long-promised 'human machine' edition on stem cells is being pushed back yet again, apologies.

In light of this sudden idiopathic workaholism that has overtaken me, it seems appropriate that my post this week be on the subject of how hard scientists work. Coming into science I knew that the pay is generally crap, and it's not particularly glamorous, and you have to look for a new job every three years until you settle down with your own cosy lab somewhere - but at least it's a fairly nice lifestyle, right? Well yes and no. I love the academic lifestyle - it's the right mix of individual freedom and motivating challenges, by which I mean that it isn't too stressful but isn't boring either. That has been my experience (present situation excluded), but a recent report from the University of Nottingham suggests that I may have been one of the lucky ones, or perhaps that things are going to worsen for me!

The report (available here) looked at the working hours of conservation scientists in several countries by analysing the time and day of 25,000 publication submissions to the journal Biological Conservation. It's true that this is not, perhaps, the most reliable indicator of general working patterns since people tend to put in extra hours in the run-up to publication, but the results are still intriguing nonetheless. The long and the short of their findings is that scientists, basically, work pretty damn hard (well, conservation biologists at least). They observed that 16% of manuscripts were submitted late at night and 12% at weekends, and that the proportion of work submitted outside of normal hours has been increasing ~5% each year. This paints a fairly bleak picture for the future if working hours are going to stretch further and further into personal time.
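The kind of bookkeeping the study performed can be sketched like this - note that the timestamps and the hour thresholds below are illustrative assumptions, not the study's actual data or definitions:

```python
from datetime import datetime

# Classify submission timestamps as 'weekend', 'late night', or
# 'normal hours', then compute proportions.

def classify(ts):
    if ts.weekday() >= 5:                # Saturday or Sunday
        return "weekend"
    if ts.hour >= 21 or ts.hour < 7:     # evening/overnight on a weekday
        return "late night"
    return "normal hours"

submissions = [                          # made-up example data
    datetime(2013, 8, 5, 14, 30),        # Monday afternoon
    datetime(2013, 8, 6, 23, 10),        # Tuesday, late at night
    datetime(2013, 8, 10, 11, 0),        # Saturday
    datetime(2013, 8, 7, 9, 15),         # Wednesday morning
]

counts = {}
for ts in submissions:
    label = classify(ts)
    counts[label] = counts.get(label, 0) + 1
for label, n in sorted(counts.items()):
    print(f"{label}: {n / len(submissions):.0%}")
```

Scaled up to 25,000 real submission timestamps, the same tallying gives the out-of-hours percentages quoted above.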

Perhaps unsurprisingly, the study also found significant differences in working habits between different countries. The countries whose scientists seem to work the most unsociable hours are Japan and Mexico, who seem to work late (~30% manuscripts submitted out of hours on weekdays), as well as China and India, who work weekends a lot (up to 40% submitted at weekends). The most relaxed scientists were found in Belgium and Norway, who like their weekends off (~5% submitted on weekends), as well as South Africa and Finland, who go home at 5 (less than 10% submitted after hours on weekdays) - thus explaining Shaun's abrupt move to Helsinki three years ago! British and American scientists were about average in their working habits.

So what makes many scientists so busy, and why do they stick at it for often quite poor salaries? Well the combined research, teaching, reviewing, and administrative duties of senior scientists puts a big strain on their time. The authors of this investigation warn that this may be having a negative impact on the quality of the science produced, as well as the happiness of the researchers themselves. Dr Ahimsa Campos-Arceiz, who led the study, reflects:

 "We call for academic institutions to remember that good science requires time to read and think and over-stressed scientists are likely to be less productive overall. We also recommend that peer-review activities are included as part of the academic job description and considered in staff performance evaluations. At the end of the day, working on this paper has been an opportunity to reflect about our own behaviour and priorities. Next time I go to Bali, I will spend more time swimming and talking with my wife and less working on manuscripts."

Why so many scientists are willing to put up with the current situation is perhaps the more informative question. People become scientists often because of a burning curiosity that they must fulfil, and the realisation of that goal is its own reward. In many ways, academic science is an indulgence that most other professions wouldn't tolerate. Researchers are, by and large, able to investigate whatever they're interested in, in whatever way they see fit. Clearly, dead-end research is eventually weeded out by funding bodies (*all hail the funding bodies*) but generally it's fairly flexible and if you're interested in something and stick in science then there's a good chance you'll end up working on it. As well as this, there is the feeling that you are contributing to something bigger than yourself. Research never disappears, it will outlive you and become your legacy once you're just a memory. This is the same sensation that artists must get when creating their masterpiece, or writers have as they pen their latest novel. Moreover, if your research is useful then it can have ramifications far beyond anything you could achieve in most other jobs, but even if it's not then you're still helping to take one more step along the path of human progress. This is why people choose to be scientists and work unreasonable hours for a lot less money than an investment banker, and it's why I would always encourage anyone who is interested in entering science as a career.
So, Horace was right, life gives you nothing without hard work, but then if that work is intoxicating enough then life begins to mean nothing without it either. 

Wednesday, June 12, 2013

Cosmological perturbations post-Planck - wrap up

Helsinki at midnight. OK, that's not Helsinki, and the photo wasn't taken at midnight. But it is in Finland (Kemiö) and was taken after 11:30pm. Image credit either Chris Byrnes or Michaela D'Onofrio, I'm not sure, although because I got it off facebook, I guess it belongs to Mark Zuckerberg now.

I'm very sorry. As I wrote last week, we just hosted a conference here in Helsinki. I wanted to cover it as the conference happened and I just didn't have the combination of time and mental energy to do so. I won't be covering it in any detail retrospectively either because I need to get on with research. Nevertheless, this blog is slightly more than a hobby for me; it is also slightly ideological, so I will try to work out how to do it all better next time and try again then (this will be the annual theoretical cosmology conference "COSMO" in early September).

Here's a summary of some of the more interesting aspects that I'll quickly write up, starting with some closure concerning the topic I was halfway through in my last post...

David Lyth, the curvaton and the power asymmetry

David Lyth receiving the Hoyle Medal. David's the one in the photo who doesn't already have two medals. From this photo it seems that the guy on the left is graciously donating one of his many medals to David. I got this image from Lancaster University.

Where I left my last post I was describing David Lyth's talk about explaining the possible asymmetry in the amplitude of fluctuations on the sky (as seen through the temperature of the CMB). It's a small effect, the sky is almost symmetric; but it could be a real effect, the sky might be slightly asymmetric.

The possible asymmetry was seen before Planck and one candidate explanation involves quite large super-horizon fluctuations in some of the properties of the universe. "Super-horizon" here means fluctuations whose characteristic scale is bigger than the currently observable universe, i.e. they are outside of our observable horizon. Such a fluctuation would be seen by us, within the observable universe, as a smooth gradient in the fluctuating observable. Put simply, the idea is to have a smooth gradient in the amplitude of the measured temperature anisotropies. This would quite naturally result in a bigger amplitude in one direction than another.
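The usual way to write down such a gradient is as a dipole modulation of the temperature anisotropies - this is the standard parametrisation from the literature, assumed here rather than given in the post:

```latex
% The amplitude of the otherwise statistically isotropic fluctuations is
% modulated along a preferred direction \hat{p} on the sky, with a small
% dimensionless amplitude A:
\frac{\Delta T}{T}(\hat{n}) =
\left[\,1 + A\,\hat{p}\cdot\hat{n}\,\right]
\left(\frac{\Delta T}{T}\right)_{\!\mathrm{iso}}(\hat{n}).
```

A non-zero A means the fluctuations look slightly stronger in the direction of p-hat and slightly weaker in the opposite direction, which is exactly the hemispherical asymmetry being discussed.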

It seems that simple inflation can't achieve this without making the fluctuations in the universe significantly non-Gaussian. However, the curvaton can do it (according to David and a paper he is working on). Quite nicely, there is a relationship, which David discussed, between the amplitude of the asymmetry and the amount of deviation from a Gaussian distribution one would expect in both an inflation model and a curvaton model. For inflation, the deviation is too big, but for the curvaton it is small but not insignificant. This is nice because, according to David, if this asymmetry is real and the curvaton is responsible for it, then the fluctuations will be measurably non-Gaussian.

This means we can either rule this mechanism out as the cause of the apparent asymmetry, or even better, get evidence supporting it and thus supporting both the curvaton model and the real-ness of the asymmetry. So, watch this space...

Monday, April 29, 2013

TEDx CERN

If you live in Helsinki come to our live webcast of the event. We will feed you.

I had intended to write today's post on anomalies in cosmology. Unfortunately, I have suffered a crisis of confidence and have decided to postpone such a post for the future. I now have both a bunch of notes on the topic, left over from the Planck conference and a half-written post, left over from the weekend. The topic is a bit controversial and when I publish some thoughts on it I want to be very careful and precise so as not to accidentally annoy anyone.

Instead, I will tell you quickly about a really cool event that is taking place this Friday.

CERN is hosting a TEDx event. What is that? Well, a TEDx event is similar to a TED event, except that it isn't organised by TED itself. It is only endorsed by TED. What is TED? OK, well, TED is an organisation that organises a set of conferences around the world. The theme of the conferences is "ideas worth spreading" and speakers are given quite short time slots (typically less than twenty minutes) to express these ideas. Consequently the talks are often very fascinating as the speakers are forced to only say what really matters, leaving all the superfluous details aside. At the main TED events the speakers are also almost universally very good at giving talks, so the quality is high.

George Smoot, the host of the webcast/show. He has also been awarded one of the most illustrious honours any scientist can, a Nobel Prize guest appearance on The Big Bang Theory.

In fact, the TED realm of YouTube is one of the most dangerous black holes of procrastination you can find. The shortness of the talks, combined with how interesting and intellectually stimulating they are, is like the perfect storm of procrastination conditions. They don't last long enough for you to think that watching just one more is a problem. They are interesting, so you don't get bored. And they stimulate your mind so you don't even feel like you're using your time poorly (always my biggest procrastination danger). Then, half the day has gone.

Anyway, I have been making an analogy between science and sports in my mind for a long time now, and first wrote about it here more than a year ago. I really think that there is the potential for fundamental research to be as popular in today's society as sports is. Seriously! You might wonder why, if this is true, science isn't as popular as sports. Football matches sell out arenas and tennis players earn millions each year, entirely from the private sector throwing money at them to do nothing that is even remotely productive, yet even the highest-profile fundamental research event of 2012, the discovery of the Higgs particle, was only front-page news for a day.

Wednesday, April 10, 2013

The human machine: setting the dials




The previous post in this series can be found here.

It may seem sometimes that nature is a cruel mistress. We are all dealt our hand from the moment of  liaison between our lucky gold-medalist sperm and its egg companion. We are short or tall, broad or skinny, strong or weak because of the haphazard combination of genes that we wind up with, and that should be the end of the matter. Yet, as any seasoned card player will tell you, it is not the hand that matters, but how you play it! This, it turns out, also holds true when it comes to our genetic makeup - we can only play the cards we're dealt, but we don't have to play them all and can rely on some more heavily than others. In this post I'm going to discuss the ways in which DNA is organised and its activity regulated, and how this regulation is a dynamic, ever-changing process with cards moving in and out of play all the time. What's more, we'll explore the ways in which we can all consciously take control of our own DNA to help promote good health and long life!

Esoteric instructions laid bare

Most people are familiar with the concept of DNA - the instruction manual for every component that makes you you - but most are perhaps unaware of how DNA is actually organised within your cells. The importance of DNA has led to it achieving a somewhat mystical image in the public perception: a magical substance that sits inside you with omnipotent influence over every aspect of your construction. This perhaps might lead a layperson to think that we don't really understand how genes work, a perception that is encouraged by the abstract way in which the link between genetics and diseases is reported in the mainstream media. However, this impression is entirely false; we understand very well how genes work: DNA acts as a template for the generation of information-encoding molecules called RNA, which are in turn used as templates to make proteins, which then make everything else. This is called the 'central dogma' of molecular biology, which I'm not going to go into in detail now but have touched upon more thoroughly in a previous post: here.

The mystification of genetics in the mainstream perception can encourage people to forget that DNA is just a molecule, with as much physical presence and chemical potential as any other molecule in your body. As such, its supreme influence over you is dependent on pure chemistry and physics. The most obvious consequence of its being a physical entity is that it needs, in some way, to be arranged and organised. DNA exists within the nuclei of your cells, but it doesn't just float around randomly and aimlessly - its organisation is tightly regulated. First of all, DNA exists as a number of different strands, each its own molecule. These are chromosomes: humans have 46 in each cell nucleus - 23 inherited from your mother and 23 from your father. The classic image of a chromosome is the tightly packed 'X' shape like those in the image below, but this is actually a comparatively rare structure in the life of DNA, as it only forms while the cell is dividing.

Chromosomes seen under an electron microscope. Image is from http://trynerdy.com/?p=145.
In non-dividing cells, DNA does not exist in the cosily familiar 'X' shapes, but instead spreads out to fill the whole nucleus. This is out of physical necessity - the DNA in compact chromosomes like those above is simply too tightly packed to do anything! Proteins and other molecules that need to interact with the DNA in order for its influence to be felt just can't get to it because there's no space. If the DNA spreads out to fill the nucleus, however, there's plenty of room for manoeuvre. Nonetheless, this arrangement is far from random - the DNA remains highly organised. DNA never exists on its own in a live cell - it is always bound to proteins called histones, which act as a scaffold around which DNA is able to wind, like a string around a ball. There is about 1.8m of DNA in each cell of your body, but once wound around histones it has a length of only around 0.09mm - a pretty significant space-saving measure! Each little ball of DNA and histone is called a nucleosome; it is held together by attraction between the negatively charged backbone of the DNA and the positively charged side chains of the amino acids making up the histone proteins.
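If you like, you can check those figures for yourself - winding 1.8 metres of DNA down to 0.09 millimetres corresponds to a roughly 20,000-fold compaction:

```python
# Compaction achieved by winding DNA around histones,
# using the figures quoted above (1.8 m down to ~0.09 mm).
dna_length_m = 1.8
wound_length_m = 0.09e-3  # 0.09 mm expressed in metres

compaction_factor = dna_length_m / wound_length_m
print(f"{compaction_factor:.0f}x compaction")  # 20000x compaction
```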

DNA wrapped around histone proteins to form nucleosomes. Adapted from Muthurajan et al. (2004) EMBO J. 2004; 23(2):260-71

Tuesday, March 19, 2013

Planck rumours will soon become Planck results

On Thursday, the Planck satellite will be revealing its first cosmological results. In terms of fundamental physics, this will be the biggest event since the Higgs discovery last year. In the cosmology community it is the biggest event for the best part of a decade (possibly in both directions of time). If you don't follow cosmology too closely, you might wonder why this particular experiment might generate so much excitement. After all, aren't there all sorts of experiments, all of the time?

If so, I hope you've come to the right place.

The sky as seen by Planck in 2010. Only, they hadn't removed the foregrounds yet. There's a whole Milky Way galaxy in the way. Why must they make us wait so long?

If you're unaware, Planck is a satellite put in space by the European Space Agency to measure the cosmic microwave background (CMB). The CMB is an incredibly useful source of cosmological information. The impending release of Planck's results on Thursday is big news because Planck has measured the CMB with better resolution than any other experiment that can see the whole sky. Planck might have discovered evidence of interesting new physics, such as extra neutrinos or additional types of dark matter. It might even reveal some effects relating to how physics works at energies we could never probe on Earth. But even if it hasn't discovered anything dramatically new, the precision with which Planck has measured the parameters of the standard cosmological model will immediately make it the new benchmark.

There have been surprisingly few rumours leaked to the rest of the cosmology community about what to expect on Thursday. This has resulted in the most pervasive rumour being that they have simply not found anything worth leaking. Whatever the reality, on Thursday rumours will become results.

What has Planck actually done that is so interesting?

Monday, February 25, 2013

The human machine: probing the mechanics


The previous post in this series can be found here.

This week, inspired by Shaun's most recent post covering exciting new results in cosmology, I have decided to also take a quick look at one of the fascinating recent findings of molecular biology. I hope to give some insight into how this work is done, and why it is not only intellectually interesting, but also potentially practically useful. 

What do we know?

Those of you who have been following this series for a while might remember a post that I wrote last year (biological batteries and motors) where I discuss how energy is converted from myriad chemical forms in your food into the single energy currency of the cell, ATP. The system by which this is achieved is quite beautiful: chemical energy is converted into an electrical current within the mitochondria of your cells, which is in turn converted into a current of protons. This proton current drives a motor (ATP synthase) that churns out ATP, thereby converting it back into chemical energy. I'm not going to go into the whole process again here, but if you'd like a quick refresher then just hop back to my older post here, go on - you know you want to! I don't mind waiting.

So, a key player in this whole process is the so-called respiratory complex I (or NADH dehydrogenase), which is the first link in the chain that converts electrical current into proton current. Complex I takes electrons from a molecule known as NADH, which is produced from energy in your food by a range of complex metabolic chemical reactions. It moves the electrons that it takes from NADH and sticks them onto a molecule called ubiquinone, which then moves on to the next stage in the process: the perhaps confusingly named complex III.

Tuesday, February 5, 2013

The “ISW mystery” deepens considerably (II)

[... continued from yesterday]

[Note added April 29: Through correspondence with some of the authors from what I label as the "French group" below I have learned that the density threshold was actually applied in their work as well. This makes things rather confusing as it means that their methods and the methods of "the DHB" are much more similar. However, they have also updated their paper to reflect new knowledge about the void catalogues and see a slightly more significant signal, similar to what Planck find (see note below). Everything is rather confusing right now. Again, once the dust has settled, I will write a post clearing everything up.]

[Note added March 21: Wow, sometimes science moves quickly. Today Planck released its data. They appear to confirm the anomalous spots in the original "Granett" (Hawaiian) result. They also appear to confirm the new anomalous result that was present in the paper that is now retracted (see the note below from March 19), albeit with a slightly reduced significance. It is unclear exactly what is going on, but it is clear that it is something interesting. I will keep you informed as things progress.]

[Noted added March 19: The paper described in the second half of this post (I called its authors the DHB) has been withdrawn from the journal it was submitted to (see the new abstract at this link: http://arxiv.org/abs/1301.6136). It is unclear whether the problems that the authors found in their analysis will affect their conclusions. However, I suggest you are cautious regarding how you interpret the conclusions I have drawn below based on this paper. I will keep you informed as/when things progress.]

A really neat figure from arXiv:1301.5849 showing the locations and sizes of the various catalogues of voids being examined. A larger redshift means the void is further away from us and one Megaparsec (Mpc) corresponds to three million light years. The purple "Granett et al." box is the original catalogue used by the Hawaiian group back in 2008. 

Isn't this just "a posteriori" statistics?

There is another possible explanation for the mystery. The probability of ZOBOV picking out these lines of sight at random is exceedingly small (less than 0.003), but it isn't zero. Might this have just been a crazy fluke?

Suppose 100 different groups of physicists look for unexpected, but interesting, signals in cosmological data. Then, even if each group is very careful, you would still expect one of them to find something that seems, to them, unlikely. Unfortunately, only that group would publish its results. So we wouldn't see one “detection” paper alongside ninety-nine papers consistent with no detection; we would just see the one “detection” paper.
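A few lines of arithmetic make the point concrete. If each of 100 independent searches has a 0.003 chance of turning up a fluke at this significance, the chance that at least one of them does is not small at all:

```python
# If 100 independent groups each run a search in which a pure fluke
# would pass their significance cut with probability 0.003, the chance
# that at least one group sees a "detection" is far from negligible.
p_fluke = 0.003
n_groups = 100

p_at_least_one = 1 - (1 - p_fluke) ** n_groups
print(f"{p_at_least_one:.2f}")  # 0.26 - about a 1 in 4 chance
```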

The best way to determine whether this is what happened is to look for the signal in other surveys.  If the original measurement was a fluke, it won't show up anywhere else. But, if it does show up again, then the chances that it was a fluke will significantly diminish.

The Friday before last a paper appeared that did exactly this. A French group took two catalogues of voids (so no over-densities), which have been produced by applying ZOBOV to a new catalogue of galaxies (these ones are closer to us). The French group then did more or less the same thing as the Hawaiians did. They examined images of the CMB along the lines of sight of these voids, averaged the temperature in all the images and checked whether the resulting signal could have happened at random.
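The shape of that procedure - average the CMB temperature in patches along the chosen lines of sight and compare with randomly placed patches - can be sketched as a toy. This is purely illustrative (a fake "sky" of Gaussian noise, not the groups' actual pipeline), so by construction it shows no signal:

```python
import random
import statistics

# Toy sketch of a stacking analysis: compare the mean temperature in
# patches aligned with voids against the mean in randomly placed patches.
random.seed(0)

# Fake CMB "pixels" (pure Gaussian noise, in microkelvin) - so any
# difference between the two stacks below is itself just a fluke.
sky = [random.gauss(0.0, 100.0) for _ in range(100_000)]

def stacked_mean(pixel_indices):
    """Average the sky temperature over the given patch locations."""
    return statistics.mean(sky[i] for i in pixel_indices)

void_patches = random.sample(range(len(sky)), 50)    # stand-ins for void lines of sight
random_patches = random.sample(range(len(sky)), 50)  # control patches

print(stacked_mean(void_patches), stacked_mean(random_patches))
```

The real analyses then ask how often a random placement of patches produces a stacked temperature as extreme as the one seen along the void lines of sight.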

They found no significant result.

This was quite sobering to read on the day. The paper did verify the significance of the original measurement, but not finding it in the new catalogues was highly suggestive that the story I painted above of a sort of community-wide “look elsewhere effect” was true.

Hold on though!

Things did look bad for the anomaly at this point, but there was one important piece missing from the French group's analysis. The Hawaiians only used the most extreme over- and under-dense regions in their analysis. ZOBOV found many more than 50 regions for them, and if they had used all of them they too would not have obtained a statistically significant signal. This was always a crucial part of their analysis because we already knew from other observations that the observed ISW effect from most of the universe is as small as the predicted signal.

What would the French group have seen if they had only examined the most extreme voids?

A new observation


Apparently it is a rule of thumb for observers that the more interesting your observation is, the more boring you are meant to make your title. These guys probably deserve a promotion. The paper is here.

Three days later (last Monday) a mixture of physicists from Durham, Hawaii and Baltimore (the DHB) released a paper. It answered the question posed above. For anybody interested in finding new physics, the answer is very exciting.

Monday, February 4, 2013

The “ISW mystery” deepens considerably

Other than my initials, what secrets does the CMB hide that are waiting to be seen only when the CMB is examined in just the right way?

This time last year I wrote a few posts describing what I called the “ISW mystery” (Part I, II, III and IV). A year has passed, it is time for an update on the mystery.

The very short summary is that things are starting to get more than a little bit exciting. All of the plausible ways in which the calculation of the expected ISW signal could have been wrong have been checked and eliminated as possibilities; if the measured signal is real, it is too large for the standard cosmological model. Much, much more excitingly, the observation that generated the mystery has now been repeated in another region of the universe and a very similar and equally anomalous signal was found; the apparent anomaly was not a statistical fluke.

The preprint of the paper describing this new observation was released just a week ago.

What is the “ISW mystery”?


The image that began the mystery. Why is that spot so hot, and how did it get that cold ring around it?

A quick recap will probably be useful. The integrated Sachs-Wolfe (ISW) effect describes the heating and cooling of light as it passes through gravitational peaks and valleys late in the evolution of the universe. In the standard cosmological model, these peaks and valleys decay with time, so a light ray gains (or loses) more energy entering an over-dense (or under-dense) region of the universe than it loses (or gains) leaving it. The effect is very, very small. Almost every source of light in the universe is not known well enough to be used to detect it. Only the cosmic microwave background (CMB) is uniform enough that these tiny fluctuations could ever be detected.

However, even then, the primary fluctuations in the temperature of the CMB are bigger than the secondary ones created by the ISW effect. We can measure these fluctuations but we could never know how much is due to the ISW effect and how much is primordial. The only thing we can do is look at the structures in the universe nearby and see if on average the CMB is slightly hotter (colder) along lines of sight where the nearby universe is over-dense (under-dense). The bigger, primordial fluctuations in the CMB should have nothing to do with local structures (the CMB has come from much further away). Therefore, if this signal were to be found in the CMB, the most plausible explanation would be an ISW effect.

A group in Hawaii decided to look for this signal in a slightly unusual way. Firstly, they made a catalogue of significant over and under-dense regions in a particular survey of galaxies. Then, they only examined patches of the CMB that existed along the line of sight of each of these regions. They then found that the patches aligned with over-densities were hotter on average than a randomly selected patch and those aligned with under-densities were colder (with more than “\(4\sigma\)” significance). This is what one would expect from an ISW effect. The “ISW mystery” is that these patches were too hot and too cold. The ISW effect simply shouldn't be that big.
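For a feel of what "more than \(4\sigma\)" means, you can translate a number of standard deviations into the probability of a Gaussian fluctuation at least that large (a rough rule of thumb, not the groups' exact statistical treatment):

```python
import math

def sigma_to_p(n_sigma):
    """Two-tailed probability of a Gaussian fluctuation of at least n_sigma."""
    return math.erfc(n_sigma / math.sqrt(2))

print(f"{sigma_to_p(4):.1e}")  # 6.3e-05, i.e. roughly 1 chance in 16,000
```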

The importance of checking the anomaly from every angle


Monday, January 14, 2013

The human machine: decommissioned components


The previous post in this series can be found here

Happy 2013 from all of us here in the Trenches! We successfully made it one more time around the sun, and if that's not a good excuse for a party I don't know what is! Sadly, however, not all of your cells have been having such a swimmingly good time since the calendar ticked over to January the first - in fact nearly one trillion of them have died in the past fortnight alone, at a rate of roughly 70 billion a day, or 800,000 per second. Don't be alarmed, however, as this has been going on for your whole life and is a vitally important part of being a multicellular organism such as yourself. A human without cell death would be like society without human death - overcrowded, unpleasant, and rife with infirmity. Your body needs a system by which damaged, old, or infected cells can be removed in a controlled manner; this process is known as apoptosis.
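Those rates hang together arithmetically; a quick check using the figures quoted above:

```python
# Checking that the cell-death figures quoted above are self-consistent.
deaths_per_day = 70e9

deaths_per_second = deaths_per_day / (24 * 60 * 60)
deaths_per_fortnight = deaths_per_day * 14

print(round(deaths_per_second))  # ~810,000 per second
print(deaths_per_fortnight)      # 9.8e11 - just shy of a trillion
```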

In this post I will be discussing what we know about how apoptosis works and how it is a key player in the development of cancer and the fighting of infectious disease. I'll also show how our understanding of how this process works has allowed us to devise targeted therapeutics against a number of debilitating conditions.

Cellular suicide - picking the moment

Your cells are team players - they're willing to do anything to serve you, including laying down their lives. Apoptosis depends on this loyalty because it is actually a form of suicide that your cells perform on themselves. Arguably the most important aspect of this is timing - if your cells are in the habit of committing suicide before it is necessary then you'll waste a lot of energy and resources building replacements that shouldn't be needed. On the other hand, if the cell leaves it too late to kill itself then it may find itself incapable of doing so.

So, how does a cell know when to die? Well, the most obvious markers for cell death are simply the various forms of damage that can occur to the components of the cell itself. If a cell's membrane becomes damaged, for example, this can cause excess calcium to leak into the cell and so be sensed by a number of calcium-binding proteins, such as calpain, which in turn signal that apoptosis should begin. Similarly, damage to DNA is sensed by the complex machinery of the DNA repair pathway. For example, PARP is a protein that binds to single-strand breaks in DNA caused by DNA-damaging agents such as radiation (think sunburn!) or chemical mutagens like free radicals. PARP and other DNA damage sensors relay their information to a number of signalling proteins, most importantly p53. If p53 is activated in response to DNA damage it signals to stop the usual processes of cell division and begin DNA repair, but if the damage is just too bad it makes the call to start apoptosis and destroy the cell.

Tuesday, December 4, 2012

The human machine: circuits and wires


The previous post in this series can be found here.

In the first post of this 'human machine' series, I explained how 'energy' (that abstract entity) is processed and used by our bodies in order to convert the chemical energy in our food into the work energy required to keep us ticking over nicely. In it, I discussed how we are all actually powered by electrical circuits that buzz along in the internal membranes of our cells' power stations, the mitochondria. Better yet, not only are we powered by currents of electrons, familiar to us as standard electricity, but also by currents of protons, and so are actually working off energy being extracted from two forms of electrochemical potential. We're pretty sophisticated machines!

The work energy generated by these processes is used in myriad ways, but one very important one is the creation of another electrical current that is the foundation of everything you've ever done and every thought you've ever had: the neuronal action potential. These are the electrical signals that run along the neurons in your brain and body in general, constantly relaying information back and forth throughout the whole complex machine. Without them we would be like plants, with one part of our bodies completely unaware of what's happening to the rest of it, and animal life as it is familiar to us would be entirely impossible. Most people have, I expect, heard of the notion of electrical signals running throughout our bodies (it's why the machines built the Matrix, right?), but few will actually know what that means. In today's post I'm going to be talking about what neuronal signals actually are, and so explain why being hit by lightning is a bad thing but being defibrillated (like in ER) can be a good thing.

Tuesday, November 20, 2012

Solar Eclipse

Southern Hemisphere view. This eclipse was only visible from New Zealand, Australia, and Chile.

Tuesday, November 6, 2012

Science and Video Games

A long time ago I posted about an online game called FoldIt. Since I made that post, various other examples of crossovers between video games and science have been brought to my attention. Many of these I have linked to from The Trenches of Discovery's Facebook and Google+ pages, and some I've been saving for a rainy day. (Note: we often post links/comments to those pages when we don't consider them worth a whole blog post. If you read this blog but aren't following either of those pages, you're missing out on some interesting stuff. You should remedy this.)

Well, it isn't rainy, but it is foggy, so the day has as good as come. What follows is a run-through of some of the various science/video game crossovers I'm aware of. Some of these are neat video games, designed either to teach an aspect of science or to give a phenomenological experience of what that aspect of science means for the world. Others are more like FoldIt: they are puzzle games that you try to solve, and in the process you're actually helping the scientists solve their research problems.

Phylo