
Sunday, May 11, 2014

Integrative Genomics in Neuropsychiatric Disorders

 

Unless you are really into genomics and how they play out in mental illness, this may be something to skip. On the other hand, it's really interesting stuff. Integrative genomics may offer new and innovative approaches to working with neuropsychiatric disorders.

Integrative Genomics in Neuropsychiatric Disorders

Air date: Monday, May 05, 2014
Runtime: 01:09:44


Description: Neuroscience Seminar Series
Dr. Geschwind's laboratory is working to improve our understanding of human neuropsychiatric diseases, such as autism and neurodegenerative diseases, and their relationship to the range of normal human higher cognitive function. They use a combination of genetic, functional genomic, and neurobiological methods in their work, frequently in collaboration with other laboratories or disciplines. Their methodological focus involves the application of network analyses and systems biology, which offer the promise of integrating multiple levels of data, connecting molecular pathways to nervous system function in health and disease.

Author: Daniel Geschwind, M.D., Ph.D., University of California, Los Angeles


Saturday, May 03, 2014

Is Our Junk DNA Really Junk?

The notion of junk DNA has always seemed like an oxymoron to me. Just because we don't know what it does (yet) does not mean it is junk. Apparently my view is shared by people who actually know a hell of a lot more about genetics and DNA than do I. Still, there are even more cellular biologists who are not convinced.

The excellent article from Cosmos Magazine, by Dyani Lewis, looks at the current state of the field and the arguments from each side of the debate.

What is our junk DNA for?

By Dyani Lewis

Scientists still argue whether the genome's 'dark matter' has any purpose. Dyani Lewis reports.


English geneticist Ewan Birney accepted a bet by his Australian colleague John Mattick that ‘junk DNA’ was our genome’s operating system. Mattick is still to collect. FAIRFAX



A little over a year ago it looked like Australian geneticist John Mattick had won a bet against his English colleague, Ewan Birney, over the way the human genome works. Like many others, Birney maintained that our genome was mostly comprised of “junk”, excess DNA that padded it out. Mattick, director of Sydney’s Garvan Institute, had long believed otherwise. In his view, so-called junk DNA would prove to be a code, our genome’s equivalent of a high-level operating system. In 2007 the two made a bet that at least 20% of the “junk” would be found to have a function. The stakes were a case of good Australian red.

It was a well-timed wager. A worldwide project known as ENCODE was gearing up to examine the output of every one of the three billion letters of DNA that comprise the human genome. The results were announced in September 2012 with great fanfare.

At a worldwide media conference, Birney declared that 80% of our DNA code was “functional”. Sometime, somewhere, one cell or another in the body was reading almost every bit of the genome.

So can we call it quits on the debate over junk DNA? Far from it. As critics were quick to point out, simply reading out the DNA code is not proof that the code is functional. It might just be the cells’ equivalent of web surfing: a lot of useless sites get perused before anything useful is found. Mattick’s case of wine suddenly wasn’t looking quite such a sure thing.



John Rasko of the University of Sydney hates the term ‘junk DNA’. He believes it holds the key to how complex an organism is. CREDIT: JOHN RASKO

How to settle this argument? One way to decide whether junk DNA is useful would be to get rid of it and see what happens. Not an experiment you can do on people. But last year, Victor Albert at the State University of New York in Buffalo reported that nature might have done the experiment for us.

Like ours, the genomes of plants, insects and other animals consist of vast amounts of DNA, much of which we can’t decipher. Albert claimed he had found a carnivorous plant, the bladderwort, which has a virtually junk-free genome and does just fine. Could the debate soon be settled?

The term “junk DNA” was originally coined in 1972 by Japanese American evolutionary biologist Susumu Ohno. It’s easy to forget how little was known about genomes just four decades ago. In 1972 scientists could only speculate about what a whole genome might look like – how a four-letter DNA code of As, Ts, Gs and Cs might be strung together to write an instruction manual. But even without reading it, scientists knew that ours was big. The way Ohno saw it in the early 1970s, with a genome the size of ours, only a small percentage could possibly be made up of genes or we would suffer dangerous mutations that would quickly accrue over the generations.

For decades, scientists focused on genes and ignored the junk.

As many early geneticists found, if you mutate a gene, important developmental processes could be disrupted. At the time, a gene was thought of as a recipe for a protein. Proteins are the construction-site workers charged with turning the information in a one-dimensional DNA code into a living organism. They do it all, forming the bricks and mortar of our cells, the enzymes that drive our metabolism and the components of cell communications systems. But junk DNA could not be deciphered into any protein and the term became shorthand for any stretch of DNA that was not a protein-coding gene.

Almost immediately the term seemed doomed. It was imprecise, and ignored growing evidence that some DNA sequences had other essential biological functions. For instance, researchers in the 1960s had already found that small tracts of DNA, known as “promoters”, lie directly ahead of protein-coding genes and act as helipads – landing sites for enzymes that read genes. These enzymes “transcribe” stretches of the DNA code into an almost identical stringy molecule called RNA.

During the 1980s and 1990s, scientists managed to decipher even more novel functions for junk DNA. Other types of helipads, called “enhancers”, were identified, often located thousands of letters away from the gene they controlled. Yet other stretches of DNA carried instructions not for protein recipes, but for RNA recipes alone.

Like a photocopy of a page from a recipe book, RNA was thought to be produced only for the purpose of instructing protein synthesis (see figure above). But it turns out that each transcribed RNA molecule could have a function. Some of these functional RNA molecules, dubbed “ribozymes”, work like enzymes to catalyse cellular reactions. Others, known as “microRNAs”, interfere with the RNA copies of other genes, effectively switching them off by preventing proteins from being made from the RNA recipe.

Although these discoveries were momentous, they did not blow away the concept of junk. When the Human Genome Project finally unveiled its completed 3.2 billion letters of genetic code in 2003, the mystery of our un-deciphered genome hit prime time. The idea that only 1.5% of our DNA coded for genes seemed to fire the public imagination.

Our total number of genes was also humiliating. Ours was not the first genome to be unveiled: a microbe, a roundworm and a fruit fly all preceded us and revealed gene numbers ranging from 4,000 to 20,000. Surely our vastly more complex species would have at least an order of magnitude more.

Not so. It turns out we have 20,000 protein-coding genes, the same number as the roundworm, a one millimetre long transparent creature boasting just 1,000 cells.

“It was a great shock to everyone,” says University of Sydney haematologist John Rasko. Perhaps what set us apart from simpler organisms lay not in the genes, but in the 98.5% of our DNA still waiting to be decoded – the view firmly held by Mattick. He believes that the complexity of an organism does not relate to the number of genes, but to what’s in the junk DNA. Indeed there is a modest correlation between an organism’s complexity and the amount of junk DNA it carries: the bacterium E. coli contains little more than 10% non-protein coding DNA; roundworms 75%; for humans it’s 98.5%.
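The correlation Mattick points to follows from simple arithmetic on genome sizes. A rough sketch of that calculation, using ballpark coding-DNA totals that are my own assumptions for illustration (only the human ~1.5% coding figure and the quoted percentages appear in the article):

```python
# Approximate genome sizes and protein-coding totals, in base pairs.
# These are rough published estimates, used purely to illustrate the
# non-coding fractions quoted in the article.
genomes = {
    "E. coli":   {"total_bp": 4.6e6, "coding_bp": 4.0e6},
    "roundworm": {"total_bp": 100e6, "coding_bp": 25e6},
    "human":     {"total_bp": 3.2e9, "coding_bp": 48e6},  # ~1.5% coding
}

for name, g in genomes.items():
    noncoding = 1 - g["coding_bp"] / g["total_bp"]
    print(f"{name}: ~{noncoding:.1%} non-coding")
```

The human figure lands at the article's 98.5%, the roundworm at 75%, and E. coli in the low teens, close to the "little more than 10%" quoted.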

Rasko hates the term “junk DNA”. “It still riles a lot of people in the field that the term ‘junk’ even took up traction,” he says. It’s not surprising that he is unimpressed with the phrase. Rasko’s “current obsession” is introns, the sort of DNA sequences Ohno would have dismissed as junk. Introns, as their name hints, are found interspersed within protein-coding genes and range in size from 10 to thousands of letters long. When a protein is made, the gene is first transcribed into an RNA copy with introns intact. But before the RNA molecule is finally translated into protein, the introns are edited out. Should that editing fail, the RNA molecule bearing an intact intron is sent to what Rasko calls “the molecular trash can” (see figure below).

Rasko and his team have found that during the development of white blood cells, many RNA molecules actually hang on to their introns; a perplexing observation since these transcripts are made only to be trashed. “Why would a cell go to all of that trouble?” asks Rasko.

The answer, he says, is “complexity”. Just as in the performance of a symphony orchestra, each instrument must play or be silent at precisely the right time, so too in the development of cells. Particular proteins need to be turned on and off at the stroke of a baton. By making transcripts that are destined for the shredder, Rasko believes that the genome has come up with “an elegant system” for orchestrating protein levels during the development of white blood cells. What’s more, entire suites of proteins can be orchestrated using the same molecular baton.

Rasko identified 86 genes involved in white blood cell development that were all diminished in concert. And it turns out shredding the RNA instructions, rather than making unnecessary proteins, is much easier on the cell’s energy budget. “The energy costs on a cell by controlling the editing of introns are tens-fold less than it would be if you had to use a protein degradation mechanism,” he says.

Introns are just one example of DNA sequences once viewed as superfluous, but now thought to be critical to the development of a complex organism such as a human. Disrupt intron editing and, as Rasko found, you disrupt the entire symphony. White blood cells unable to wield the baton failed to develop into the cells of the immune system.

Rasko’s work illustrates how a once-overlooked component of the genome can turn out to be vital. The question is, how many other parts of the genome, once dubbed junk, are essential? That’s where ENCODE comes in. A small army of researchers joined forces in the wake of the Human Genome Project’s completion in 2003 to systematically sift through the vast tracts of mystery DNA. The purpose was to find which bits have a biological function.

The massive international undertaking aimed to create the Encyclopaedia of DNA Elements (ENCODE’s full name) and brought together 442 scientists from around the globe. In September 2012, in an event that typifies the coordination required of such an immense project, their initial results were unveiled in a clutch of 30 scientific papers simultaneously published in three different scientific journals.

The bottom line, as Birney – ENCODE’s lead analysis coordinator – announced to the media, is that 80% of the genome has a “biochemical function”. To arrive at this estimate, 147 types of cells were subjected to 24 different experiments to search for meaning in the oceans of DNA. What was surprising was the number of potentially useful sequences dotted throughout the genome. Instead of an immense ocean of junk DNA punctuated with occasional islands of protein-coding genes, the genome began to look like a thick soup, packed with active ingredients.

Promoters and enhancers were known to be important residents of the mysterious non-coding DNA. But ENCODE found more than four million of them, many more than had previously been recognised. Combined with the 1.5% of protein-coding DNA, that takes the proportion of our genome with known function up to around 10%.

ENCODE then measured other hints of function by looking at where proteins dock on to the long strands of DNA, finding three million of these sites. But the vast majority of “function” was inferred from the fact that in some cell somewhere in the body, at some time, DNA was being read, that is, transcribed into RNA.


Plant geneticist Jeffrey Bennetzen believes most DNA is useless. CREDIT: JEFFREY BENNETZEN

The ENCODE fanfare was answered with a storm of criticism. A “meaningless measure of functional significance”, tweeted Michael Eisen from the US Howard Hughes Medical Institute. The definition of “function” was “so loose as to be all but meaningless”, opined T. Ryan Gregory from the University of Guelph in Canada. The conclusions were “absurd” and full of “logical and methodological transgressions”, wrote Dan Graur from the University of Houston. Jeffrey Bennetzen, a plant geneticist from the University of Georgia, summarised the feeling: “I don’t think there’s anybody who believes that because something is transcribed, that means it has a function.”

Mattick, who was involved in the pilot phase of ENCODE, disagrees. “I personally think it’s intellectually lazy to say it’s noisy transcription.” If it were noisy transcription, he says, then ENCODE would have seen random patterns of transcription. Instead it found precisely orchestrated patterns, tuned to particular cell types. Mattick believes that while gene number does not relate to complexity, those orchestrations of RNA transcribed from “junk” DNA do. As analysis of ENCODE continues, he predicts that the percentage of the human genome with proven function will edge towards 100%.

For Magdalena Skipper, the editor at Nature who shepherded the publication of ENCODE’s Nature papers, arguments over the numbers are missing the point. “The value of ENCODE goes so much beyond this discussion of what is the percentage of the genome that is functional and in what way we define function.”

No doubt. But we still want to know what most of our DNA is really doing. The answer might come from an unexpected place.

The floating bladderwort is an unassuming carnivorous pondweed that captures its prey using tiny suction traps that lie beneath the water. But it wasn’t the bladderwort’s appearance or eating habits that intrigued evolutionary biologist Victor Albert. “It was known to have a tiny genome,” he says. “The question was, what’s missing?”


Albert and his colleagues found that the bladderwort genome contains a meagre 82 million letters. That’s 1/40 the size of our own, and an even punier 1/240 that of its plant relative, the Norway Spruce. But size was only half the story.
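The size comparison is easy to verify. A minimal sketch, assuming a Norway spruce genome of roughly 19.6 billion letters (a figure not given in the article) alongside the bladderwort and human figures it cites:

```python
# Approximate genome sizes in DNA letters (base pairs).
# The bladderwort and human figures come from the article; the Norway
# spruce figure is an assumed published estimate of ~19.6 billion letters.
bladderwort = 82e6
human = 3.2e9
norway_spruce = 19.6e9

# These ratios land close to the article's "1/40" and "1/240".
print(f"bladderwort vs human:  1/{human / bladderwort:.0f}")
print(f"bladderwort vs spruce: 1/{norway_spruce / bladderwort:.0f}")
```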

“There’s essentially no junk DNA,” Albert says. The tiny genome contains around 28,500 protein-coding genes, but only 3% is what he would consider junk. “It’s an interesting counterpoint to the human genome situation.”

Some have suggested that the bladderwort may have rid itself of excess DNA to save on phosphorus, an element that is part of the DNA molecule. Bladderworts live in an environment that is poor in phosphorus, and eat meat to bolster their intake of the element. (Albert himself doesn’t buy this explanation as to why they ditched their junk, since other phosphate-hungry carnivorous plants don’t have tiny genomes.)

So if the bladderwort can do all sorts of complex things without its excess genomic baggage, does it follow that junk DNA is irrelevant? Not necessarily. By “junk”, Albert was restricting his definition to a particular class of junk DNA known as “transposons” – repeating tracts that are relics of ancient viruses.

And indeed the bladderwort seems to have dispensed with them. But as Mattick points out, even the minimalist bladderwort genome contains plenty of other non-protein coding sequences in the form of introns and tracts between genes that were traditionally termed junk – by his calculation some 65% of its genome. So, says Mattick, rather than spelling the death knell to junk, the bladderwort actually bolsters the view that no genome can truly go without.

For Mattick, the bladderwort’s claim is just a replay of the claims made for the fugu, the highly poisonous Japanese puffer fish. For geneticists, it’s best known for having the tiniest genome of any back-boned animal, one-eighth the size of ours. When its genome was first read in 2002 it was similarly billed as a complex creature that had managed to do away with its “junk” DNA. But as Mattick points out, in fact 89% of fugu’s DNA does not code for proteins. So bladderworts and fugu still have a very high proportion of non-coding DNA, comparable to that of other complex organisms.


The carnivorous pondweed, bladderwort. The bladder trap, below, is an unusual structure that gives the plant its name. But a more surprising feature is the plant’s shrunken genome. CREDIT: GETTY IMAGES


CREDIT: ENRIQUE IBARRA-LACLETTE, CLAUDIA ANAHÍ PÉREZ-TORRES AND PAULINA LOZANO-SOTOMAYOR

As for the transposons, the bits of old virus that seem to multiply in genomes, Mattick concedes that they could be padding the genomes of some plants. “But you don’t see nearly so much in animals,” he says, possibly because they are under greater evolutionary pressure than plants to streamline their genomes, keeping sequences that are useful, and jettisoning the rest.

While no one argues that all non-protein coding DNA lacks function, the question now is how much is, in fact, junk? As Dan Graur cautions, when it comes to thinking about genomes, it’s a mistake to think in terms of a “Goldilocks genome” where every bit of DNA is perfectly fit for its function. “Evolution never breeds perfection,” he says. But even if a stretch of DNA is not perfectly functional, having some junk DNA to tinker with could be a big plus. As Mattick points out, bacteria with little “junk” have stayed stuck in the single-celled world whereas those with junk-laden genomes have formed the kingdoms of plants, animals and fungi. Perhaps genomes hang on to junk to allow the flexibility to evolve new and complex traits.

But that loose association between junk DNA and complexity still doesn’t wash with many biologists. Until the function of the various sequences is demonstrated, biologists such as Albert, Bennetzen and Graur say that we are a long way from relegating the term “junk DNA” to the history books.

Scientists such as Mattick and Rasko continue to pore over the “functional” DNA identified by ENCODE. But how much of the genome will eventually pass muster for the tougher critics is still open to wager. As geneticist Daniel MacArthur at Harvard University’s Broad Institute has declared, “I’d still take on Mattick’s wager any day, so long as I got to specify clearly what was meant by ‘functional’.”

Dyani Lewis is a freelance science journalist based in Melbourne, Australia.

Monday, November 18, 2013

Carl Zimmer - How Our Minds Went Viral


Carl Zimmer's 2012 book, A Planet of Viruses, offered an intriguing and somewhat mind-boggling account of the role viruses played in our evolution (and in the evolution of the entire planet).
Viruses are the smallest living things known to science, yet they hold the entire planet in their sway. We are most familiar with the viruses that give us colds or the flu, but viruses also cause a vast range of other diseases, including one disorder that makes people sprout branch-like growths as if they were trees. Viruses have been a part of our lives for so long, in fact, that we are actually part virus: the human genome contains more DNA from viruses than our own genes. Meanwhile, scientists are discovering viruses everywhere they look: in the soil, in the ocean, even in caves miles underground.

This fascinating book explores the hidden world of viruses—a world that we all inhabit. Here Carl Zimmer, popular science writer and author of Discover magazine’s award-winning blog The Loom, presents the latest research on how viruses hold sway over our lives and our biosphere, how viruses helped give rise to the first life-forms, how viruses are producing new diseases, how we can harness viruses for our own ends, and how viruses will continue to control our fate for years to come. In this eye-opening tour of the frontiers of biology, where scientists are expanding our understanding of life as we know it, we learn that some treatments for the common cold do more harm than good; that the world’s oceans are home to an astonishing number of viruses; and that the evolution of HIV is now in overdrive, spawning more mutated strains than we care to imagine.
In a recent article for his blog at National Geographic, The Loom, Zimmer provides a capsule explanation of how the human mind "went viral."
Viruses invaded the genomes of our ancestors several times over the past 50 million years or so, and their viral signature is still visible in our DNA. In fact, we share many of the same stretches of virus DNA with apes and monkeys. Today we carry half a million of these viral fossils, which make up eight percent of the human genome. (Here are some posts I’ve written about endogenous retroviruses.) 
Our DNA has small stretches called enhancers. When a specific protein connects with the enhancer for a gene, the gene's production of proteins speeds up. Viruses have enhancers, too, which act to help the virus reproduce. And when some viruses become fossils in our DNA, their enhancers can become a permanent part of our genome.

Scientists have identified six viral enhancers that have been incorporated into our DNA since our evolutionary split with chimpanzees, but only one shows signs of boosting a nearby human gene. Known as PRODH, that gene encodes an enzyme that’s involved in making signaling molecules in the brain. And if the enzyme isn’t working properly, the brain can go awry.
This viral enhancer no longer spurs the reproduction of its original DNA, but it does help cells in the brain make signaling molecules that are essential to brain function.
Other researchers have also found evidence for the importance of PRODH in the human brain. In some studies, mutations to the gene have been linked to schizophrenia, for example. (One study has failed to find that link, though.) A mutation that deletes the PRODH gene and its surrounding DNA has been linked to a rare psychiatric disorder, called DiGeorge syndrome. 

Here is the whole post.

How Our Minds Went Viral

by Carl Zimmer

The Loom | November 13, 2013

Did viruses help make us human? As weird as it sounds, the question is actually a reasonable one to ask. And now scientists have offered some evidence that the answer may be yes.

If you’re sick right now with the flu or a cold, the viruses infecting you are just passing through. They invade your cells and make new copies of themselves, which burst forth and infect other cells. Eventually your immune system will wipe them out, but there’s a fair chance some of them may escape and infect someone else.

But sometimes viruses can merge into our genomes. Some viruses, for example, hijack our cells by inserting their genes into our own DNA. If they happen to slip into the genome of an egg, they can potentially get a new lease on life. If the egg is fertilized and grows into an embryo, the new cells will also contain the virus’s DNA. And when that embryo becomes an adult, the virus has a chance to move into the next generation.

These so-called endogenous retroviruses are sometimes quite dangerous. Koalas, for example, are suffering from a devastating epidemic of them. The viruses are spreading both on their own from koala to koala and from parents to offspring. As the viruses invade new koala cells, they sometimes wreak havoc on their host’s DNA. If a virus inserts itself in the wrong place in a koala cell, it may disrupt its host’s genes. The infected cell may start to grow madly, and give rise to cancer.

If the koalas manage to survive this outbreak, chances are that the virus will become harmless. Their immune systems will stop their spread from one host to another, leaving only the viruses in their own genomes. Over the generations, mutations will erode their DNA. They will lose the ability to break out of their host cell. They will still make copies of their genes, but those copies will only get reinserted back into their host’s genome. But eventually they will lose even this feeble ability to replicate.

We know this is the likely future of the koala retroviruses, because we can see it in ourselves. Viruses invaded the genomes of our ancestors several times over the past 50 million years or so, and their viral signature is still visible in our DNA. In fact, we share many of the same stretches of virus DNA with apes and monkeys. Today we carry half a million of these viral fossils, which make up eight percent of the human genome. (Here are some posts I’ve written about endogenous retroviruses.)

Most of this viral DNA is just baggage that we hand down to the next generation. But sometimes mutations can transform viral DNA into something useful. Tens of millions of years ago, for example, our ancestors started using a virus protein to build the placenta.

But proteins aren’t the only potentially useful parts that we can harvest from our viruses.

Many human genes are accompanied by tiny stretches of DNA called enhancers. When certain proteins latch onto the enhancer for a gene, they start speeding up the production of proteins from it. Viruses that infect us have enhancers, too. But instead of causing our cells to make more of our own proteins, these virus enhancers cause our cells to make more viruses.

But what happens when a virus’s enhancer becomes a permanent part of the human genome? Recently a team of scientists carried out a study to find out. They scanned the human genome for enhancers from the youngest endogenous retroviruses in our DNA. These viruses, called human-specific endogenous retroviruses, infected our ancestors at some point after they split off from chimpanzees some seven million years ago. We know this because these viruses are in the DNA of all living people, but missing from other primates.

Once the scientists had cataloged these virus enhancers, they wondered if any of them were now enhancing human genes, instead of the genes of viruses. If that were the case, these harnessed enhancers would need to be close to a human gene. The scientists found six such enhancers.

Of these six enhancers, however, only one showed signs of actually boosting the production of the nearby gene. Known as PRODH, it encodes an enzyme that’s involved in making signaling molecules in the brain. And if the enzyme isn’t working properly, the brain can go awry.

In 1999, scientists shut down the PRODH gene in mice and found a striking change in their behavior. They ran an experiment in which they played a loud noise to the mice at random times. Then they started playing a soft tone just before the noise. Normal mice learn to connect the two sounds, and they become less startled by the loud noise. But mice without PRODH remained as startled as ever.

Other researchers have also found evidence for the importance of PRODH in the human brain. In some studies, mutations to the gene have been linked to schizophrenia, for example. (One study has failed to find that link, though.) A mutation that deletes the PRODH gene and its surrounding DNA has been linked to a rare psychiatric disorder, called DiGeorge syndrome.

Once the scientists had found the virus enhancer near PRODH, they took a closer look at how they work in human cells. As they report in the Proceedings of the National Academy of Sciences this week, they searched for the activity of PRODH in tissue from human autopsies. PRODH is most active in the brain–and most active in a few brain regions in particular, such as the hippocampus, which organizes our memories.

The new research suggests that the virus enhancer is partly responsible for PRODH becoming active where it does. Most virus enhancers in our genome are muzzled with molecular caps on our DNA. That’s probably a defense to keep our cells from making proteins willy-nilly. But in the hippocampus and other regions of the brain where PRODH levels are highest, the enhancer is uncapped. It may be left free to boost the PRODH gene in just a few places in the brain.

The scientists also found one protein that latches onto the virus enhancer, driving the production of PRODH proteins. And in a striking coincidence, that protein, called SOX2, is also produced at high levels in the hippocampus.

What makes all this research all the more provocative is that this situation appears to be unique to our own species. Chimpanzees have the PRODH gene, but they lack the virus enhancer. They produce PRODH at low levels in the brain, without the intense production in the hippocampus.

Based on this research, the scientists propose a scenario. Our ancestors millions of years ago were infected with a virus. Eventually it became lodged in our genome. At some point, a mutation moved the virus enhancer next to the PRODH gene. Further mutations allowed it to help boost the gene’s activity in certain areas of the brain, such as the hippocampus.

The scientists can’t say how this change altered the human brain, but given what we know about brain disorders linked to the PRODH gene, it could have been important.

It’s always important to approach studies on our inner viruses with some skepticism. Making a compelling case that a short stretch of DNA has an important function takes not just one experiment, but a whole series of them. And even if this enhancer does prove to have been one important step in the evolution of the human brain, our brains are also the result of many other mutations of a far more conventional sort.

Still, the intriguing possibility remains. Perhaps our minds are partly the way they are today thanks to an infection our ancestors got a few million years ago.

[For more on the mighty influence of these tiny life forms, see my book A Planet of Viruses.]

Thursday, July 11, 2013

Brain Epigenomics Mapped

This new study from The University of Western Australia maps the epigenome of the human brain. While the 'genome' acts as the instruction manual that contains the blueprints (genes) for all of the components of our cells and our body, the 'epigenome' acts as an additional layer of information on top of our genes that changes the way they are used.

This is a huge breakthrough in mapping the brain's epigenome.

Brain Epigenomics Mapped


THE UNIVERSITY OF WESTERN AUSTRALIA
MONDAY, 08 JULY 2013

The new research will allow scientists to investigate the role the epigenome plays in learning, memory formation, brain structure and mental illness. Image: Jezper/Shutterstock

Comprehensive mapping of the human brain epigenome by UWA and US scientists uncovers large-scale changes that take place during the formation of brain circuitry.

Ground-breaking research by scientists from The University of Western Australia and the US, published in Science, has provided an unprecedented view of the epigenome during brain development.

High-resolution mapping of the epigenome has discovered unique patterns that emerge during the generation of brain circuitry in childhood.

While the 'genome' can be thought of as the instruction manual that contains the blueprints (genes) for all of the components of our cells and our body, the 'epigenome' can be thought of as an additional layer of information on top of our genes that changes the way they are used.

"These new insights will provide the foundation for investigating the role the epigenome plays in learning, memory formation, brain structure and mental illness," says UWA Professor Ryan Lister, a genome biologist in the ARC Centre of Excellence in Plant Energy Biology and a corresponding author of the new study.

Joseph R. Ecker, senior author of the study and professor and director of the Genomic Analysis Laboratory at the Salk Institute for Biological Studies in California, said the research shows that the period during which the neural circuits of the brain mature is accompanied by a parallel process of large-scale reconfiguration of the neural epigenome.

A healthy brain is the product of a long period of developmental processes, Professor Ecker said. These periods of development forge complex structures and connections within our brains. The front part of our brain, called the frontal cortex, is critical for our abilities to think, decide and act.

The frontal cortex is made up of distinct types of cells, such as neurons and glia, which each perform very different functions. However, we know that these distinct types of cells in the brain all contain the same genome sequence: the A, C, G and T ‘letters’ of the DNA code that provide the instructions to build the cell. So how can they each have such different identities?

The answer lies in a secondary layer of information that is written on top of the DNA of the genome, referred to as the ‘epigenome’. One component of the epigenome, called DNA methylation, consists of small chemical tags that are placed upon some of the C letters in the genome. These tags alert the cell to treat the tagged DNA differently and change the way it is read, for example by causing a nearby gene to be turned off. DNA methylation plays an essential role in our development and in our body's ability to make and distinguish different cell types.

To better understand the role of the epigenome in brain development, the scientists used advanced DNA sequencing technologies to produce comprehensive maps of precisely which C's in the genome have these chemical tags, in brains from infants through to adults. The study delivers the first comprehensive maps of DNA methylation and its dynamics in the brain throughout the lifespan of both humans and mice.

"Surprisingly, we discovered that a unique type of DNA methylation emerges precisely when the neurons in a child's developing brain are forming new connections with each other; essentially when critical brain circuitry is being formed," says co-first author Eran Mukamel from Salk's Computational Neurobiology Laboratory.

Conventionally, DNA methylation in humans had been thought to occur almost exclusively at C's that are followed by a G in the genome sequence, so-called ‘CG methylation’. However, in a surprise discovery in 2009, the researchers found that a distinct form of DNA methylation, called ‘non-CG methylation’, constitutes a large fraction of DNA methylation in the human embryonic stem cell genome.
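The distinction between the two contexts comes down simply to which base follows a cytosine in the sequence. As a minimal illustrative sketch (not from the study itself; actual methylation maps come from bisulfite sequencing reads, not from a bare reference sequence), the two contexts can be counted in Python like this:

```python
def cytosine_contexts(seq: str) -> tuple[int, int]:
    """Count cytosines in CG context vs non-CG context in a DNA string.

    Illustrative only: real methylation calling uses bisulfite
    sequencing, which reports which cytosines carry the methyl tag.
    """
    seq = seq.upper()
    cg = non_cg = 0
    for i, base in enumerate(seq):
        if base != "C":
            continue
        # A cytosine immediately followed by G is a potential CG site;
        # any other following base (or none at all) is a non-CG context.
        if i + 1 < len(seq) and seq[i + 1] == "G":
            cg += 1
        else:
            non_cg += 1
    return cg, non_cg

print(cytosine_contexts("ACGTCCATCGG"))  # (2, 2): two CG sites, two non-CG
```

The same two-way split (with non-CG often further divided into CHG and CHH contexts) is standard in plant epigenomics, the field where the team developed its earlier methods.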

The researchers had previously observed both forms of DNA methylation in plant genomes when conducting earlier research that pioneered many of the techniques required for this brain study.

"Because of our earlier plant epigenome research we approached our human investigations from a distinct angle," Professor Lister said. "We were actively looking for these non-CG methylation sites that were not widely thought to exist. Our new study adds to this picture by showing that abundant non-CG methylation also exists in the human brain."

Surprisingly, this unique form of DNA methylation is almost exclusively found in neurons, and in patterns that are very similar between individuals. "Our research shows that a highly ordered system of DNA tagging operates in our brain cells and that this system is unique to the brain," says co-author Dr Julian Tonti-Filippini, a computational biologist of the ARC Centre of Excellence in Plant Energy Biology and the WA Centre of Excellence for Computational Systems Biology.

This finding is very important, as previous studies have suggested that DNA methylation may play an important role in learning, memory formation, and flexibility of human brain circuitry. "These results extended our knowledge of the unique role of DNA methylation in brain development and function," Professor Ecker said. "They offer a new framework for testing the role of the epigenome in healthy function and in pathological disruptions of neural circuits."

"We found that patterns of methylation are dynamic during brain development, in particular for non-CG methylation during early childhood and adolescence, which changes the way that we think about normal brain function and dysfunction," says study co-author Terrence J. Sejnowski, head of Salk's Computational Neurobiology Laboratory. Recent studies have suggested that DNA methylation may be involved in mental illnesses, including bipolar disorder, depression, and schizophrenia.
Environmental or experience-dependent alteration of these unique patterns of DNA methylation in neurons could lead to changes in gene expression, adds co-corresponding author M. Margarita Behrens, a scientist in Salk's Computational Neurobiology Laboratory. "The alterations of these methylation patterns will change the way in which networks are formed, which could, in turn, lead to the appearance of mental disorders later in life."

This study is the culmination of more than two years' hard work from an international, interdisciplinary team involving science superstars from The Salk Institute for Biological Studies in La Jolla, California, UWA and several other institutes internationally.

Professor Lister and Dr Tonti-Filippini are now focussing their new research at UWA on how to control these epigenetic patterns within plant and animal genomes, which they hope will translate into breakthrough applications benefitting both human health and agriculture.

The work was supported by the Australian Research Council, the Western Australian State Government, the National Institute of Mental Health, the Howard Hughes Medical Institute, the Gordon and Betty Moore Foundation, the California Institute for Regenerative Medicine, the Leukemia and Lymphoma Society, and the Centre for Theoretical Biological Physics at the University of California, San Diego.

Editor's Note: Original news release can be found here.

Thursday, January 24, 2013

RSA - The Rise of the "Biotechnosciences"


Interesting discussion - the video is simply the highlights, but there is a link to the full podcast with audience questions and answers.
In the last thirty years, the so-called life sciences have been completely transformed. We now have the hybridised ‘biotechnosciences’ which blur the boundaries between science, technology, universities, entrepreneurial biotech companies, and global pharmaceuticals. But what are the implications of this shift, and who benefits?

When the modern era of genomics opened in the 1990s, we were told that decoding the human genome would lead to cures for everything from cancer and schizophrenia to homelessness, and that a cornucopia of health and wealth would result. It’s now twenty years on, and the genome has been decoded, vast DNA ‘biobanks’ have been set up, some companies and individuals have become very rich, but both the hype and the hopes are greatly diminished.

What went wrong?

Join renowned sociologist Hilary Rose and neuroscientist Steven Rose at the RSA as they tackle the claims of the bioscience industry head on.

Chair: Marek Kohn, science writer, journalist and author of 'Trust: Self-Interest and the Common Good' and 'Turned Out Nice: How the British Isles Will Change as the World Heats Up'.
Enjoy the discussion as the Roses take on the bioscience industry and its claims.

RSA - The Rise of the Biotechnosciences

22 Nov 2012


Leading-edge bioscience promised so much - but did it really deliver? Renowned neuroscientist Steven Rose and sociologist Hilary Rose visit the RSA to tackle the claims of the bioscience industry head on.

Listen to the podcast of the full event including audience Q&A

Wednesday, November 28, 2012

RSA - The Rise of the ‘Biotechnosciences’


From the RSA, this is an interesting discussion about the newly emerging field of biotechnoscience, which blurs "the boundaries between science, technology, universities, entrepreneurial biotech companies, and global pharmaceuticals." Neuroscientist Steven Rose and sociologist Hilary Rose discuss the implications of this trend. Hilary and Steven Rose are the authors of Genes, Cells and Brains: The Promethean Promises of the New Biology.


The Rise of the ‘Biotechnosciences’

22nd Nov 2012

Listen to the audio

(full recording including audience Q&A)
Please right-click link and choose "Save Link As..." to download audio file onto your computer.

RSA Thursday

In the last thirty years, the so-called life sciences have been completely transformed. We now have the hybridised ‘biotechnosciences’ which blur the boundaries between science, technology, universities, entrepreneurial biotech companies, and global pharmaceuticals. But what are the implications of this shift, and who benefits?

When the modern era of genomics opened in the 1990s, we were told that decoding the human genome would lead to cures for everything from cancer and schizophrenia to homelessness, and that a cornucopia of health and wealth would result. It’s now twenty years on, and the genome has been decoded, vast DNA ‘biobanks’ have been set up, some companies and individuals have become very rich, but both the hype and the hopes are greatly diminished.

What went wrong?

Join renowned sociologist Hilary Rose and neuroscientist Steven Rose at the RSA as they tackle the claims of the bioscience industry head on.

Chair: Marek Kohn, science writer, journalist and author of 'Trust: Self-Interest and the Common Good' and 'Turned Out Nice: How the British Isles Will Change as the World Heats Up'.

See what people said on Twitter: #RSARose

Get the latest RSA Audio

Subscribe to RSA Audio iTunes Podcast iTunes | RSA Audio RSS Feed RSS | RSA Mixcloud page Mixcloud

You are welcome to link to, download, save or distribute our audio/video files electronically. Find out more about our open access licence.

Speakers

Books

Genes, Cells and Brains: The Promethean Promises of the New Biology - Hilary Rose and Steven Rose (Verso Books Ltd, 2012)

Saturday, September 15, 2012

ENCODE - The ENCyclopedia Of DNA Elements

Early last week, the science world was buzzing with the release of more than 30 papers highlighting the results from the second phase of ENCODE: "a consortium-driven project tasked with building the ‘ENCyclopedia Of DNA Elements’, a manual of sorts that defines and describes all the functional bits of the genome."

In the following article, Nature offered a comprehensive overview of the ENCODE project - following the article, there are three links that look at the results so far and some of what they suggest.

ENCODE: The human encyclopaedia

First they sequenced it. Now they have surveyed its hinterlands. But no one knows how much more information the human genome holds, or when to stop looking for it.


By Brendan Maher

05 September 2012


Ewan Birney would like to create a printout of all the genomic data that he and his collaborators have been collecting for the past five years as part of ENCODE, the Encyclopedia of DNA Elements. Finding a place to put it would be a challenge, however. Even if it contained 1,000 base pairs per square centimetre, the printout would stretch 16 metres high and at least 30 kilometres long.
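That printout figure can be sanity-checked with simple arithmetic (the 1,000 base pairs per square centimetre and the sheet dimensions are the article's numbers; the rest is unit conversion):

```python
# Back-of-the-envelope check of the quoted ENCODE printout dimensions.
bp_per_cm2 = 1_000                # print density quoted in the article
height_cm = 16 * 100              # 16 metres, in centimetres
length_cm = 30 * 1_000 * 100      # 30 kilometres, in centimetres

total_bp = bp_per_cm2 * height_cm * length_cm
print(f"Printout holds about {total_bp:.1e} base pairs")
# prints 4.8e+12, i.e. roughly 4.8 terabases of data
```

For comparison, the human genome itself is about 3 billion base pairs, so the quoted sheet corresponds to more than a thousand genomes' worth of raw sequence.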

ENCODE was designed to pick up where the Human Genome Project left off. Although that massive effort revealed the blueprint of human biology, it quickly became clear that the instruction manual for reading the blueprint was sketchy at best. Researchers could identify in its 3 billion letters many of the regions that code for proteins, but those make up little more than 1% of the genome, contained in around 20,000 genes — a few familiar objects in an otherwise stark and unrecognizable landscape. Many biologists suspected that the information responsible for the wondrous complexity of humans lay somewhere in the ‘deserts’ between the genes. ENCODE, which started in 2003, is a massive data-collection effort designed to populate this terrain. The aim is to catalogue the ‘functional’ DNA sequences that lurk there, learn when and in which cells they are active and trace their effects on how the genome is packaged, regulated and read.


After an initial pilot phase, ENCODE scientists started applying their methods to the entire genome in 2007. Now that phase has come to a close, signalled by the publication of 30 papers in Nature, Genome Research and Genome Biology. The consortium has assigned some sort of function to roughly 80% of the genome, including more than 70,000 ‘promoter’ regions — the sites, just upstream of genes, where proteins bind to control gene expression — and nearly 400,000 ‘enhancer’ regions that regulate expression of distant genes (see page 57) [1]. But the job is far from done, says Birney, a computational biologist at the European Molecular Biology Laboratory’s European Bioinformatics Institute in Hinxton, UK, who coordinated the data analysis for ENCODE. He says that some of the mapping efforts are about halfway to completion, and that deeper characterization of everything the genome is doing is probably only 10% finished. A third phase, now getting under way, will fill out the human instruction manual and provide much more detail.
Many who have dipped a cup into the vast stream of data are excited by the prospect. ENCODE has already illuminated some of the genome’s dark corners, creating opportunities to understand how genetic variations affect human traits and diseases. Exploring the myriad regulatory elements revealed by the project and comparing their sequences with those from other mammals promises to reshape scientists’ understanding of how humans evolved.

Yet some researchers wonder at what point enough will be enough. “I don’t see the runaway train stopping soon,” says Chris Ponting, a computational biologist at the University of Oxford, UK. Although Ponting is supportive of the project’s goals, he does question whether some aspects of ENCODE will provide a return on the investment, which is estimated to have exceeded US$185 million. But Job Dekker, an ENCODE group leader at the University of Massachusetts Medical School in Worcester, says that realizing ENCODE’s potential will require some patience. “It sometimes takes you a long time to know how much can you learn from any given data set,” he says.

Even before the human genome sequence was finished [2], the National Human Genome Research Institute (NHGRI), the main US funder of genomic science, was arguing for a systematic approach to identify functional pieces of DNA. In 2003, it invited biologists to propose pilot projects that would accrue such information on just 1% of the genome, and help to determine which experimental techniques were likely to work best on the whole thing.

The pilot projects transformed biologists’ view of the genome. Even though only a small amount of DNA manufactures protein-coding messenger RNA, for example, the researchers found that much of the genome is ‘transcribed’ into non-coding RNA molecules, some of which are now known to be important regulators of gene expression. And although many geneticists had thought that the functional elements would be those that are most conserved across species, they actually found that many important regulatory sequences have evolved rapidly. The consortium published its results [3] in 2007, shortly after the NHGRI had issued a second round of requests, this time asking would-be participants to extend their work to the entire genome. This ‘scale-up’ phase started just as next-generation sequencing machines were taking off, making data acquisition much faster and cheaper. “We produced, I think, five times the data we said we were going to produce without any change in cost,” says John Stamatoyannopoulos, an ENCODE group leader at the University of Washington in Seattle.

The 32 groups, including more than 440 scientists, focused on 24 standard types of experiment (see ‘Making a genome manual’). They isolated and sequenced the RNA transcribed from the genome, and identified the DNA binding sites for about 120 transcription factors. They mapped the regions of the genome that were carpeted by methyl chemical groups, which generally indicate areas in which genes are silent. They examined patterns of chemical modifications made to histone proteins, which help to package DNA into chromosomes and can signal regions where gene expression is boosted or suppressed. And even though the genome is the same in most human cells, how it is used is not. So the teams did these experiments on multiple cell types — at least 147 — resulting in the 1,648 experiments that ENCODE reports on this week [1, 4–8].



Stamatoyannopoulos and his collaborators [4], for example, mapped the regulatory regions in 125 cell types using an enzyme called DNaseI (see page 75). The enzyme has little effect on the DNA that hugs histones, but it chops up DNA that is bound to other regulatory proteins, such as transcription factors. Sequencing the chopped-up DNA suggests where these proteins bind in the different cell types. The team discovered around 2.9 million of these sites altogether. Roughly one-third were found in only one cell type and just 3,700 showed up in all cell types, suggesting major differences in how the genome is regulated from cell to cell.

The real fun starts when the various data sets are layered together. Experiments looking at histone modifications, for example, reveal patterns that correspond with the borders of the DNaseI-sensitive sites. Then researchers can add data showing exactly which transcription factors bind where, and when. The vast desert regions have now been populated with hundreds of thousands of features that contribute to gene regulation. And every cell type uses different combinations and permutations of these features to generate its unique biology. This richness helps to explain how relatively few protein-coding genes can provide the biological complexity necessary to grow and run a human being. ENCODE “is much more than the sum of the parts”, says Manolis Kellis, a computational genomicist at the Massachusetts Institute of Technology in Cambridge, who led some of the data-analysis efforts.

The data, which have been released throughout the project, are already helping researchers to make sense of disease genetics. Since 2005, genome-wide association studies (GWAS) have spat out thousands of points on the genome in which a single-letter difference, or variant, seems to be associated with disease risk. But almost 90% of these variants fall outside protein-coding genes, so researchers have little clue as to how they might cause or influence disease.

The map created by ENCODE reveals that many of the disease-linked regions include enhancers or other functional sequences. And cell type is important. Kellis’s group looked at some of the variants that are strongly associated with systemic lupus erythematosus, a disease in which the immune system attacks the body’s own tissues. The team noticed that the variants identified in GWAS tended to be in regulatory regions of the genome that were active in an immune-cell line, but not necessarily in other types of cell. Kellis’s postdoc Lucas Ward has created a web portal called HaploReg, which allows researchers to screen variants identified in GWAS against ENCODE data in a systematic way. “We are now, thanks to ENCODE, able to attack much more complex diseases,” Kellis says.

Are we there yet?

Researchers could spend years just working with ENCODE’s existing data — but there is still much more to come. On its website, the University of California, Santa Cruz, has a telling visual representation of ENCODE’s progress: a grid showing which of the 24 experiment types have been done and which of the nearly 180 cell types ENCODE has now examined. It is sparsely populated. A handful of cell lines, including the lab workhorses called HeLa and GM12878, are fairly well filled out. Many, however, have seen just one experiment.

Scientists will fill in many of the blanks as part of the third phase, which Birney refers to as the ‘build out’. But they also plan to add more experiments and cell types. One way to do that is to expand the use of a technique known as chromatin immunoprecipitation (ChIP), which looks for all sequences bound to a specific protein, including transcription factors and modified histones. Through a painstaking process, researchers develop antibodies for these DNA binding proteins one by one, use those antibodies to pull the protein and any attached DNA out of cell extracts, and then sequence that DNA.

But at least that is a bounded problem, says Birney, because there are thought to be only about 2,000 such proteins to explore. (ENCODE has already sampled about one-tenth of these.) More difficult is figuring out how many cell lines to interrogate. Most of the experiments so far have been performed on lines that grow readily in culture but have unnatural properties. The cell line GM12878, for example, was created from blood cells using a virus that drives the cells to reproduce, and histones or other factors may bind abnormally to its amped-up genome. HeLa was established from a cervical-cancer biopsy more than 50 years ago and is riddled with genomic rearrangements. Birney recently quipped at a talk that it qualifies as a new species.

ENCODE researchers now want to look at cells taken directly from a person. But because many of these cells do not divide in culture, experiments have to be performed on only a small amount of DNA, and some tissues, such as those in the brain, are difficult to sample. ENCODE collaborators are also starting to talk about delving deeper into how variation between people affects the activity of regulatory elements in the genome. “At some places there’s going to be some sequence variation that means a transcription factor is not going to bind here the same way it binds over here,” says Mark Gerstein, a computational biologist at Yale University in New Haven, Connecticut, who helped to design the data architecture for ENCODE. Eventually, researchers could end up looking at samples from dozens to hundreds of people.

The range of experiments is expanding, too. One quickly developing area of study involves looking at interactions between parts of the genome in three-dimensional space. If the intervening DNA loops out of the way, enhancer elements can regulate genes hundreds of thousands of base pairs away, so proteins bound to the enhancer can end up interacting with those attached near the gene. Dekker and his collaborators have been developing a technique to map these interactions. First, they use chemicals that fuse DNA-binding proteins together. Then they cut out the intervening loops and sequence the bound DNA, revealing the distant relationships between regulatory elements. They are now scaling up these efforts to explore the interactions across the genome. “This is beyond the simple annotation of the genome. It’s the next phase,” Dekker says.

The question is, where to stop? Kellis says that some experimental approaches could hit saturation points: if the rate of discoveries falls below a certain threshold, the return on each experiment could become too low to pursue. And, says Kellis, scientists could eventually accumulate enough data to predict the function of unexplored sequences. This process, called imputation, has long been a goal for genome annotation. “I think there’s going to be a phase transition where sometimes imputation is going to be more powerful and more accurate than actually doing the experiments,” Kellis says.

Yet with thousands of cell types to test and a growing set of tools with which to test them, the project could unfold endlessly. “We’re far from finished,” says geneticist Rick Myers of the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. “You might argue that this could go on forever.” And that worries some people. The pilot ENCODE project cost an estimated $55 million; the scale-up was about $130 million; and the NHGRI could award up to $123 million in the next phase.

Some researchers argue that they have yet to see a solid return on that investment. For one thing, it has been difficult to collect detailed information on how the ENCODE data are being used. Mike Pazin, a programme director at the NHGRI, has scoured the literature for papers in which ENCODE data played a significant part. He has counted about 300, 110 of which come from labs without ENCODE funding. The exercise was complicated, however, because the word ‘encode’ shows up in genetics and genomics papers all the time. “Note to self,” says Pazin wryly, “make up a unique project name next time around.”

A few scientists contacted for this story complain that this isn’t much to show from nearly a decade of work, and that the choices of cell lines and transcription factors have been somewhat arbitrary. Some also think that the money eaten up by the project would be better spent on investigator-initiated, hypothesis-driven projects — a complaint that also arose during the Human Genome Project. But unlike the genome project, which had a clear endpoint, critics say that ENCODE could continue to expand and is essentially unfinishable. (None of the scientists would comment on the record, however, for fear that it would affect their funding or that of their postdocs and graduate students.)

Birney sympathizes with the concern that hypothesis-led research needs more funding, but says that “it’s the wrong approach to put these things up as direct competition”. The NHGRI devotes a lot of its research dollars to big, consortium-led projects such as ENCODE, but it gets just 2% of the total US National Institutes of Health budget, leaving plenty for hypothesis-led work. And Birney argues that the project’s systematic approach will pay dividends. “As mundane as these cataloguing efforts are, you’ve got to put all the parts down on the table before putting it together,” he says.

After all, says Gerstein, it took more than half a century to get from the realization that DNA is the hereditary material of life to the sequence of the human genome. “You could almost imagine that the scientific programme for the next century is really understanding that sequence.”


Nature 489, 46–48 (06 September 2012) | doi:10.1038/489046a

References

  1. The ENCODE Project Consortium. Nature 489, 57–74 (2012).
  2. International Human Genome Sequencing Consortium. Nature 431, 931–945 (2004).
  3. The ENCODE Project Consortium. Nature 447, 799–816 (2007).
  4. Thurman, R. E. et al. Nature 489, 75–82 (2012).
  5. Neph, S. et al. Nature 489, 83–90 (2012).
  6. Gerstein, M. B. et al. Nature 489, 91–100 (2012).
  7. Djebali, S. et al. Nature 489, 101–108 (2012).
  8. Sanyal, A., Lajoie, B. R., Jain, G. & Dekker, J. Nature 489, 109–113 (2012).


Sunday, October 23, 2011

TED Talks - Mark Pagel: How language transformed humanity

This is from a couple of months back, but it presents an interesting theory. We can only conjecture about how language transformed human culture, or even created it, but it is hopeful to think that if language helped us develop culture, it can also help us transform our culture from the mess it is now into something better, based on cooperation and compassion.

Mark Pagel: How language transformed humanity


Biologist Mark Pagel shares an intriguing theory about why humans evolved our complex system of language. He suggests that language is a piece of "social technology" that allowed early human tribes to access a powerful new tool: cooperation.





Why you should listen to him:

Mark Pagel builds statistical models to examine the evolutionary processes imprinted in human behavior, from genomics to the emergence of complex systems -- to culture. His latest work examines the parallels between linguistic and biological evolution by applying methods of phylogenetics, or the study of evolutionary relatedness among groups, essentially viewing language as a culturally transmitted replicator with many of the same properties we find in genes. He’s looking for patterns in the rates of evolution of language elements, and hoping to find the social factors that influence trends of language evolution.

At the University of Reading, Pagel heads the Evolution Laboratory in the biology department, where he explores such questions as, "Why would humans evolve a system of communication that prevents them from communicating with other members of the same species?" He has used statistical methods to reconstruct features of dinosaur genomes, and to infer ancestral features of genes and proteins.

He says: "Just as we have highly conserved genes, we have highly conserved words. Language shows a truly remarkable fidelity."
"What the new studies accomplish is a far more sophisticated analysis of the regularity of language change that earlier scholars noted or theorized," notes the Linguistic Anthropology blog.