Offering multiple perspectives from many fields of human inquiry that may move all of us toward a more integrated understanding of who we are as conscious beings.
From Nautilus, this is a very cool article by Ed Yong (who writes the Not Exactly Rocket Science blog at National Geographic's Phenomena) on the emergence of eukaryotic cells, an enormous leap in complexity and arguably "the most important event in the history of life on Earth."
The emergence of eukaryotic life also brought with it the emergence of mitochondria, the cell's "power plants," which generate most of the adenosine triphosphate (ATP) the cell uses for energy.
Interestingly, unlike other evolutionary innovations, the eukaryotic cell and mitochondria appear only once in the evolutionary timeline. Fortunately for us, they stuck around.
All sophisticated life on the planet Earth may owe its existence to one freakish event.
By Ed Yong | September 4, 2014
AT FIRST GLANCE, a tree could not be more different from the caterpillars that eat its leaves, the mushrooms sprouting from its bark, the grass growing by its trunk, or the humans canoodling under its shade. Appearances, however, can be deceiving. Zoom in closely, and you will see that these organisms are all surprisingly similar at a microscopic level. Specifically, they all consist of cells that share the same basic architecture. These cells contain a central nucleus—a command center that is stuffed with DNA and walled off by a membrane. Surrounding it are many smaller compartments that act like tiny organs, carrying out specialized tasks like storing molecules or making proteins. Among these are the mitochondria—bean-shaped power plants that provide the cells with energy. This combination of features is shared by almost every cell in every animal, plant, fungus, and alga, a group of organisms known as “eukaryotes.” Bacteria showcase a second, simpler way of building a cell—one that preceded the complex eukaryotes by at least a billion years. These “prokaryotes” always consist of a single cell, which is smaller than a typical eukaryotic one and bereft of internal compartments like mitochondria and a nucleus. Even though limited to a relatively simple cell, bacteria are impressive survival machines. They colonize every possible habitat, from miles-high clouds to the deep ocean. They have a dazzling array of biological tricks that allow them to cause diseases, eat crude oil, conduct electric currents, draw power from the Sun, and communicate with each other. Still, without the eukaryotic architecture, bacteria are forever constrained in size and complexity. Sure, they have their amazing skill sets, but it’s the eukaryotes that cover the Earth in forest and grassland, that navigate the planet looking for food and mates, that build rockets to Mars. The transition from the classic prokaryotic model to the deluxe eukaryotic one is arguably the most important event in the history of life on Earth. And in more than 3 billion years of existence, it happened exactly once. Life is full of complex structures that evolve time and again. Individual cells have united to form many-celled creatures like animals and plants on dozens of separate occasions. The same is true for eyes, which have independently evolved time and again. But the eukaryotic cell is a one-off innovation. Bacteria have repeatedly nudged along the path towards complexity. Some are very big (for microbes); others move in colonies that behave like single, many-celled creatures. But none of them have acquired the full suite of crucial features that define eukaryotes: large size, the nucleus, internal compartments, mitochondria, and more. As Nick Lane from University College London writes, “Bacteria have made a start up every avenue of eukaryotic complexity, but then stopped short.” Why? It is not for lack of opportunity. The world is swarming with countless prokaryotes that evolve at breathtaking rates. Even so, they were not quick about inventing eukaryotic cells. Fossils tell us that the oldest bacteria arose between 3 and 3.5 billion years ago, but there are no eukaryotes from before 2.1 billion years ago. Why did the prokaryotes remain as simple cells for so damn long? There are many possible explanations, but one of these has recently gained a lot of ground. It tells of a prokaryote that somehow found its way inside another, and formed a lasting partnership with its host. 
This inner cell—a bacterium—abandoned its free-living existence and eventually transformed into the mitochondria. These internal power plants provided the host cell with a bonanza of energy, allowing it to evolve in new directions that other prokaryotes could never reach. If this story is true, and there are still those who doubt it, then all eukaryotes—every flower and fungus, spider and sparrow, man and woman—descended from a sudden and breathtakingly improbable merger between two microbes. They were our great-great-great-great-...-great-grandparents, and by becoming one, they laid the groundwork for the life forms that seem to make our planet so special. The world as we see it (and the fact that we see it at all; eyes are a eukaryotic invention) was irrevocably changed by that fateful union—a union so unlikely that it very well might not have happened at all, leaving our world forever dominated by microbes, never to welcome sophisticated and amazing life like trees, mushrooms, caterpillars, and us.
IN 1905, the Russian biologist Konstantin Mereschkowski first suggested that some parts of eukaryotic cells were once endosymbionts—free-living microbes that took up permanent residence within other cells. He thought the nucleus originated in this way, as did the chloroplasts that allow plant cells to harness sunlight. He missed the mitochondria, but the American anatomist Ivan Wallin pegged them for endosymbionts in 1923. These ideas were ignored for decades until an American biologist—the late Lynn Margulis—revived them in 1967. In a radical paper, she made the case that mitochondria and chloroplasts were once free-living bacteria that had been sequentially ingested by another ancient microbe. That is why they still have their own tiny genomes and why they still superficially look like bacteria. Margulis argued that endosymbiosis was not a crazy, oddball concept—it was one of the most important leitmotivs in the eukaryotic opera. The paper was a tour de force of cell biology, biochemistry, geology, genetics, and paleontology. Its conclusion was also grossly unorthodox. At the time, most people believed that mitochondria had simply come from other parts of the cell.
“[Endosymbiosis] was taboo,” says Bill Martin from Heinrich Heine University Düsseldorf, in Germany. “You had to sneak into a closet to whisper to yourself about it before coming out again.” Margulis’ views drew fierce criticism, but she defended them with equal vigor. Soon she had the weight of evidence behind her. Genetic studies, for example, showed that mitochondrial DNA is similar to that of free-living bacteria. Now, very few scientists doubt that mergers infused the cells of every animal and plant with the descendants of ancient bacteria. But the timing of that merger, the nature of its participants, and its relevance to the rise of eukaryotes are all still hotly debated. In recent decades, origin stories for the eukaryotes have sprouted up faster than old ones could be tested, but most fall into two broad camps. The first—let’s call it the “gradual-origin” group—claimed that prokaryotes evolved into eukaryotes by incrementally growing in size and picking up traits like a nucleus and the ability to swallow other cells. Along the way, these proto-eukaryotes gained mitochondria, because they would regularly engulf bacteria. This story is slow, steady, and classically Darwinian in nature. The acquisition of mitochondria was just another step in a long, gradual transition. This is what the late Margulis believed right till the end. The alternative—let’s call it the “sudden-origin” camp—is very different. It dispenses with slow, Darwinian progress and says that eukaryotes were born through the abrupt and dramatic union of two prokaryotes. One was a bacterium. The other was part of the other great lineage of prokaryotes: the archaea. (More about them later.) These two microbes look superficially alike, but they are as different in their biochemistry as PCs and Macs are in their operating systems. By merging, they created, in effect, the starting point for the first eukaryotes. Bill Martin and Miklós Müller put forward one of the earliest versions of this idea in 1998. They called it the hydrogen hypothesis. It involved an ancient archaeon that, like many modern members, drew energy by bonding hydrogen and carbon dioxide to make methane. It partnered with a bacterium that produced hydrogen and carbon dioxide, which the archaeon could then use. Over time, they became inseparable, and the bacterium became a mitochondrion. There are many variants of this hypothesis, which differ in the reasons for the merger and the exact identities of the archaeon and the bacterium that were involved. But they are all united by one critical feature setting them apart from the gradual-origin ideas: They all say that the host cell was still a bona fide prokaryote. It was an archaeon, through and through. It had not started to grow in size. It did not have a nucleus. It was not on the path to becoming a eukaryote; it set off down that path because it merged with a bacterium. As Martin puts it, “The inventions came later.” This distinction could not be more important. According to the sudden-origin ideas, mitochondria were not just one of many innovations for the early eukaryotes. “The acquisition of mitochondria was the origin of eukaryotes,” says Lane. “They were one and the same event.” If that is right, the rise of the eukaryotes was a fundamentally different sort of evolutionary transition than the gradual changes that led to the eye, or photosynthesis, or the move from sea to land.
It was a fluke event of incredible improbability—one that, as far as we know, only happened after a billion years of life on Earth and has not been repeated in the 2 billion years since. “It’s a fun and thrilling possibility,” says Lane. “It may not be true, but it’s beautiful.” IN 1977, microbiologist Carl Woese had the bright idea of comparing different organisms by sequencing their genes. This is an everyday part of modern biology, but at the time, scientists relied on physical traits to deduce the evolutionary relationships between different species. Comparing genes was bold and new, and it would play a critical role in showing how complicated life like us—the eukaryotes—came to be. Woese focused on 16S rRNA, a gene that is involved in the essential task of making proteins and is found in all living things. Woese reasoned that as organisms diverge into new species, their versions of rRNA should become increasingly dissimilar. By comparing the gene across a range of prokaryotes and eukaryotes, the branches of the tree of life should reveal themselves. They did, but no one expected the results. Woese’s tree had three main branches. Bacteria and eukaryotes sat on two of them. But the third consisted of an obscure bunch of prokaryotes that had been found in hot, inhospitable environments. Woese called them archaea, from the Greek word for ancient. Everyone had taken them for obscure types of bacteria, but Woese’s tree announced them as a third domain of life. It was as if everyone was staring at a world map, and Woese had politely shown that a full third of it had been folded underneath. In Woese’s classic three-domain tree, the eukaryotes and archaea are sister groups. They both evolved from a shared ancestor that split off from the bacteria very early in the history of life on Earth. But this tidy picture started to unravel in the 1990s, as the era of modern genetics kicked into high gear and scientists started sequencing more eukaryotic genes. Some were indeed closely related to archaeal genes, but others turned out to be more closely related to bacterial ones. The eukaryotes turned out to be a confusing hodgepodge, and their evolutionary affinities kept on shifting with every new sequenced gene. In 2004, James Lake changed the rules of engagement. Rather than looking at any single gene, he and his colleague Maria Rivera compared the entire genomes of two eukaryotes, three bacteria, and three archaea. Their analysis supported the merger-first ideas: They concluded that the common ancestor of all life diverged into bacteria and archaea, which evolved independently until two of their members suddenly merged. This created the first eukaryotes and closed what now appeared to be a “ring of life.” Before that fateful encounter, life had just two major domains. Afterward, it had three. Rivera and Lake were later criticized for only looking at seven species, but no one could possibly accuse Irish evolutionary biologist James McInerney of the same fault. In 2007, he crafted a super-tree using more than 5,700 genes from across the genomes of 168 prokaryotes and 17 eukaryotes. His conclusion was the same: Eukaryotes are merger organisms, formed through an ancient symbiosis between a bacterium and an archaeon. The genes from these partners have not integrated seamlessly. They behave like immigrants in New York’s Asian and Latino communities, who share the same city but dominate different areas. 
For example, they mostly interact with their own kind: archaeal genes with other archaeal genes, and bacterial genes with bacterial genes. “You’ve got two groups in the playground and they’re playing with each other differently, because they’ve spent different amounts of time with each other,” says McInerney. They also do different jobs. The archaeal genes are more likely to be involved in copying and making use of DNA. The bacterial genes are more involved in breaking down food, making nutrients, and the other day-to-day aspects of being a microbe. And although the archaeal genes are outnumbered by their bacterial neighbors by 4 to 1, they seem to be more important. They are nearly twice as active. They produce proteins that play more central roles in their respective cells. They are more likely to kill their host if they are mistakenly deleted. Over the last four years, McInerney has found this same pattern again and again, in yeast, in humans, in dozens of other eukaryotes. This all makes sense if you believe the sudden-origin idea. When those ancient partners merged, the immigrant bacterial genes had to be integrated around a native archaeal network, which had already been evolving together for countless generations. They did integrate, and while many of the archaeal genes were displaced, an elite set could not be ousted. Despite 2 billion years of evolution, this core network remains, and its genes retain a pivotal role out of all proportion to their small number. THE SUDDEN-ORIGIN hypothesis makes one critical prediction: All eukaryotes must have mitochondria. Any exceptions would be fatal, and in the 1980s, it started to look like there were exceptions aplenty. If you drink the wrong glass of water in the wrong part of the world, your intestines might become home to a gut parasite called Giardia. In the weeks that follow, you can look forward to intense stomach cramps and violent diarrhea. Agony aside, Giardia has a bizarre and interesting anatomy. It consists of a single cell that looks like a malevolent teardrop with four tail-like filaments. Inside, it has not one nucleus but two. It is clearly a eukaryote. But it has no mitochondria.
Mitochondria (left) are domesticated versions of bacteria (right) that now provide the cells of every animal, plant and fungus with energy. (Image: Shutterstock)
There are at least a thousand other single-celled eukaryotes, mostly parasites, which also lack mitochondria. They were once called archezoans, and their missing power plants made them focal points for the debate around eukaryotic origins. They seemed to be living remnants of a time when prokaryotes had already turned into primitive eukaryotes, but before they picked up their mitochondria. Their very existence testified that mitochondria were a late acquisition in the rise of eukaryotes, and threatened to deal a knockout blow to the sudden-origin tales.
That blow was deflected in the 1990s, when scientists slowly realized that Giardia and its ilk have genes that are only ever found in the mitochondria of other eukaryotes. These archezoans must have once had mitochondria, which were later lost or transformed into other cellular compartments. They aren’t primitive eukaryotes from a time before the mitochondrial merger—they are advanced eukaryotes that have degenerated, just as tapeworms and other parasites often lose complex organs they no longer need after they adopt a parasitic way of life. “We’ve yet to find a single primitive, mitochondria-free eukaryote,” says McInerney, “and we’ve done a lot of looking.” With the archezoan club dismantled, the sudden-origin ideas returned to the fore with renewed vigor. “We predicted that all eukaryotes had a mitochondrion,” says Martin. “Everyone was laughing at the time, but it’s now textbook knowledge. I claim victory. Nobody’s giving it to me—except the textbooks.” IF MITOCHONDRIA were so important, why have they only evolved once? And for that matter, why have eukaryotes only evolved once? Nick Lane and Bill Martin answered both questions in 2010, in a bravura paper called “The energetics of genome complexity,” published in Nature. In a string of simple calculations and elegant logic, they reasoned that prokaryotes have stayed simple because they cannot afford the gas-guzzling lifestyle that all eukaryotes lead. In the paraphrased words of Scotty: They cannae do it, captain, they just don’t have the power. Lane and Martin argued that for a cell to become more complex, it needs a bigger genome. Today, for example, the average eukaryotic genome is around 100–10,000 times bigger than the average prokaryotic one. But big genomes don’t come for free. A cell needs energy to copy its DNA and to use the information encoded by its genes to make proteins. The latter, in particular, is the most expensive task that a cell performs, soaking up three-quarters of its total energy supply. If a bacterium or archaeon were to expand its genome by 10 times, it would need roughly 10 times more energy to fund the construction of its extra proteins. One solution might be to get bigger. The energy-producing reactions that drive prokaryotes take place across their membranes, so a bigger cell with a larger membrane would have a bigger energy supply. But bigger cells also need to make more proteins, so they would burn more energy than they gained. If a prokaryote scaled up to the same size and genome as a eukaryotic cell, it would end up with 230,000 times less energy to spend on each gene! Even if this woefully inefficient wretch could survive in isolation, it would be easily outcompeted by other prokaryotes. Prokaryotes are stuck in an energetic canyon that keeps them simple and small. They have no way of climbing out. If anything, evolution drives them in the opposite direction, mercilessly pruning their genomes into a ring of densely packed and overlapping genes. Only once did a prokaryote escape from the canyon, through a singular and improbable trick—it acquired mitochondria. Mitochondria have an inner membrane that folds in on itself like heavily ruched fabric. They offer their host cells a huge surface area for energy-producing chemical reactions. But these reactions are volatile, fickle things. They involve a chain of proteins in the mitochondrial membranes that release energy by stripping electrons from food molecules, passing them along to one another, and dumping them onto oxygen.
This produces high electric voltages and unstable molecules. If anything goes wrong, the cell can easily die. But mitochondria also have a tiny stock of DNA that encodes about a dozen of the proteins that take part in these electron-transfer chains. They can quickly make more or less of any of the participating proteins, to keep the voltages across their membranes in check. They supply both power and the ability to control that power. And they do that without having to bother the nucleus. They are specialized to harness energy. Mitochondria are truly the powerhouse of the eukaryotic cell. “The command center is too bureaucratic and far away to do anything,” says Lane. “You need to have these small teams, which have limited powers but can use them at their discretion to respond to local situations. If they’re not there, everything dies.” Prokaryotes do not have powerhouses; they are powerhouses. They can fold their membranes inwards to gain extra space for producing energy, and many do. But they do not have the secondary DNA outposts that produce high-energy molecules so that the central government (in eukaryotes, the nucleus) has the time and energy to undertake evolutionary experiments. The only way to do that is to merge with another cell. When one archaeon did so, it instantly leapt out of its energetic canyon, powered by its new bacterial partner. It could afford to expand its genome, to experiment with new types of genes and proteins, to get bigger, and to evolve down new and innovative routes. It could form a nucleus to contain its genetic material, and absorb other microbes to use as new tiny organs, like the chloroplasts that perform photosynthesis in plants. “You need a mitochondrial level of power to finance those evolutionary adventures,” says Martin. “They don’t come for free.” Lane and Martin’s argument is a huge boon for the sudden-origin hypothesis. To become complex, cells need the stable, distributed energy supply that only mitochondria can provide. Without these internal power stations, other prokaryotes, for all their evolutionary ingenuity, have always stayed as single, simple cells. The kind of merger that creates mitochondria seems to be a ludicrously unlikely event. Prokaryotes have only managed it once in more than 3 billion years, despite coming into contact with each other all the time. “There must have been thousands or millions of these cases over evolutionary time, but they’ve got to find a way of getting along, of reconciling and co-adapting to each other,” says Lane. “That seems to be genuinely difficult.” This improbability has implications for the search for alien life. On other worlds with the right chemical conditions, Lane believes that life would be sure to emerge. But without a fateful merger, it would be forever microbial. Perhaps this is the answer to the Fermi paradox—the puzzling contradiction between the high apparent odds that intelligent life would exist elsewhere among the billions of planets in the Milky Way, and our inability to find any signs of such intelligence. As Lane wrote in 2010, “The unavoidable conclusion is that the universe should be full of bacteria, but more complex life will be rare.” And if intelligent aliens did exist, they would probably have something like mitochondria, too. THE ORIGIN of eukaryotes is, by no means, a settled matter of fact. Ideas have waxed and waned in influence, and although many lines of evidence currently point to a sudden origin, there is still plenty of dissent.
Some scientists support radical notions like the idea that prokaryotes are versions of eukaryotes that evolved to greater simplicity, rather than their ancestors. Others remain stalwart devotees of Woese’s tree. Writing in 2007, Anthony Poole and David Penny accused the sudden-origin camp of pushing “mechanisms founded in unfettered imagination.” They pointed out that archaea and bacteria do not engulf one another—that’s a hallmark of eukaryotes. It is easy to see how a primitive eukaryote might have gained mitochondria by engulfing a bacterium, but very hard to picture how a relatively simple archaeon did so. This powerful retort has lost some of its sting thanks to a white insect called the citrus mealybug. Its cells contain a bacterium called Tremblaya, and Tremblaya contains another bacterium called Moranella. Here is a prokaryote that somehow has another prokaryote living inside it, despite its apparent inability to engulf anything. Still, the details of how the initial archaeon-bacterium merger happened remain a mystery. How did one get inside the other? What sealed their partnership—was it hydrogen, as Martin and Müller suggested, or something else? How did they manage to stay conjoined? “I think we have the roadmap right, but we don’t have all the white lines and the signposts in place,” says Martin. “We have the big picture but not all the details.” Perhaps we will never know for sure. The origin of eukaryotes happened so far back in time that it’s a wonder we have even an inkling of what happened. Dissent is inevitable; uncertainty, guaranteed.
“You can’t convince everyone about anything in early evolution, because they hold to their own beliefs,” says Martin. “But I’m not worried about trying to convince anyone. I’ve solved these problems to my own satisfaction and it all looks pretty consistent. I’m happy.” ~ Ed Yong is an award-winning science writer. His work has appeared in Wired, Nature, the BBC, New Scientist, the Guardian, the Times, Aeon, Discover, Scientific American, The Scientist, the BMJ, Slate, and more. This article originally appeared in our “Mergers & Acquisitions” issue in February 2014.
Terrence Deacon is the author of Incomplete Nature: How Mind Emerged from Matter (2013) and The Symbolic Species: The Co-evolution of Language and the Brain (1998). In the talk below, Deacon explores the "origins of life problem" by
attempting to identify the necessary and sufficient molecular
relationships required to transform inert chemicals into biological systems. Deacon introduces a model system - autogenesis - that redefines
biological information and opens the search for life's origin to cosmic
and planetary contexts seldom considered.
I begin with a simple molecular model system consisting of coupled
reciprocal catalysis and self-assembly in which one of the catalytic
by-products tends to spontaneously self-assemble into a containing shell
(analogous to a viral capsid). I term this dynamical relationship
autogenesis because it is self-reconstituting in response to
degradation. Self-reconstitution (and reproduction) is made possible by
the fact that each of these linked self-organizing processes generates
boundary constraints that promote and limit the other, and because this
synergy thereby becomes embodied as a persistent rate-independent and
substrate-indifferent higher order constraint on component constraint
generation processes. It is proposed that this formal synergy is
necessary and sufficient to constitute regulation as opposed to mere
constraint. Two minor elaborations of this simple model system
demonstrate that this simplest form of regulation can be the foundation
for the evolution of two higher-order forms: cybernetic and
template-based regulation.
The
investigation of the origins of life has been hindered by what we think
we know about current living organisms. This includes three assumptions
about necessary conditions: 1) that life emerged entirely on Earth, 2)
that it is dependent on the availability of liquid water, and 3) that it
is coextensive with the emergence of molecules able to replicate
themselves.
In addition, the three most widely explored
alternative general models for a molecular process that could serve as a
precursor to life also reflect reductionistically-envisioned fragments
of current living systems: e.g. container-first, metabolism-first, or
information-first scenarios. Finally, we are hindered by a technical
concept of information that is fundamentally incomplete in precisely
ways that are critical to characterizing living processes.
These
all reflect reductionistic "top-down" approaches to the extent that they
begin with a reverse-engineering view of what constitutes a living
Earth-organism and explore possible re-compositional scenarios. This is a
Frankensteinian enterprise that also begins with assumptions that are
highly Earth-life specific and therefore unlikely to lead to a general
exo-biology.
The approach Dr. Deacon will outline instead begins
from an unstated conundrum about the origins of life. The initial
transition to a life-like process necessarily exemplified two almost
inconceivably incompatible properties: 1) it must have involved
exceedingly simple molecular interactions, and 2) it must have embodied a
thermodynamic organization with the unprecedented capacity to locally
compensate for spontaneous thermodynamic degradation as well as to
stabilize one or more intrinsically self-destroying self-organizing
processes.
This talk will explore the origins of life problem by
attempting to identify the necessary and sufficient molecular
relationships able to embody these two properties. From this perspective
Dr. Deacon will develop a model system - autogenesis - that redefines
biological information and opens the search for life's origin to cosmic
and planetary contexts seldom considered.
From UC Berkeley's Greater Good Science Center, here is a collection of 10 research summaries on topics related to having a meaningful life, for example, the idea that a meaningful and healthy life is not the same as a happy life; that mindfulness meditation can make people more altruistic (even in the face of barriers to compassionate action); and that the emotional benefits of altruism are likely to be human universals.
There is some nice research summarized here - and for a nice change of pace, the news is good.
The past few years have been marked by two major trends in the science of a meaningful life. One is that researchers continued to add sophistication and depth to our understanding of positive feelings and behaviors. Happiness is good for you, but not all the time; empathy ties us together, and can overwhelm you; humans are born with an innate sense of fairness and morality that changes in response to context. This has been especially true of the study of mindfulness and attention, which is producing more and more potentially life-changing discoveries. The other trend involves intellectual diversity. The turn from the study of human dysfunction to human strengths and virtues may have started in psychology, with the positive psychology movement, but that perspective spread to adjacent disciplines like neuroscience and criminology, and from there to fields like sociology, economics, and medicine. Across all these fields, we’re seeing more and more support for the idea that empathy, compassion, and happiness are not you-have-it-or-not capacities but skills that can be cultivated by individuals and by groups of people through deliberate decisions. As of 2013, the UC Berkeley Greater Good Science Center is part of a mature, multidisciplinary movement. Here are 10 scientific insights published in peer-reviewed journals from the past year that we anticipate will be cited in scientific studies, help shift public debate, and change individual behavior in the year to come.
A meaningful life is different—and healthier—than a happy one.
The research we cover here at the Greater Good Science Center is often referred to as “the science of happiness,” yet our tagline is “The Science of a Meaningful Life.” Meaning, happiness—is there a difference? New research suggests that there is. When a study in the Journal of Positive Psychology tried to disentangle the concepts of “meaning” and “happiness” by surveying roughly 400 Americans, it found considerable overlap between the two—but also some key distinctions. Based on those surveys, for instance, feeling good and having one’s needs met seem integral to happiness but unrelated to meaning. Happy people seem to dwell in the present moment, not the past or future, whereas meaning seems to involve linking past, present, and future. People derive meaningfulness (but not necessarily happiness) from helping others—being a “giver”—whereas people derive happiness (but not necessarily meaningfulness) from being a “taker.” And while social connections are important to meaning and happiness, the type of connection matters: Spending time with friends is important to happiness but not meaning, whereas the opposite is true for spending time with loved ones. And other research published in the Proceedings of the National Academy of Sciences suggests that these differences might have important implications for our health. When Barbara Fredrickson and Steve Cole compared the immune cells of people who reported being “happy” with those of people who reported “a sense of direction and meaning,” the people leading meaningful lives seemed to have stronger immune systems.
The emotional benefits of altruism might be a human universal.
One of the most significant findings to have emerged from the sciences of happiness and altruism has been this: Altruism boosts happiness. Spending on others makes us happier than spending on ourselves—at least among the relatively affluent North Americans who have participated in this research. But a paper published in the Journal of Personality and Social Psychology suggested that this finding holds up around the world, even in countries where sharing with others might threaten someone’s own subsistence. In one study, the researchers examined data from more than 200,000 people in 136 countries; they determined that donating to charity in the past month boosts happiness “in most individual countries and all major regions of the world,” cutting across cultures and levels of economic well-being. It was even true regardless of whether someone said they’d had trouble securing food for their family in the past year. When the researchers zeroed in on three countries with vastly different levels of wealth—Canada, Uganda, and India—they found that people reported greater happiness recalling a time when they’d spent money on others than when they’d spent on themselves. And in a study comparing Canada and South Africa, people reported feeling happier after donating to charity than after buying themselves a treat, even though they would never meet the beneficiary of their largess. This suggests to the researchers that their happiness didn’t result from feeling like they were strengthening social connections or improving their reputation but from a deeply ingrained human instinct. In fact, they argue, the nearly universal emotional benefits of altruism suggest it is a product of evolution, perpetuating behavior that “may have carried short-term costs but long-term benefits for survival over human evolutionary history.”
Mindfulness meditation makes people more altruistic—even when confronted with barriers to compassionate action.
In March, the GGSC hosted a conference called “Practicing Mindfulness & Compassion,” where speakers made the case that the practice of mindfulness—the moment-by-moment awareness of our thoughts, feelings, and surroundings—doesn’t just improve our individual health but also makes us more compassionate toward others. Coincidentally, just weeks after the conference, two new studies bolstered this claim. The first study, published in Psychological Science, found that people who took an eight-week mindfulness meditation course were significantly more likely than a control group to give up their waiting-room seat for a person on crutches. This was true despite the fact that other people in the waiting room (who were secretly working with the researchers) didn’t acknowledge the person in need or make any gesture to give up their own seats; prior research suggests that this kind of inaction strongly deters bystanders from helping out, but that wasn’t the case when the bystanders had received training in mindfulness. A few weeks later, another study published in Psychological Science echoed that finding. In this second study, which was unrelated to the first, people who had practiced a mindfulness-based “compassion meditation” for a total of just seven hours over two weeks were significantly more likely than people who hadn’t received the training to give money to a stranger in need. What’s more, after completing their training, the meditation group showed noticeable changes in brain activity, including in networks linked to understanding the suffering of others. “Our findings,” write the authors of the second study, “support the possibility that compassion and altruism can be viewed as trainable skills rather than as stable traits.”
Meditation changes gene expression.
Are genes destiny? They certainly influence our behavior and health outcomes—for example, one study published in 2013 found that genes make some people more inclined to focus on the negative. But more and more research is revealing how it’s a two-way street: Our choices can also influence how our genes behave. In 2013, a collaborative project between researchers in Spain and France and at the University of Wisconsin found that when experienced meditators meditate, they quiet down the genes that express bodily inflammation in response to stress. How did they figure this out? Before and after two different retreat days, the researchers drew blood samples from 19 long-term meditators (averaging more than 6,000 lifetime hours) and 21 inexperienced people. During the retreat, the meditators meditated and discussed the benefits and advantages of meditation; the non-meditators read, played games, and walked around. After this experience, the meditators’ inflammation genes—measured by blood concentrations of enzymes that catalyze or are a byproduct of gene expression—were less active. Blood samples from the people in the leisure-day condition did not show these changes. Why does this matter? The researchers also looked at their study participants’ ability to recover from a stressful event. Long-term meditators’ ability to turn down inflammatory genes, it turns out, predicted how quickly stress hormones in their saliva diminished after a stressful experience—a sign of healthy coping and resilience that can potentially lead to a longer life. This is good news for people who come from families of stress cases and who are stress-prone themselves: There are steps you can take to mitigate the impact of stressful events. Hard as it may be to find time or get excited about meditating, mounting evidence suggests that it can offer more concrete advantages for a healthy life than the leisurely activities we more readily seek.
Mindfulness training improves teachers’ performance in the classroom.
For educators grappling with students’ behavioral problems and other sources of stress, new research suggested an effective response: mindfulness. Although mindfulness-based programs are not uncommon in schools these days, they’ve mainly been deployed to enhance students’ social, emotional, and cognitive skills; only a handful of programs and studies have examined the benefits of mindfulness for teachers, and in those cases, the research has focused largely on the general benefits for teachers’ mental health. But in 2013, researchers at the University of Wisconsin’s Center for Investigating Healthy Minds broke new ground when they studied the impact of an eight-week mindfulness course developed specifically for teachers, looking not only at its effects on the teachers’ emotional well-being and levels of stress but also on their performance in the classroom. They found that teachers randomly assigned to take the course felt less anxious, depressed, and burned out afterward, and felt more compassionate toward themselves. What’s more, according to experts who watched the teachers in action, these teachers ran more productive classrooms after completing the course and improved at managing their students’ behavior as well. The results, published in Mind, Brain, and Education, show that stress and burnout levels actually increased among teachers who didn’t take the course. The researchers speculate that mindfulness may carry these benefits for teachers because it helps them cope with classroom stress and stay focused on their work. “Mindfulness-based practices offer promise as a tool for enhancing teaching quality,” write the researchers, “which may, in turn, promote positive student outcomes and school success.”
There’s nothing simple about happiness.
Who doesn’t want to be happy? Happy is always good, right? Sure. Just don’t be too happy, OK? Because June Gruber and her colleagues analyzed health data and found that it’s much better to be a little bit happy over a long period of time than to experience wild spikes in happiness. Another study, published in the journal Emotion, showed how seeking happiness at the right time may be more important than seeking happiness all the time: allowing yourself to feel emotions appropriate to a situation—whether or not they are pleasant in the moment—is a key to long-lasting happiness. In a study published earlier in the year in the journal Psychological Science, Sonja Lyubomirsky and Kristin Layous found that not all research-approved happiness practices work for everyone all the time. “Let’s say you publish a study that shows being grateful makes you happy—which it does,” Lyubomirsky recently told us. “But, actually, it’s much harder than that. It’s actually very hard to be grateful, and to be grateful on a regular basis, and at the right time, and for the right things.” She continued:
So, for example, some people have a lot of social support, some people have little social support, some people are extroverted, some people are introverted—you have to take into account the happiness seeker before you give them advice about what should make them happy. And then there are factors relevant to the activity that you do. How is it that you’re trying to become happier? How is it that you’re trying to stave off adaptation? Are you trying to appreciate more? Are you trying to do more acts of kindness? Are you trying to savor the moment? The kind of person you are, the different kinds of activities, and how often you do them, and where you do them—these are all going to matter.
The bottom line might be that if happiness were really that simple, we’d all be happy all the time. But we’re not, and that appears to be because there is no rigid formula for happiness. It’s a state that comes and goes in response to how we’re changing and how our world is changing.
Gratitude can save your life.
Or at least help lessen suicidal thoughts, says a study published in the Journal of Research in Personality. Across a four-week period, 209 college students answered questions to measure depression, suicidal thoughts, grit, gratitude, and meaning in life. The idea was to see if the positive traits—grit and gratitude—mitigated the negative ones. Since depression is a large contributing factor to suicide, they controlled for that variable throughout the study. Grit, said the authors, is “characterized by the long-term interests and passions, and willingness to persevere through obstacles and setbacks to make progress toward goals aligned or separate from these passionate pursuits.” It stands to reason that someone with lots of grit wouldn’t waste much time on suicidal thoughts. But what about gratitude? That entails noticing the benefits and gifts received from others, and it gives an individual a sense of belonging. That should make life worth living—and, indeed, the researchers found that gratitude and grit worked synergistically to make life more meaningful and to reduce suicidal thoughts, independent of depression symptoms. As the authors note, their study has huge clinical implications: If therapists can specifically foster gratitude in suicidal people, they should be able to increase their sense that life is worth living. This new finding adds to a pile of new research on the benefits of gratitude. Saying “thanks” can make you happier, sustain your marriage through tough times, reduce envy, and even improve physical health.
Employees are motivated by giving as well as getting.
Over the past two decades, work satisfaction has declined, while time spent at work has significantly increased. Not a good combination! Would paying people more money help? Some studies have shown that rewarding employees for their hard work and late nights at the office with a bonus will make things a little better and quiet dissatisfaction. But in September, through the collaborative research of Lalin Anik, Lara B. Aknin, Michael I. Norton, Elizabeth W. Dunn, and Jordi Quoidbach, we learned that employee bonuses might have the most positive effects when they’re spent on others. The researchers suggested an alternative bonus offer that has the potential to provide some of the same benefits as team-based compensation—increased social support, cohesion, and performance—while carrying fewer drawbacks. Their first experiment focused on broad, self-reported measures of the impact of prosocial bonuses on an employee’s job satisfaction. Participants were either given a bonus to spend on charity or were not given a bonus at all. Those who gave to charities reported increased happiness and job satisfaction. The second experiment was conducted in two parts—both focused on “sports team orientation” by looking at the difference between donating to a charity and giving to a fellow employee—and attempted to see whether these bonuses improved actual performance. In the first part of the experiment, participants were given $20 and told to spend it on a teammate or on themselves over the course of the week. In the second part of the experiment, they were instructed to spend $22 on themselves or on a specified teammate over the course of the week. Both parts found more positive effects for givers than for those who spent the money on themselves. This collaborative research indicates that prosocial bonuses can benefit both individuals and teams, on both psychological and “bottom line” indicators, in both the short and the long term. So when you receive your bonus this year, you might want to think twice before buying that pair of shoes you’ve been dying for; instead, consider spending it on someone else—because, according to this research, you’ll probably be much happier and more satisfied with your job.
Subtle contextual factors influence our sense of right and wrong.
An out-of-control train will kill five people. You can switch the train onto another track and save them—but doing so will kill one person. What should you do? A series of experiments published in the journal Psychological Science suggests that on one day you’ll divert the train and save those five lives—but on another you might not. It all depends on how the dilemma is framed and how we’ve been thinking about ourselves. Through the train dilemma and other experiments, the study revealed two factors that can influence our moral decisions. The first involves how morality has been defined for you, in this case around consequences or rules. For example, when researchers asked participants to think in terms of consequences, some readily diverted the train, thus saving four lives. On the other hand, those who were prompted to think in terms of rules (e.g., “thou shalt not kill”) let the five die. But that factor was influenced by another that depends on memory and whether your past ethical or unethical behavior is on your mind—a memory of a good deed might make you more likely to cheat, for example, if urged to think of consequences. It’s the complex interaction between those two factors that shapes your decision. That wasn’t the only study published during the past year that revealed how susceptible we are to context. One study found that people are more moral in the morning than in the afternoon. Another study, cleverly titled “Hunger Games,” found that when people are hungry, they express more support for charitable giving. Yet another experiment discovered that thinking about money makes you more inclined to cheat at a game—but thinking about time keeps you honest. The bottom line is that our sense of right and wrong is heavily influenced by seemingly trivial variables in memory, in our bodies, and in changes within our environment. This doesn’t necessarily lead us to pessimistic conclusions about humanity—in fact, knowing how our minds work might help us to make better moral decisions.
Anyone can cultivate empathic skills—even psychopaths.
In daily life, calling someone a “psychopath” or a “sociopath” is a way of saying that the person is beyond redemption. Are they? When neuroscientist James Fallon accidentally discovered that his brain resembled that of a psychopath—showing less activity in areas of the frontal lobe linked to empathy—he was confused. After all, Fallon was a happily married man, with a career and good relationships with colleagues. How could he be beyond redemption? Additional genetic tests revealed “high-risk alleles for aggression, violence and low empathy.” What was going on? Fallon decided he was a “pro-social psychopath,” someone whose genetic and neurological inheritance makes it hard for him to feel empathy, but who was gifted with a good upbringing and environment—good enough to overcome latent psychopathic tendencies. This self-description found support in a study published this year by Swiss and German researchers, which showed education levels and “social desirability” seemed to improve empathy in diagnosed psychopaths. Another new study found that empathy deficits don’t necessarily lead to aggression. It seems that psychopaths can be taught to feel empathy and compassion, though they have a disability that makes developing those skills difficult. When a team of researchers looked at the brain activity of psychopathic criminals in the Netherlands, for example, they discovered the predictable empathic deficits. But they also found that it made a difference in their brains to simply ask the criminals to empathize with others—hinting that empathy may be repressed rather than missing entirely in people classified as psychopaths. For some, at least, it may help a great deal to lift that repression. Psychopathy remains an intractable mental illness and social problem—this year’s studies of treatment did not reveal a magic bullet that would turn psychopaths into angels. But we can take heart in the fact that if they can develop empathic skills, anyone can.
"The idea of "being present in the moment" sounds right but can be a
little elusive and frustrating when seeking to apply it. It can come
across as though one should stop and have some sort of deep or spiritual
experience. Instead, consider the possibility that the spiritual life
is simply responding to situations as they require. If you need to walk
from your kitchen to your bedroom, it's not necessary to stop at each
step and "be present in the moment" and have a "spiritual
experience." Life itself is spiritual and no moment needs you to do
anything to add the spirituality to it. There are some moments, such as
catching a beautiful sunset, when you experience deep feelings and feel a
greater connection to God and life. But do not suppose that such a
moment is more "spiritual" than walking from your kitchen to the
bedroom. It's only that the two situations were different, inviting two
different responses. Your life is your spiritual path... every part of
it."
For 4.6 billion years our living planet has been alone in a vast and
silent universe. But soon, Earth’s isolation could come to an end. Over
the past two decades, astronomers have discovered thousands of
planets orbiting other stars. Some of these exoplanets may be mirror
images of our own world. And more are being found all the time.
Yet as the pace of discovery quickens, an answer to the universe’s
greatest riddle still remains just out of reach: Is the great silence
and emptiness of the cosmos a sign that we and our world are somehow
singular, special, and profoundly alone, or does it just mean that we’re
looking for life in all the wrong places? As star-gazing scientists
come closer to learning the truth, their insights are proving ever more
crucial to understanding life’s intricate mysteries and
possibilities right here on Earth.
Science journalist
Lee Billings explores the past and future of the “exoplanet boom”
through in-depth reporting and interviews with the astronomers and planetary scientists at its forefront. He recounts the stories behind
their world-changing discoveries and captures the pivotal moments that
drove them forward in their historic search for the first
habitable planets beyond our solar system. Billings brings readers close
to a wide range of fascinating characters, such as:
FRANK DRAKE,
a pioneer who has used the world’s greatest radio telescopes to conduct
the first searches for extraterrestrial intelligence and to transmit a
message to the stars so powerful that it briefly outshone our Sun.
JIM KASTING,
a mild-mannered former NASA scientist whose research into the Earth’s
atmosphere and climate reveals the deepest foundations of life on our
planet, foretells the end of life on Earth in the distant future, and
guides the planet hunters in their search for alien life.
SARA SEAGER,
a visionary and iron-willed MIT professor who dreams of escaping the
solar system and building the giant space telescopes required to
discover and study life-bearing planets around hundreds of the Sun’s
neighboring stars. Through these and other captivating
tales, Billings traces the triumphs, tragedies, and betrayals of
the extraordinary men and women seeking life among the stars. In spite
of insufficient funding, clashing opinions, and the failings of some of
our world’s most prominent and powerful scientific organizations, these
planet hunters will not rest until they find the meaning of life in
the infinite depths of space. Billings emphasizes that the heroic quest
for other Earth-like planets is not only a scientific pursuit, but
also a reflection of our own culture’s timeless hopes and fears.
Billings discussed his new book a few days ago at Google.
Since its formation nearly five billion years ago, our planet has been the sole living world in a vast and silent universe. Now, Earth's isolation is coming to an end. Over the past two decades, astronomers have discovered thousands of "exoplanets" orbiting other stars, including some that could be similar to our own world. Studying those distant planets for signs of life will be crucial to understanding life's intricate mysteries right here on Earth. In a firsthand account of this unfolding revolution, Lee Billings draws on interviews with top researchers. He reveals how the search for other Earth-like planets is not only a scientific pursuit, but also a reflection of our culture's timeless hopes, dreams, and fears.
In anticipation of his new book, The Trauma of Everyday Life, being released this week (Aug. 15), Mark Epstein had an interesting column in the New York Times a week ago. His point here, and I would assume in the book, as well, is that being alive entails the experience of trauma in some form or another (whether big T traumas like death, natural disasters, rape, being a refugee, and so on; or small t traumas such as neglect, bullying, social isolation, and so on).
An undercurrent of trauma runs through ordinary life, shot through as it is with the poignancy of impermanence. I like to say that if we are not suffering from post-traumatic stress disorder, we are suffering from pre-traumatic stress disorder. There is no way to be alive without being conscious of the potential for disaster. One way or another, death (and its cousins: old age, illness, accidents, separation and loss) hangs over all of us. Nobody is immune. Our world is unstable and unpredictable, and operates, to a great degree and despite incredible scientific advancement, outside our ability to control it.
I suppose there is truth in this - and a little wisdom. On the other hand, I can see some people becoming frozen in their fear of the next traumatic experience. I see two solutions to this awareness: (1) Allow our brains to do what they do well, keep such awareness below the threshold of consciousness, or (2) Embrace the knowledge that any moment could be our last and carpe diem.
TALKING with my 88-year-old mother, four and a half years after my father died from a brain tumor, I was surprised to hear her questioning herself. “You’d think I would be over it by now,” she said, speaking of the pain of losing my father, her husband of almost 60 years. “It’s been more than four years, and I’m still upset.”
I’m not sure if I became a psychiatrist because my mother liked to talk to me in this way when I was young or if she talks to me this way now because I became a psychiatrist, but I was pleased to have this conversation with her. Grief needs to be talked about. When it is held too privately it tends to eat away at its own support. “Trauma never goes away completely,” I responded. “It changes perhaps, softens some with time, but never completely goes away. What makes you think you should be completely over it? I don’t think it works that way.” There was a palpable sense of relief as my mother considered my opinion. “I don’t have to feel guilty that I’m not over it?” she asked. “It took 10 years after my first husband died,” she remembered suddenly, thinking back to her college sweetheart, to his sudden death from a heart condition when she was in her mid-20s, a few years before she met my father. “I guess I could give myself a break.” I never knew about my mother’s first husband until I was playing Scrabble one day when I was 10 or 11 and opened her weather-beaten copy of Webster’s Dictionary to look up a word. There, on the inside of the front cover, in her handwriting, was her name inscribed in black ink. Only it wasn’t her current name (and it wasn’t her maiden name). It was another, unfamiliar name, not Sherrie Epstein but Sherrie Steinbach: an alternative version of my mother at once entirely familiar (in her distinctive hand) and utterly alien. “What’s this?” I remember asking her, holding up the faded blue dictionary, and the story came tumbling out. It was rarely spoken of thereafter, at least until my father died half a century later, at which point my mother began to bring it up, this time of her own volition. I’m not sure that the trauma of her first husband’s death had ever completely disappeared; it seemed to be surfacing again in the context of my father’s death. Trauma is not just the result of major disasters. It does not happen to only some people. An undercurrent of trauma runs through ordinary life, shot through as it is with the poignancy of impermanence. I like to say that if we are not suffering from post-traumatic stress disorder, we are suffering from pre-traumatic stress disorder. There is no way to be alive without being conscious of the potential for disaster. One way or another, death (and its cousins: old age, illness, accidents, separation and loss) hangs over all of us. Nobody is immune. Our world is unstable and unpredictable, and operates, to a great degree and despite incredible scientific advancement, outside our ability to control it. My response to my mother — that trauma never goes away completely — points to something I have learned through my years as a psychiatrist. In resisting trauma and in defending ourselves from feeling its full impact, we deprive ourselves of its truth. As a therapist, I can testify to how difficult it can be to acknowledge one’s distress and to admit one’s vulnerability. My mother’s knee-jerk reaction, “Shouldn’t I be over this by now?” is very common. There is a rush to normal in many of us that closes us off, not only to the depth of our own suffering but also, as a consequence, to the suffering of others. When disasters strike we may have an immediate empathic response, but underneath we are often conditioned to believe that “normal” is where we all should be. The victims of the Boston Marathon bombings will take years to recover. Soldiers returning from war carry their battlefield experiences within. 
Can we, as a community, keep these people in our hearts for years? Or will we move on, expecting them to move on, the way the father of one of my friends expected his 4-year-old son — my friend — to move on after his mother killed herself, telling him one morning that she was gone and never mentioning her again?
IN 1969, after working with terminally ill patients, the Swiss psychiatrist Elisabeth Kübler-Ross brought the trauma of death out of the closet with the publication of her groundbreaking work, “On Death and Dying.” She outlined a five-stage model of grief: denial, anger, bargaining, depression, acceptance. Her work was radical at the time. It made death a normal topic of conversation, but had the inadvertent effect of making people feel, as my mother did, that grief was something to do right.
Mourning, however, has no timetable. Grief is not the same for everyone. And it does not always go away. The closest one can find to a consensus about it among today’s therapists is the conviction that the healthiest way to deal with trauma is to lean into it, rather than try to keep it at bay. The reflexive rush to normal is counterproductive. In the attempt to fit in, to be normal, the traumatized person (and this is most of us) feels estranged.
While we are accustomed to thinking of trauma as the inevitable result of a major cataclysm, daily life is filled with endless little traumas. Things break. People hurt our feelings. Ticks carry Lyme disease. Pets die. Friends get sick and even die.
“They’re shooting at our regiment now,” a 60-year-old friend said the other day as he recounted the various illnesses of his closest acquaintances. “We’re the ones coming over the hill.” He was right, but the traumatic underpinnings of life are not specific to any generation. The first day of school and the first day in an assisted-living facility are remarkably similar. Separation and loss touch everyone.
I was surprised when my mother mentioned that it had taken her 10 years to recover from her first husband’s death. That would have made me 6 or 7, I thought to myself, by the time she began to feel better. My father, while a compassionate physician, had not wanted to deal with that aspect of my mother’s history. When she married him, she gave her previous wedding’s photographs to her sister to hold for her. I never knew about them or thought to ask about them, but after my father died, my mother was suddenly very open about this hidden period in her life. It had been lying in wait, rarely spoken of, for 60 years.
My mother was putting herself under the same pressure in dealing with my father’s death as she had when her first husband died. The earlier trauma was conditioning the later one, and the difficulties were only getting compounded. I was glad to be a psychiatrist and grateful for my Buddhist inclinations when speaking with her. I could offer her something beyond the blandishments of the rush to normal.
The willingness to face traumas — be they large, small, primitive or fresh — is the key to healing from them. They may never disappear in the way we think they should, but maybe they don’t need to. Trauma is an ineradicable aspect of life. We are human as a result of it, not in spite of it.
~ Mark Epstein is a psychiatrist and the author, most recently, of the forthcoming book “The Trauma of Everyday Life.”
Over at the IEET site (Institute for Ethics and Emerging Technologies), John Danaher has started a series of posts on uploading human minds into machines (computers). Danaher is riffing on an article by Michael Hauskeller, entitled "My Brain, My Mind, and I: Some Philosophical Assumptions of Mind-Uploading" (International Journal of Machine Consciousness, Vol. 4, No. 1 (2012): 187-200; DOI: 10.1142/S1793843012400100).
Here is one section of Hauskeller's paper, specifically chosen for its hyperbole and anti-flesh perspective:
2. Messy Bodies
What we witness here is what is often described as an increasing cyborgization of the human, where ‘cyborg' can be defined as a human being some of whose parts are artificial. In light of these developments it may appear not unreasonable to expect that this is only the beginning and we will progress further until we have achieved the goal that is implicitly pursued in all those innovations that couple human beings with fast-paced hyper-technology: complete independence from nature, unrestricted autonomy. For as long as we are hooked to this organic body, we will never be entirely free and safe. The organic body is a limitation that is resented by many, and that they hope we will be able to overcome not too far in the future. "Soon we could be meshing our brains to computers, living, for all practical purposes, on an "immortal" substrate, perhaps eventually discarding our messy, aging, flesh-and-bones body altogether". [Klein, 2003] The human body is not only regarded as dispensable; it is an obstacle, an enemy to be fought and to get rid of. It ages and makes us age with it, eventually annihilating us. It is "messy", disorderly and dirty; it brings chaos and decay into our lives. "Flesh-and-bones" is a material that is deemed unsuitable for an advanced, dignified, enlightened and happy existence. So let's abandon it if we can. Good riddance to bad rubbish! "If humans can merge their minds with computers, why would they not discard the human form and become an immortal being?" [Paul and Cox, 1996, 21].
Yet in order to become truly immortal, our goal should be to become a "cyberbeing", a being that is more than just interlinked with machines, more than just partly a machine itself, and even more than a machine in its entirety. Gradually replacing human biology and the messy organic body by a more durable and more controllable substrate is certainly a considerable improvement, but it is by no means sufficient. Why not go a step further and, if at all possible, discard the physical body altogether? That is, any particular body, any body that is essentially and not merely accidentally ours, not only something we use and can discard when proved not useful enough or no longer useful, but rather something that defines our very existence and has, as it were, pretensions of being us. In other words, why not relocate and transform our existence in such a way that we are no longer bound to any particular material substrate, be it organic or non-organic, because all we need, if anything at all, is the occasional body to-go as a communication facilitator, a hardware on which to run the program which we then will be [Moravec, 1989]. "Imagine yourself a virtual living being with senses, emotions, and a consciousness that makes our current human form seem a dim state of antiquated existence. Of being free, always free, of physical pain, able to repair any damage and with a downloaded mind that never dies". [Paul and Cox, 1996, xv] The telos, the logical end point, of the ongoing cyborgization of the human is thus the attainment of "digital immortality", which is more than just "a radical new form of human enhancement" [Sandberg and Bostrom, 2008, 5]. Rather, the desire to conquer death, that "greatest evil" [More, 1990], is its secret heart, that which gives the demands for radical human enhancement their moral urgency. And the best chance to attain what we desire is through the as yet still theoretical possibility of mind-uploading.
If these paragraphs seem over the top, it's because they are. Hauskeller appears to be mocking some of the beliefs of the transhumanist camp. He is a believer in the situated self: the self as a product of its body-brain, its experiences, its cultural and environmental embeddedness, and its relationships with others (or maybe I am reading my own views into his) - its situation in temporal reality.
The brain is only one of our organs (albeit a very important one), that is, an instrument that we use in order to accomplish certain tasks in accordance with our general desire to survive in this world. My brain is situated in a body, as is my mind, which is one of my modes of existence, no more and no less. Although, let's face it, we do not have the slightest clue how conscious experience comes about and how there can be such things as selves in the first place, it is rather unlikely that mind and self are directly produced by the brain, as is commonly assumed. There is no direct evidence for that. The brain develops and changes with the experience we accumulate during our lives, and it does so because it has a particular job to do within the system that we call a living, conscious being. It rises to the occasion. That we can manipulate the mind by manipulating the brain, and that damages to our brains tend to inhibit the normal functioning of our minds, does not show that the mind is a product of what the brain does. The brain could be just a facilitator. When we look through a window and the window is then painted black, our vision is destroyed or prevented, but we cannot infer from this that the window produces our ability to see. The brain might be like a window to the mind. Surely the mind is not in any clear sense localized in the brain. Alva Noe is right when he declares the locus of consciousness to be "the dynamic life of the whole, environmentally plugged-in person or animal" [Noe, 2009, xiii] We are not our brains, we are "out of our heads", as Noe puts it, reaching out to the world as "distributed, dynamically spread-out, world-involving beings". [Noe, 2009, 82]
Suffice it to say that I am more in line with the views of Hauskeller than with those of Danaher, who, in the article below, attempts to rebut or dismiss objections to the proposition of mind-uploading.
A lot of people would like to live forever, or at least for much longer than they currently do. But there is one obvious impediment to this: our biological bodies break down over time and cannot (with current technologies) be sustained indefinitely. So what can be done to avoid our seemingly inevitable demise? For some, like Aubrey de Grey, the answer lies in tweaking and re-engineering our biological bodies. For others, the answer lies in the more radical solution of mind-uploading, or the technological replacement of our current biological bodies.
This solution holds a lot of promise. We already replace various body parts with artificial analogues: artificial limbs, organs, and sensory aids (including, more recently, things like artificial retinas and cochlear implants). These artificial analogues are typically more sustainable, either through ongoing care and maintenance or through renewal and replacement, than their biological equivalents. So why not go the whole hog? Why not replace every body part, including the brain, with some technological equivalent?
That is the question at the heart of Michael Hauskeller’s article “My Brain, My Mind, and I: Some Philosophical Assumptions of Mind Uploading”. The paper offers a sceptical look at some of the assumptions underlying the whole notion of mind-uploading. In this post and the next, I’m going to run through some of Hauskeller’s arguments. In the remainder of this post, I’ll try to do two things. First, I’ll look to clarify what is meant by “mind-uploading” and what we would be trying to achieve by doing it. Second, I’ll introduce the basic argument in favour of mind-uploading, the argument from functionalism, and note some obvious objections to it. This series of posts is probably best read in conjunction with my earlier series on Nicholas Agar’s argument against uploading. That series looked at mind-uploading from a decision-theoretic perspective, and offers what is, to my mind, the most persuasive objection to mind-uploading (though, I hasten to add, I’m not sure that it is overwhelmingly persuasive). Hauskeller’s arguments are more general and conceptual. Indeed, he repeatedly relies on the view that the concerns he raises are conceivable, and worth bearing in mind for that reason, and doesn’t take the further step of arguing that they are possible or probable. If you are more interested in whether you should go for mind-uploading or not, I think the concerns raised by Hauskeller are possibly best fed back into Agar’s decision-theoretic framework. Still, for the pure philosophers out there — those deeply concerned with metaphysical questions of mind and identity — there is much to grapple with in Hauskeller’s paper.
1. What are we talking about and why?
In my introduction, I noted the obvious link between mind uploading and the quest for life extension. That’s probably enough to pique people’s curiosity, but if we are going to assess mind uploading in a serious way we need to clarify three important issues. First up, we need to clarify exactly what it is we wish to preserve or prolong through mind-uploading. I think the answer is pretty obvious: we want to preserve ourselves (our selves), where this is defined in terms of Lockean personhood. In other words, I would say that the essence of our existence consists in the fact that we are continuing subjects of experience. That is to say, we are sentient, self-aware, and aware of our continuing sentience over time (even after occasional bouts of unconsciousness). If we are not preserved as Lockean persons through mind-uploading, then I would suggest that there is very little to be said for it from our perspective (there may be other things to be said for it).
One important thing to note here is that Lockean personhood allows for great change over time. I may have a very different set of characteristics and traits now than I did when I was five years old. That’s fine. What matters is that there is a continuing and overlapping stream of consciousness between my five-year-old self and my current self. For ease of reference, I’ll refer to the claim that mind-uploading leads to the preservation and prolongation of the Lockean person as the “Mind-Uploading Thesis” (MUT).
The second thing we need to do is to clarify what we actually mean by mind-uploading. In his article, Hauskeller adopts a definition from Adam Kadmon, according to which mind-uploading is the “transfer of the brain’s mindpattern onto a different substrate”. In other words, your brain processes are modelled and then transferred from their current biological neuronal substrate, to a different substrate. This could be anything from a classic digital computer, to a device that uses artificial neurons that directly mirror and replicate the brain’s current processes. Hopefully, that is a reasonably straightforward idea. More important than the basic idea of uploading is the actual method through which it is achieved. Although there may be many such methods, for present purposes two are important:
Gradual Uploading/Replacement: The parts of the brain are gradually replaced by functionally equivalent artificial analogues. Although the original brain is, by the end of this process, destroyed, there is no precise moment at which the biological brain ceases to be and the artificial one begins. Instead, there is a step-by-step progression from wholly biological to wholly artificial.
Discontinuous Uploading/Replacement: The brain is scanned, copied and then emulated in some digital or artificial medium, following which the original brain is destroyed. There is no gradual replacement of the parts of the biological brain.
There may be significant differences between the two kinds of uploading, and these differences may have philosophical repercussions. I suspect the latter, rather than the former, is what most people have in mind when they think about uploading, but I could be wrong. Finally, in addition to clarifying the means through which uploading is achieved, we need to clarify the kinds of existence one might have in the digital or artificial form. There are many elaborate possibilities explored in the sci-fi literature, and I would encourage people to check some of these out, but again for present purposes, I’ll limit the focus to two broad kinds of existence, with intermediate kinds obviously also possible:
Wholly Virtual Existence: Once transferred to an artificial medium, the mind ceases to interact directly with the external world (though obviously it relies on that world for some support) and instead lives in a virtual reality, with perhaps occasional communication with the external world.
Non-virtual Existence: Once transferred to an artificial medium, the mind continues to interact directly with the external world through some set of actuators (i.e. tools for bringing about changes in the external world). These might directly replicate the human body, or involve superhuman “bodies”.
An added complication here comes in the shape of multiple copies of the same brain living out different existences in different virtual and non-virtual worlds. This should probably be factored into any complete account of mind-uploading. For an interesting fictional exploration of the idea of virtual existence with multiple copies, I would recommend Greg Egan’s book Permutation City. Anyway, with those clarifications out of the way, we can move on to discuss the arguments for and against the MUT.
Read the whole article, and stay tuned for future installments in this series by Danaher.