Behind that ego is what Ken Wilber calls the anterior self - "The anterior self is a person’s sense of the Witness, the pure Self, or “I-I,” shining through the proximate self at whatever stage of self-development." Few people ever get in touch with that deeper sense of self, but the Big Mind/Big Heart process can help with that - as long as you realize you are not experiencing "enlightenment."
Paul Bloom, writing for The Atlantic, takes a stab at explaining this view of psychology in First Person Plural, a good article for those who are new to the idea.
An evolving approach to the science of pleasure suggests that each of us contains multiple selves—all with different desires, and all fighting for control. If this is right, the pursuit of happiness becomes even trickier. Can one self "bind" another self if the two want different things? Are you always better off when a Good Self wins? And should outsiders, such as employers and policy makers, get into the fray?
First Person Plural
by Paul Bloom
Imagine a long, terrible dental procedure. You are rigid in the chair, hands clenched, soaked with sweat—and then the dentist leans over and says, “We’re done now. You can go home. But if you want, I’d be happy to top you off with a few minutes of mild pain.”
There is a good argument for saying “Yes. Please do.”

The psychologist and recent Nobel laureate Daniel Kahneman conducted a series of studies on the memory of painful events, such as colonoscopies. He discovered that when we think back on these events, we are influenced by the intensity of the endings, and so we have a more positive memory of an experience that ends with mild pain than of one that ends with extreme pain, even if the mild pain is added to the same amount of extreme pain. At the moment the dentist makes his offer, you would, of course, want to say no—but later on, you would be better off if you had said yes, because your overall memory of the event wouldn’t be as unpleasant.
Such contradictions arise all the time. If you ask people which makes them happier, work or vacation, they will remind you that they work for money and spend the money on vacations. But if you give them a beeper that goes off at random times, and ask them to record their activity and mood each time they hear a beep, you’ll likely find that they are happier at work. Work is often engaging and social; vacations are often boring and stressful. Similarly, if you ask people about their greatest happiness in life, more than a third mention their children or grandchildren, but when they use a diary to record their happiness, it turns out that taking care of the kids is a downer—parenting ranks just a bit higher than housework, and falls below sex, socializing with friends, watching TV, praying, eating, and cooking.
The question “What makes people happy?” has been around forever, but there is a new approach to the science of pleasure, one that draws on recent work in psychology, philosophy, economics, neuroscience, and emerging fields such as neuroeconomics. This work has led to new ways—everything from beepers and diaries to brain scans—to explore the emotional value of different experiences, and has given us some surprising insights about the conditions that result in satisfaction.
But what’s more exciting, I think, is the emergence of a different perspective on happiness itself. We used to think that the hard part of the question “How can I be happy?” had to do with nailing down the definition of happy. But it may have more to do with the definition of I. Many researchers now believe, to varying degrees, that each of us is a community of competing selves, with the happiness of one often causing the misery of another. This theory might explain certain puzzles of everyday life, such as why addictions and compulsions are so hard to shake off, and why we insist on spending so much of our lives in worlds—like TV shows and novels and virtual-reality experiences—that don’t actually exist. And it provides a useful framework for thinking about the increasingly popular position that people would be better off if governments and businesses helped them inhibit certain gut feelings and emotional reactions.
Like any organ, the brain consists of large parts (such as the hippocampus and the cortex) that are made up of small parts (such as “maps” in the visual cortex), which themselves are made up of smaller parts, until you get to neurons, billions of them, whose orchestrated firing is the stuff of thought. The neurons are made up of parts like axons and dendrites, which are made up of smaller parts like terminal buttons and receptor sites, which are made up of molecules, and so on.
This hierarchical structure makes possible the research programs of psychology and neuroscience. The idea is that interesting properties of the whole (intelligence, decision-making, emotions, moral sensibility) can be understood in terms of the interaction of components that themselves lack these properties. This is how computers work; there is every reason to believe that this is how we work, too.
But there is no consensus about the broader implications of this scientific approach. Some scholars argue that although the brain might contain neural subsystems, or modules, specialized for tasks like recognizing faces and understanding language, it also contains a part that constitutes a person, a self: the chief executive of all the subsystems. As the philosopher Jerry Fodor once put it, “If, in short, there is a community of computers living in my head, there had also better be somebody who is in charge; and, by God, it had better be me.”
More-radical scholars insist that an inherent clash exists between science and our long-held conceptions about consciousness and moral agency: if you accept that our brains are a myriad of smaller components, you must reject such notions as character, praise, blame, and free will. Perhaps the very notion that there are such things as selves—individuals who persist over time—needs to be rejected as well.
The view I’m interested in falls between these extremes. It is conservative in that it accepts that brains give rise to selves that last over time, plan for the future, and so on. But it is radical in that it gives up the idea that there is just one self per head. The idea is that instead, within each brain, different selves are continually popping in and out of existence. They have different desires, and they fight for control—bargaining with, deceiving, and plotting against one another.
The notion of different selves within a single person is not new. It can be found in Plato, and it was nicely articulated by the 18th-century Scottish philosopher David Hume, who wrote, “I cannot compare the soul more properly to any thing than to a republic or commonwealth, in which the several members are united by the reciprocal ties of government and subordination.” Walt Whitman gave us a pithier version: “I am large, I contain multitudes.”
The economist Thomas Schelling, another Nobel laureate, illustrates the concept with a simple story:

As a boy I saw a movie about Admiral Byrd’s Antarctic expedition and was impressed that as a boy he had gone outdoors in shirtsleeves to toughen himself against the cold. I resolved to go to bed at night with one blanket too few. That decision to go to bed minus one blanket was made by a warm boy; another boy awoke cold in the night, too cold to retrieve the blanket … and resolving to restore it tomorrow. The next bedtime it was the warm boy again, dreaming of Antarctica, who got to make the decision, and he always did it again.
Examples abound in our own lives. Late at night, when deciding not to bother setting up the coffee machine for the next morning, I sometimes think of the man who will wake up as a different person, and wonder, What did he ever do for me? When I get up and there’s no coffee ready, I curse the lazy bastard who shirked his duties the night before.
But anyone tempted by this theory has to admit just how wrong it feels, how poorly it fits with most of our experience. In the main, we do think of ourselves as singular individuals who persist over time. If I were to learn that I was going to be tortured tomorrow morning, my reaction would be terror, not sympathy for the poor guy who will be living in my body then. If I do something terrible now, I will later feel guilt and shame, not anger at some other person.
It could hardly be otherwise. Our brains have evolved to protect our bodies and guide them to reproduce, hence our minds must be sensitive to maintaining the needs of the continuing body—my children today will be my children tomorrow; if you wronged me yesterday, I should be wary of you today. Society and human relationships would be impossible without this form of continuity. Anyone who could convince himself that the person who will wake up in his bed tomorrow is really someone different would lack the capacity for sustained self-interest; he would feel no long-term guilt, love, shame, or pride.
The multiplicity of selves becomes more intuitive as the time span increases. Social psychologists have found certain differences in how we think of ourselves versus how we think of other people—for instance, we tend to attribute our own bad behavior to unfortunate circumstances, and the bad behavior of others to their nature. But these biases diminish when we think of distant past selves or distant future selves; we see such selves the way we see other people. Although it might be hard to think about the person who will occupy your body tomorrow morning as someone other than you, it is not hard at all to think that way about the person who will occupy your body 20 years from now. This may be one reason why many young people are indifferent about saving for retirement; they feel as if they would be giving up their money to an elderly stranger.
Go read the rest of the article.
Here's the deal - multiplicity lies on the dissociative spectrum, with DID (Dissociative Identity Disorder) at one end and the "normal" adult at the other. In DID, the various parts are completely independent of one another and often unaware of one another's existence. At the normal end of the spectrum, we seldom realize we have distinct parts hidden beneath the surface of our seemingly singular self.
The article does a fair job of explaining this, but a little clarification seemed useful.