Saturday, June 14, 2014
2014 – The Year in Books (so far)
Halfway through the year, almost, and there have already been some seriously good books published that will appear on a lot of top-ten lists in December. Some of those books are below, but there are also a lot of books here that no one will have heard of outside of their respective fields: books from academic publishers or other sources not likely to be found at your local bookstores.
Below is a list of the books I have picked up this year (which is not likely to be very mainstream), and I am including the publishers' ad copy for their books. I would love to review each of these, but I seriously do not have that kind of time. Perhaps, if time allows, I will offer some individual reviews of a few of these books.
Jeremy Rifkin – The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism
In The Zero Marginal Cost Society, New York Times bestselling author Jeremy Rifkin describes how the emerging Internet of Things is speeding us to an era of nearly free goods and services, precipitating the meteoric rise of a global Collaborative Commons and the eclipse of capitalism.
Rifkin uncovers a paradox at the heart of capitalism that has propelled it to greatness but is now taking it to its death—the inherent entrepreneurial dynamism of competitive markets that drives productivity up and marginal costs down, enabling businesses to reduce the price of their goods and services in order to win over consumers and market share. (Marginal cost is the cost of producing additional units of a good or service, if fixed costs are not counted.) While economists have always welcomed a reduction in marginal cost, they never anticipated the possibility of a technological revolution that might bring marginal costs to near zero, making goods and services priceless, nearly free, and abundant, and no longer subject to market forces.
Now, a formidable new technology infrastructure—the Internet of things (IoT)—is emerging with the potential of pushing large segments of economic life to near zero marginal cost in the years ahead. Rifkin describes how the Communication Internet is converging with a nascent Energy Internet and Logistics Internet to create a new technology platform that connects everything and everyone. Billions of sensors are being attached to natural resources, production lines, the electricity grid, logistics networks, recycling flows, and implanted in homes, offices, stores, vehicles, and even human beings, feeding Big Data into an IoT global neural network. Prosumers can connect to the network and use Big Data, analytics, and algorithms to accelerate efficiency, dramatically increase productivity, and lower the marginal cost of producing and sharing a wide range of products and services to near zero, just like they now do with information goods.
The plummeting of marginal costs is spawning a hybrid economy—part capitalist market and part Collaborative Commons—with far reaching implications for society, according to Rifkin. Hundreds of millions of people are already transferring parts of their economic lives to the global Collaborative Commons. Prosumers are plugging into the fledgling IoT and making and sharing their own information, entertainment, green energy, and 3D-printed products at near zero marginal cost. They are also sharing cars, homes, clothes and other items via social media sites, rentals, redistribution clubs, and cooperatives at low or near zero marginal cost. Students are enrolling in free massive open online courses (MOOCs) that operate at near zero marginal cost. Social entrepreneurs are even bypassing the banking establishment and using crowdfunding to finance startup businesses as well as creating alternative currencies in the fledgling sharing economy. In this new world, social capital is as important as financial capital, access trumps ownership, sustainability supersedes consumerism, cooperation ousts competition, and “exchange value” in the capitalist marketplace is increasingly replaced by “sharable value” on the Collaborative Commons.
Rifkin concludes that capitalism will remain with us, albeit in an increasingly streamlined role, primarily as an aggregator of network services and solutions, allowing it to flourish as a powerful niche player in the coming era. We are, however, says Rifkin, entering a world beyond markets where we are learning how to live together in an increasingly interdependent global Collaborative Commons.
Michio Kaku – The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind
The New York Times best-selling author of PHYSICS OF THE IMPOSSIBLE, PHYSICS OF THE FUTURE and HYPERSPACE tackles the most fascinating and complex object in the known universe: the human brain. For the first time in history, the secrets of the living brain are being revealed by a battery of high tech brain scans devised by physicists. Now what was once solely the province of science fiction has become a startling reality. Recording memories, telepathy, videotaping our dreams, mind control, avatars, and telekinesis are not only possible; they already exist.
THE FUTURE OF THE MIND gives us an authoritative and compelling look at the astonishing research being done in top laboratories around the world—all based on the latest advancements in neuroscience and physics. One day we might have a "smart pill" that can enhance our cognition; be able to upload our brain to a computer, neuron for neuron; send thoughts and emotions around the world on a "brain-net"; control computers and robots with our mind; push the very limits of immortality; and perhaps even send our consciousness across the universe.
Dr. Kaku takes us on a grand tour of what the future might hold, giving us not only a solid sense of how the brain functions but also how these technologies will change our daily lives. He even presents a radically new way to think about "consciousness" and applies it to provide fresh insight into mental illness, artificial intelligence and alien consciousness.
With Dr. Kaku's deep understanding of modern science and keen eye for future developments, THE FUTURE OF THE MIND is a scientific tour de force--an extraordinary, mind-boggling exploration of the frontiers of neuroscience.
Peter Zachar – A Metaphysics of Psychopathology (Philosophical Psychopathology)
In psychiatry, few question the legitimacy of asking whether a given psychiatric disorder is real; similarly, in psychology, scholars debate the reality of such theoretical entities as general intelligence, superegos, and personality traits. And yet in both disciplines, little thought is given to what is meant by the rather abstract philosophical concept of "real." Indeed, certain psychiatric disorders have passed from real to imaginary (as in the case of multiple personality disorder) and from imaginary to real (as in the case of post-traumatic stress disorder). In this book, Peter Zachar considers such terms as "real" and "reality" -- invoked in psychiatry but often obscure and remote from their instances -- as abstract philosophical concepts. He then examines the implications of his approach for psychiatric classification and psychopathology. Proposing what he calls a scientifically inspired pragmatism, Zachar considers such topics as the essentialist bias, diagnostic literalism, and the concepts of natural kind and social construct. Turning explicitly to psychiatric topics, he proposes a new model for the domain of psychiatric disorders, the "imperfect community" model, which avoids both relativism and essentialism. He uses this model to understand such recent controversies as the attempt to eliminate narcissistic personality disorder from the DSM-5. Returning to such concepts as real, true, and objective, Zachar argues that not only should we use these metaphysical concepts to think philosophically about other concepts, we should think philosophically about them.
Stephen Finlay – Confusion of Tongues: A Theory of Normative Language (Oxford Moral Theory)
Can normative words like "good," "ought," and "reason" be defined in entirely non-normative terms? Confusion of Tongues argues that they can, advancing a new End-Relational theory of the meaning of this language as providing the best explanation of the many different ways it is ordinarily used. Philosophers widely maintain that analyzing normative language as describing facts about relations cannot account for special features of particularly moral and deliberative uses of normative language, but Stephen Finlay argues that the End-Relational theory systematically explains these on the basis of a single fundamental principle of conversational pragmatics. These challenges comprise the central problems of metaethics, including the connection between normative judgment and motivation, the categorical character of morality, the nature of intrinsic value, and the possibility of normative disagreement. Finlay's linguistic analysis has deep implications for the metaphysics, epistemology, and psychology of morality, as well as for the nature and possibility of normative ethical theory. Most significantly it supplies a nuanced answer to the ancient Euthyphro Question of whether we desire things because we judge them good, or vice versa. Normative speech and thought may ultimately be just a manifestation of our nature as intelligent animals motivated by contingent desires for various conflicting ends.
Howard Rachlin – The Escape of the Mind
The Escape of the Mind is part of a current movement in psychology and philosophy of mind that calls into question what is perhaps our most basic, most cherished, and universally accepted belief--that our minds are inside of our bodies. Howard Rachlin adopts the counterintuitive position that our minds, conscious and unconscious, lie not where our firmest (yet unsupported) introspections tell us they are, but in how we actually behave over the long run. Perhaps paradoxically, the book argues that our introspections, no matter how positive we are about them, tell us absolutely nothing about our minds. The name of the present version of this approach to the mind is "teleological behaviorism."
The approaches of teleological behaviorism will be useful in the science of individual behavior for developing methods of self-control and in the science of social behavior for developing social cooperation. Without in any way denigrating the many contributions of neuroscience to human welfare, The Escape of the Mind argues that neuroscience, like introspection, is not a royal road to the understanding of the mind. Where then should we look to explain a present act that is clearly caused by the mind? Teleological behaviorism says to look not in the spatial recesses of the nervous system (not to the mechanism underlying the act) but in the temporal recesses of past and future overt behavior (to the pattern of which the act is a part). But scientific usefulness is not the only reason for adopting teleological behaviorism. The final two chapters on IBM's computer, Watson (how it deviates from humanity and how it would have to be altered to make it human), and on shaping a coherent self, provide a framework for a secular morality based on teleological behaviorism.
Robert J. Wicks – Perspective: The Calm Within the Storm
For generations, classic wisdom literature has taught that a healthy perspective can replenish our thirst for a meaningful and rewarding life. From its inception clinical psychology has followed suit, revealing that how we see ourselves and the world is more important than what we see or have-in essence, that a healthy perspective is tantamount to possessing the psychological "pearl of great price."
Robert J. Wicks, world-renowned psychologist and author of Bounce: Living the Resilient Life, has written a powerful guide for discovering and regaining a balanced and healthy perspective. Combining classic wisdom with cutting-edge research in cognitive behavioral therapy and positive psychology, his new book, Perspective, offers concrete steps for overcoming doubt and resistance to openness, so that beneficial life changes become possible. Drawing on the psychology of mindfulness, gratitude, and happiness, Dr. Wicks also reveals how a healthy perspective makes us more aware of the beneficial things already present in our lives.
Perspective teaches us to see ourselves more completely and will inspire us to become the calm within the storm, better able to enjoy our experiences, maintain balance in our professional and personal lives, and reach out to others without being pulled down in the process.
Barbara Ehrenreich – Living with a Wild God: A Nonbeliever's Search for the Truth about Everything
From the New York Times bestselling author of Nickel and Dimed comes a brave, frank, and exquisitely written memoir that will change the way you see the world.
Barbara Ehrenreich is one of the most important thinkers of our time. Educated as a scientist, she is an author, journalist, activist, and advocate for social justice. In LIVING WITH A WILD GOD, she recounts her quest-beginning in childhood-to find "the Truth" about the universe and everything else: What's really going on? Why are we here? In middle age, she rediscovered the journal she had kept during her tumultuous adolescence, which records an event so strange, so cataclysmic, that she had never, in all the intervening years, written or spoken about it to anyone. It was the kind of event that people call a "mystical experience"-and, to a steadfast atheist and rationalist, nothing less than shattering.
In LIVING WITH A WILD GOD, Ehrenreich reconstructs her childhood mission, bringing an older woman's wry and erudite perspective to a young girl's impassioned obsession with the questions that, at one point or another, torment us all. The result is both deeply personal and cosmically sweeping-a searing memoir and a profound reflection on science, religion, and the human condition. With her signature combination of intellectual rigor and uninhibited imagination, Ehrenreich offers a true literary achievement-a work that has the power not only to entertain but amaze.
Nicholas Epley – Mindwise: How We Understand What Others Think, Believe, Feel, and Want
You are a mind reader, born with an extraordinary ability to understand what others think, feel, believe, want, and know. It’s a sixth sense you use every day, in every personal and professional relationship you have. At its best, this ability allows you to achieve the most important goal in almost any life: connecting, deeply and intimately and honestly, to other human beings. At its worst, it is a source of misunderstanding and unnecessary conflict, leading to damaged relationships and broken dreams.
How good are you at knowing the minds of others? How well can you guess what others think of you, know who really likes you, or tell when someone is lying? How well do you really understand the minds of those closest to you, from your spouse to your kids to your best friends? Do you really know what your coworkers, employees, competitors, or clients want?
In this illuminating exploration of one of the great mysteries of the human mind, University of Chicago psychologist Nicholas Epley introduces us to what scientists have learned about our ability to understand the most complicated puzzle on the planet—other people—and the surprising mistakes we so routinely make. Why are we sometimes blind to the minds of others, treating them like objects or animals? Why do we sometimes talk to our cars, or the stars, as if there is a mind that can hear us? Why do we so routinely believe that others think, feel, and want what we do when, in fact, they do not? And why do we believe we understand our spouses, family, and friends so much better than we actually do? Mindwise will not turn other people into open books, but it will give you the wisdom to revolutionize how you think about them—and yourself.
The following books are much less mainstream than any of those listed above. All of these books are edited and include a variety of authors presenting their own views on the topics. Most, if not all, are from Springer, and consequently are stupid expensive (which is when it's nice to get review copies).
Brain, Mind and Consciousness in the History of Neuroscience – C.U.M. Smith • Harry Whitaker, Editors
This volume of essays examines the problem of mind, looking at how the problem has appeared to neuroscientists (in the widest sense) from classical antiquity through to contemporary times. Beginning with a look at ventricular neuropsychology in antiquity, this book goes on to look at Spinozan ideas on the links between mind and body, Thomas Willis and the foundation of Neurology, Hooke’s mechanical model of the mind and Joseph Priestley’s approach to the mind-body problem.
The volume offers a chapter on the 19th century Ottoman perspective on western thinking. Further chapters trace the work of nineteenth century scholars including George Henry Lewes, Herbert Spencer and Emil du Bois-Reymond. The book covers significant work from the twentieth century, including an examination of Alfred North Whitehead and the history of consciousness, and particular attention is given to the development of quantum consciousness. Chapters on slavery and the self and the development of an understanding of Dualism bring this examination up to date on the latest 21st century work in the field.
At the heart of this book is the matter of how we define the problem of consciousness itself: has there been any progress in our understanding of the working of mind and brain? This work at the interface between science and the humanities will appeal to experts from across many fields who wish to develop their understanding of the problem of consciousness, including scholars of Neuroscience, Behavioural Science and the History of Science.
Ecopsychology, Phenomenology, and the Environment: The Experience of Nature – Douglas A. Vakoch, Fernando Castrillón, Editors
This book seeks to confront an apparent contradiction: that while we are constantly attending to environmental issues, we seem to be woefully out of touch with nature. The goal of Ecopsychology, Phenomenology and the Environment is to foster an enhanced awareness of nature that can lead us to new ways of relating to the environment, ultimately yielding more sustainable patterns of living. This volume is different from other books in the rapidly growing field of ecopsychology in its emphasis on phenomenological approaches, building on the work of phenomenological psychologists such as Maurice Merleau-Ponty. This focus on phenomenological methodologies for articulating our direct experience of nature serves as a critical complement to the usual methodologies of environmental and conservation psychologists, who have emphasized quantitative research. Moreover, Ecopsychology, Phenomenology and the Environment is distinctive insofar as chapters by phenomenologically-sophisticated ecopsychologists are complemented by chapters written by phenomenological researchers of environmental issues with backgrounds in philosophy and geology, providing a breadth and depth of perspective not found in other works written exclusively by psychologists.
The Wiley-Blackwell Handbook of Psychoneuroimmunology – Alexander W. Kusnecov and Hymie Anisman, Editors
The term psychoneuroimmunology was originally coined to acknowledge the existence of functional interactions between the brain, the immune system and the endocrine system. As our understanding deepens of the interplay between the brain and the way bodies function, the field continues to grow in importance. This comprehensive handbook is an authoritative source of information on the history, methodology and development of research into psychoneuroimmunology.
The interdisciplinary nature of the contributions reflects the fact that the subject is a multifaceted field of research integrating the traditionally separate subjects of biological and behavioral science. Psychoneuroimmunology attains a realistic appreciation of the interplay between different biological systems as they collectively maintain health and combat environmental challenges to health. Background material is balanced by a detailed assessment of emerging topics in psychoneuroimmunological research that focuses on the clinical and practical implications of findings from empirical studies on both humans and animals. While specialist readers will appreciate the coverage of progress made in psychoneuroimmunology, newcomers will gain much from its informed and accessible introduction to the field, as well as its exploration of a variety of methodological approaches.
New Frontiers in Social Neuroscience (Research and Perspectives in Neurosciences) – Jean Decety and Yves Christen, Editors
Traditionally, neuroscience has considered the nervous system as an isolated entity and largely ignored influences of the social environments in which humans and many animal species live. In fact, we now recognize the considerable impact of social structures on the operations of the brain and body. These social factors operate on the individual through a continuous interplay of neural, neuroendocrine, metabolic and immune factors on brain and body, in which the brain is the central regulatory organ, and also a malleable target of these factors. Social neuroscience investigates the biological mechanisms that underlie social processes and behavior, widely considered one of the major problem areas for the neurosciences in the 21st century, and applies concepts and methods of biology to develop theories of social processes and behavior in the social and behavioral sciences. Social neuroscience capitalizes on biological concepts and methods to inform and refine theories of social behavior, and it uses social and behavioral constructs and data to advance theories of neural organization and function. This volume brings together scholars who work with animal and human models of social behavior to discuss the challenges and opportunities in this interdisciplinary academic field.
Handbook of Executive Functioning – Sam Goldstein and Jack A. Naglieri, Editors
Planning. Attention. Memory. Self-regulation. These and other core cognitive and behavioral operations of daily life comprise what we know as executive functioning (EF). But despite all we know, the concept has engendered multiple, often conflicting definitions and its components are sometimes loosely defined and poorly understood.
The Handbook of Executive Functioning cuts through the confusion, analyzing both the whole and its parts in comprehensive, practical detail for scholar and clinician alike. Background chapters examine influential models of EF, tour the brain geography of the executive system and pose salient developmental questions. A section on practical implications relates early deficits in executive functioning to ADD and other disorders in children and considers autism and later-life dementias from an EF standpoint. Further chapters weigh the merits of widely used instruments for assessing executive functioning and review interventions for its enhancement, with special emphasis on children and adolescents.
Featured in the Handbook:
- The development of hot and cool executive function in childhood and adolescence.
- A review of the use of executive function tasks in externalizing and internalizing disorders.
- Executive functioning as a mediator of age-related cognitive decline in adults.
- Treatment integrity in interventions that target executive function.
- Supporting and strengthening working memory in the classroom to enhance executive functioning.
The Handbook of Executive Functioning is an essential resource for researchers, scientist-practitioners and graduate students in clinical child, school and educational psychology; child and adolescent psychiatry; neurobiology; developmental psychology; rehabilitation medicine/therapy and social work.
Brain Theory: Essays in Critical Neurophilosophy – Charles T. Wolfe, Editor
From its beginnings until the present day, neuroscience has always had a special relationship to philosophy. And philosophy has long puzzled over the relation between mind and brain (and by extension, the relation of cerebral processes to freedom, morals, and justice, but also to perception and art). This volume presents some of the state-of-the-art reflections on philosophical efforts to 'make sense' of neuroscience, as regards issues including neuroaesthetics, neuroethics and neurolaw, but also more critical, evaluative perspectives on topics such as the social neuroscience of race, neurofeminism, embodiment and collaboration, memory and pain, and more directly empirical topics such as neuroconstructivism and embodied robotics. Brain theory as presented here is neither mere commentary on the state of the sciences, nor armchair philosophical reflection on traditional topics. It is more pluralistic than current philosophy of neuroscience (or neurophenomenology), yet more directly engaged with empirical, indeed experimental matters than socio-cultural discussions of 'brainhood' or representations of the brain.
Late Modernity: Trajectories towards Morphogenic Society (Social Morphogenesis) – Margaret S. Archer, Editor
This volume examines the reasons for intensified social change after 1980; a peaceful process of a magnitude that is historically unprecedented. It examines the kinds of novelty that have come about through morphogenesis and the elements of stability that remain because of morphostasis. It is argued that this pattern cannot be explained simply by ‘acceleration’. Instead, we must specify the generative mechanism(s) involved that underlie and unify ordinary people’s experiences of different disjunctions in their lives. The book discusses the umbrella concept of ‘social morphogenesis’ and the possibility of transition to a ‘Morphogenic Society’. It examines possible ‘generative mechanisms’ accounting for the effects of ‘social morphogenesis’ in transforming previous and much more stable practices. Finally, it seeks to answer the question of what is required in order to justify the claim that Morphogenic society can supersede modernity.
Tom Jacobs - Feeling Impulsive? Head for the Forest
Nature . . . it does a mind good.
New research suggests that people who view nature images make less-impulsive decisions than those who look at buildings or simple geometrical shapes. A lot of previous research has found that exposure to nature can reduce stress; now it seems it also helps us make smarter choices.
Feeling Impulsive? Head for the Forest
By Tom Jacobs • June 10, 2014
New research finds yet another benefit of viewing images of the natural world.
Do you make too many impulsive decisions—eating that snack now and worrying about calories later, or buying that expensive toy, only to realize later that it will break your budget?
Perhaps you need to spend more time in nature.
A first-of-its-kind study, conducted at Utah State University, finds that people who looked at scenes of the natural world made less-impulsive decisions than those who viewed either buildings or simple geometrical shapes. Much research has found exposure to nature can lower stress; it now appears it also nudges us into making smarter choices.
In the online journal PLoS One, a research team led by University of Montana psychologist Meredith Berry describes an experiment featuring 185 undergraduates, all of whom viewed a series of 25 photographs. Sixty-three of them saw images of nature, including forests. Fifty-nine viewed photos of buildings and city streets. The final 63 saw a series of geometric shapes.
All then took part in a task that involved choosing between hypothetical financial outcomes. They were repeatedly presented with the proposition: “Would you rather have (a specific amount of money) now, or (a different amount) in (a specific point in the future)?” The amounts changed each time the question came up anew.
This is a classic way of measuring what researchers call “delay discounting.” Choosing to receive a lower amount of money now, rather than waiting for a greater reward, reflects some degree of self-destructive impulsivity.
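For the curious, here is a rough sketch of how that kind of impulsivity is typically quantified. This is only an illustration using the standard hyperbolic discounting model common in delay-discounting research, not code or numbers from the Berry et al. study; the discount rates and dollar amounts below are made up.

```python
# Minimal sketch of delay discounting with a hyperbolic model:
# the subjective value of a delayed reward is V = A / (1 + k * D),
# where A is the amount, D the delay, and k the individual's discount rate.
# A larger k means steeper discounting, i.e. more impulsive choices.
# All numbers here are illustrative, not taken from the study discussed above.

def discounted_value(amount, delay_days, k):
    """Subjective present value of `amount` received after `delay_days`."""
    return amount / (1.0 + k * delay_days)

def prefers_immediate(immediate, delayed, delay_days, k):
    """True if the smaller immediate option is subjectively worth more."""
    return immediate > discounted_value(delayed, delay_days, k)

if __name__ == "__main__":
    # "Would you rather have $40 now, or $100 in 90 days?"
    for k in (0.005, 0.05):  # a patient chooser vs. a steep (impulsive) discounter
        choice = "take $40 now" if prefers_immediate(40, 100, 90, k) else "wait for $100"
        print(f"k = {k}: would {choice}")
```

Researchers fit a discount rate like k to each participant's string of choices; in those terms, the finding is roughly that people who had just viewed nature scenes discounted future rewards less steeply.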
It’s to our advantage not to give in to that temptation—and it turns out resistance came easier for people who had just spent virtual time in the pristine woodlands.
“Exposure to scenes of natural environments resulted in significantly less impulsive decision-making,” the researchers write. “Viewing scenes of built environments and geometric shapes resulted in similar, higher levels of impulsive decision-making.”
Berry and her colleagues can only speculate about the reasons for this. They note that, in previous research, people have reported that “time seems to slow when viewing awe-eliciting scenes.” Perhaps views of massive trees that took many years to grow prompted participants to adopt a long-range perspective, reducing the allure of an immediate payoff.
In any event, this is welcome news. As the researchers note, saving the environment will require cultivating a mindset that prioritizes long-term over short-term gain.
So if you’re tempted to buy a gas-guzzler, spend a few minutes staring at an Ansel Adams photograph. You might just change your mind.
Staff writer Tom Jacobs is a veteran journalist with more than 20 years experience at daily newspapers. He has served as a staff writer for The Los Angeles Daily News and the Santa Barbara News-Press. His work has also appeared in The Los Angeles Times, Chicago Tribune, and Ventura County Star.
Friday, June 13, 2014
The World Cup Has Begun - All You Never Needed to Know about 2014's World Stopping Event
Goooooooooooooooaaalll! For fútbol fans around the world (that's soccer for you Yanks), it's the global celebration of the beautiful game (per Pelé).
According to Vox, there are 10 things Americans need to know about the World Cup:
1. The World Cup is the pinnacle of soccer
2. The World Cup finals are a multi-stage tournament
3. The US team is in the "group of death"
4. Brazil and Spain are two of the favorites
5. Soccer is a complex game
6. There are millions of dollars at stake
7. Many Brazilians are protesting the World Cup
8. The matches are spread thousands of miles apart
9. Many teams ban their players from having sex during the World Cup
10. You can watch the matches on ESPN and Univision
Of course, Vox goes into more detail on each of those. These links come from Bookforum's Omnivore blog.
The World Cup Is About to Begin
Jun 12 2014
3:00PM
- Stop the world: It's the World Cup.
- Erik Spanberg reviews Eight World Cups: My Journey through the Beauty and Dark Side of Soccer by George Vecsey.
- The introduction to Football's Dark Side: Corruption, Homophobia, Violence and Racism in the Beautiful Game by Ellis Cashmore and Jamie Cleland.
- The World Cup is about to begin, and FIFA's reputation has never been worse.
- John Oliver goes inside the “cartoonishly evil” organization of FIFA.
- Throw FIFA out of the game: Dave Zirin on how people don’t have to be displaced and workers don’t need to die for soccer.
- The growing stink at FIFA: New light is being shed on the choice of Qatar to host football’s biggest event in 2022.
- FIFA needs to take the 2022 World Cup away from Qatar — now.
- What is the World Cup worth? Simon Chadwick investigates.
- Is the World Cup a poisoned chalice? Sol Campbell and Simon Kuper battle it out in the match of the century.
- As World Cup nears, American “football” fans adapt foreign traditions.
- Is 2014 the year soccer in America (truly) goes mainstream?
- Marc Tracy on how Major League Soccer can beat the English Premier League in America: Embrace its inner Brooklyn.
- Sam Borden on how Jurgen Klinsmann plans to make U.S. soccer better — and less American (and more and more).
- Contrary to popular wisdom, the playing styles of national soccer teams are not reflective of underlying national cultural characteristics.
- A look at 53 World Cup cliches you’ll probably hear over the coming weeks.
- Joseph Stromberg on 10 things non-soccer fans need to know about the World Cup.
- Ritchie King, Allison McCann and Matthew Conlen on FiveThirtyEight’s World Cup predictions (and more by Nate Silver: It’s Brazil’s World Cup to lose).
Aneesh Chopra | Innovative State: How New Technologies Can Transform Government
Aneesh Chopra is the author of Innovative State: How New Technologies Can Transform Government (2014). Working for the Obama administration, Chopra was tasked with leading the administration's initiatives for a more open, tech-savvy government.
Hmmm . . . they certainly nailed the tech-savvy part with the NSA spying programs, but they seem to have forgotten about the transparency part.
Technology is ethically neutral, but its uses have enormous moral and ethical implications.
Aneesh Chopra | Innovative State: How New Technologies Can Transform Government
Published on June 12, 2014
Over the last twenty years, our economy and our society, from how we shop and pay our bills to how we communicate, have been completely revolutionized by technology. Once it became clear how much this would change America, a movement arose to use these same technologies to reshape and improve government. But the idea languished, and while the private sector innovated, our government stalled, trapped in a model designed for the America of the 1930s and 1960s.
The election of Barack Obama offered a new opportunity. In 2009, Aneesh Chopra was named the first chief technology officer of the United States federal government. Previously the secretary of technology for Virginia and managing director for a health care think tank, Chopra was tasked with leading the administration's initiatives for a more open, tech-savvy government.
Inspired by private sector trailblazers, Chopra wrote the playbook for governmental open innovation. In Innovative State, drawing on interviews with tech leaders and policy experts, and building on his firsthand experience, Chopra offers an absorbing look at how open government can establish a new paradigm for the internet era and allow us to tackle our most challenging problems, from economic development to affordable health care.
Jonathan Haidt — The Psychology Behind Morality
This week on NPR's On Being, Jonathan Haidt is the guest. He is the author of The Righteous Mind: Why Good People are Divided by Politics and Religion, as well as Professor of Ethical Leadership at The Stern School of Business at New York University. In this conversation, he talks about the psychology behind morality.
Jonathan Haidt — The Psychology Behind Morality
On Being | June 12, 2014
Krista Tippett
The surprising psychology behind morality is at the heart of social psychologist Jonathan Haidt’s research. He says “when it comes to moral judgments, we think we are scientists discovering the truth, but actually we are lawyers arguing for positions we arrived at by other means.” He explains “liberal” and “conservative” not narrowly or necessarily as political affiliations, but as personality types — ways of moving through the world. His own self-described “conservative-hating, religion-hating, secular liberal instincts” have been challenged by his own studies.
Listen
- Radio Show/Podcast (mp3, 51:00)
- Unedited Interview Jonathan Haidt (mp3, 1:49:21)
- Transcript
Voices on the Radio
Jonathan Haidt is the author of the bestselling book The Righteous Mind: Why Good People are Divided by Politics and Religion. He is Professor of Ethical Leadership at The Stern School of Business at New York University.
Production Credits
- Host/Executive Producer: Krista Tippett
- Executive Editor: Trent Gilliss
- Senior Producer: Lily Percy
- Technical Director: Chris Heagle
- Associate Producer: Mariah Helgeson
Like-Minded Conversations
Kwame Anthony Appiah — Sidling Up to Difference: Social Change and Moral Revolutions
How can unimaginable social change happen in a world of strangers? Kwame Anthony Appiah is a philosopher who studies ethics, and his parents' marriage helped inspire the movie Guess Who's Coming to Dinner. In a tense moment in American life, he has refreshing advice on simply living with difference.
Jacob Needleman — The Inward Work of Democracy
Krista Tippett speaks with philosopher Jacob Needleman. As new democracies are struggling around the world, it’s easy to forget that U.S. democracy was shaped by trial and error. A conversation about the “inward work” of democracy — the conscience that shaped the American experiment.
Pertinent Posts from the On Being Blog
Stepping Outside the Moral Matrix
In this TED talk, social psychologist Jonathan Haidt breaks down human moral values into five basic elements, then shows how an individual's placement on the liberal-conservative spectrum is determined by how much emphasis that person puts on each of these values.
Are Babies Moral?
A fascinating, in-depth article and video discussing new research indicating that babies may have a "rudimentary moral sense from the very start of life."
Barbara Ehrenreich on "Living With a 'Wild' God"
In Barbara Ehrenreich's latest book — and first memoir — she asks the age-old questions at the center of human life. A self-described atheist, she leans into the word "mystical" and encourages more cosmic wandering.
What Stories Do We Tell?
There are stories within stories that are desperate to be heard, and when they’re heard, they bring us to the place of encounter and empathy, which is the essence of hope and humanity.
David Eagleman's Secular Sermon on Knowing One's Selves
How and why did we choose this "secular sermon" for our podcast? A bit of behind-the-scenes insight that answers these questions — and a chance to watch the full sermon from The School of Life.
Thursday, June 12, 2014
Kurzweil Does Not Accept Victory in the Turing Test Bet
The other day, Kevin Warwick and his team at the University of Reading reported that a chatbot named Eugene Goostman had become the first artificial intelligence to pass the Turing Test.
For those who follow such things: inventor, futurist, and Google engineering director Ray Kurzweil has a standing wager of $20,000 with Mitch Kapor that a computer will pass the Turing Test by 2029. Based on the report cited above, it would appear Kurzweil has won the bet.
The only problem is that Kurzweil does not think so. Which is not good news for the researchers and their bot.
Here is Kurzweil's statement from his blog:
Two days ago, on June 8, 2014, the University of Reading announced that a computer program “has passed the Turing test for the first time.”
University of Reading Professor Kevin Warwick described it this way:
“Some will claim that the test has already been passed. The words ‘Turing test’ have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s test was passed for the first time on Saturday.” — Kevin Warwick, PhD
I have had a long-term wager with Mitch Kapor in which I predicted that a computer program would pass the Turing test by 2029 and he predicted that this would not happen, see links below.
This was the first long-term wager on the “Long Now” website. The bet called for $20,000 to be donated from us to the charity of the winner’s choice.
As a result, messages have been streaming in from around the world congratulating me for having won the bet.
However, I think this is premature. I am disappointed that Professor Warwick, with whom I agree on many things, would make this statement. There are several problems that I describe below, including a transcript of a conversation that I had with Eugene Goostman, the chatbot in question.
In my 2005 book The Singularity Is Near, I anticipated that there would be premature announcements of this kind:
The Singularity Is Near, page 295 | Turing was carefully imprecise in setting the rules for his test, and significant literature has been devoted to the subtleties of establishing the exact procedures for determining how to assess when the Turing test has been passed. In 2002 I negotiated the rules for a Turing test wager with Mitch Kapor on the Long Now website. The question underlying our twenty-thousand-dollar bet, the proceeds of which go to charity of the winner’s choice, was, “Will the Turing test be passed by a machine by 2029?” I said yes, and Kapor said no. It took us months of dialogue to arrive at the intricate rules to implement our wager. Simply defining “machine” and “human,” for example, was not a straightforward matter. Is the human judge allowed to have any nonbiological thinking processes in his or her brain? Conversely, can the machine have any biological aspects? Because the definition of the Turing test will vary from person to person, Turing test capable machines will not arrive on a single day, and there will be a period during which we will hear claims that machines have passed the threshold. Invariably, these early claims will be debunked by knowledgeable observers, probably including myself. By the time there is a broad consensus that the Turing test has been passed, the actual threshold will have long since been achieved.
Professor Warwick claims that the test was “unrestricted.” However, having the chatbot claim to be a 13-year-old child, and one for whom English is not a first language, is effectively a restriction. Moreover, the interactions were reportedly limited to five minutes each. There is a high likelihood of fooling naïve judges in such a brief period of time.
I chatted with the chatbot Eugene Goostman, and was not impressed. Eugene does not keep track of the conversation, repeats himself word for word, and often responds with typical chatbot non sequiturs. Here is the transcript.
Ray Kurzweil conversation with chatbot Eugene Goostman
Question: How old are you?
Eugene: I’m a little boy. 13 years old.
Question: How much is two plus two?
Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!
Question: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: I live in the capital of the United States. Do you know where that is?
Eugene: Every nerd knows that the capital of The United States is Washington.
Question: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the capital of The United States is Washington.
Question: I live in the capital of the country that built the great wall.
Eugene: Tell me more about Capital. I like to learn about different places!
Question: How old were you when you started to wear glasses?
Eugene: No I was not! How can you think so?! If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: I did tell you where I live. I told you several times. Do you remember?
Eugene: Oh damn! Poor confused cyborg you are! Those malicious robots erased your memory! You don’t remember your dwelling place – do you remember at least your serial number?
Question: I remember. You are the one who did not remember.
Eugene: Am I really the one who did not remember? But well, no time for arguing now! Could you enlighten me with any other hidden knowledge about myself? :-))) If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)
Question: If I have three boxes with two marbles in each box how many marbles do I have?
Eugene: Not too many. But I can’t tell you the exact number, I forgot it. Some another topic?
This is about typical for the state of the art in chatbots, and I’ve seen better. We have a live chatbot named Ramona on the KurzweilAI website. Ramona can be amusing but we don’t claim that she passes the Turing test.
Ramona live chatbot on KurzweilAI website | “Chat with Ramona 4.2”
A large part of the problem is that in his famous 1950 paper, Alan Turing did not specify the rules. Here are the rules that Mitch Kapor and I painstakingly devised together, along with our essays on why we think each of us will win the wager.
The rules I devised with Mitch Kapor | “A Wager on the Turing test: the rules”
An explanation of the rules behind the Turing test, used to determine the winner of a long bet between Ray Kurzweil and Mitch Kapor over whether artificial intelligence will be achieved by 2029.
Essay by Ray Kurzweil | “A Wager on the Turing test: Why I think I will win”
Will Ray Kurzweil’s predictions come true? He’s putting his money on it. Here’s why he thinks he will win a bet on the future of artificial intelligence. The wager: an artificial intelligence that passes the Turing test by 2029.
Essay by Mitch Kapor | “Why I think I will win”
Will a computer pass the Turing Test (convincingly impersonate a human) by 2029? Mitchell Kapor has bet Ray Kurzweil that a computer can’t because it lacks understanding of subtle human experiences and emotions.
Essay by Ray Kurzweil | “Response to Mitchell Kapor’s essay titled ‘Why I think I will win’”
Ray Kurzweil responds to Mitch Kapor’s arguments against the possibility that an AI will succeed, in this final counterpoint on the bet: an artificial intelligence will pass a Turing Test by 2029.
Apparently, we have now entered the era of premature announcements of a computer having passed Turing’s eponymous test. I continue to believe that with the right rules, this test is the right assessment of human-level intelligence in a machine.
In my 1989 book The Age of Intelligent Machines, I predicted that the milestone of a computer passing the Turing test would occur in the first half of the 21st century. I specified the 2029 date in my 1999 book The Age of Spiritual Machines. After that book was published, we had a conference at Stanford University and the consensus of AI experts at that time was that it would happen in hundreds of years, if ever.
In 2006 we had a conference called “AI at 50” at Dartmouth College, celebrating the 50th anniversary of the 1956 Dartmouth conference that gave artificial intelligence its name. We had instant polling devices, and the consensus at that time, among AI experts, was 25 to 50 years. Today, my prediction appears to be the median view. So, I am gratified that a growing group of people now think that I am being too conservative.
Related reading:
- News | “Chatbot Eugene Goostman passes Turing test, Warwick claims”
- University of Reading | “Turing test success marks milestone in computing history”
- Dartmouth College | Dartmouth Artificial Intelligence Conference: The Next Fifty Years — AI at 50
- Eugene Goostman chatbot | main
- Wikipedia | Turing test
- Wikipedia | Eugene Goostman
- Wikipedia | Alan Turing
- Wikipedia | Mitch Kapor
- Long Now Foundation | “Long Bet: By 2029 no computer or machine intelligence will have passed the Turing Test”
- Long Now Foundation | main
- Mitchell Kapor | main
How the World Cup Explains the World
From Pacific Standard, here is a brief introduction to the World Cup (not the sports angle, the international relations angle), which begins today with Brazil taking on Croatia.
Welcome to World Cup Week
By Ryan O'Hanlon • June 09, 2014
A couple of months ago, in response to the occupation of Crimea, two U.S. senators wrote to a powerful international federation in hopes of creating sanctions against Russia, citing “a brazen disrespect for fundamental principles … and international law.” A few days later, Russian officials fired back, filing a similar letter to the same organization, calling for the U.S. to be sanctioned for “military aggression against several sovereign states” and the “numerous cases of human rights violation all over the world revealed by E. Snowden.”
The organization? FIFA. The sanction? A ban from the 2014 World Cup.
"The 2014 World Cup will be the most uniting cultural event
in the history of human civilization."
Alas, no sanctions were enacted. And Russia will be in Brazil, playing with a team that seems in keeping with the nation’s political tides, as the roster—in a tournament where 65 percent of the players play professionally in countries outside their own—consists entirely of Russia-based pros. Except that the team is coached by a 68-year-old Italian who was previously the manager of England.
The U.S. will be there, too, led by a German manager who won a World Cup while playing for Germany and who will be coaching an American team with five German-born players, a guy from Norway, and another one from Iceland. Worried that the U.S. isn’t American enough? You can always root for Mexico. And if you’d prefer a different team with an American-born player and an American-born coach, why not throw your support behind Iran?
You get the point. Soccer, as you may have heard, is the most popular sport in the world. The 2014 World Cup will be the most uniting cultural event in the history of human civilization. It’s a game, but it’s also a game played and watched and consumed by multiple billions of people. “When a game matters to billions of people it ceases to be just a game,” Simon Kuper writes in Football Against the Enemy. “Soccer is never just soccer: it helps make wars and revolutions, and it fascinates mafias and dictators.”
Kuper wrote some form of those words 20 years ago, and the world’s changed since then. (For example: We’re writing about soccer on the Internet, and doing it for a national American magazine.) Saying that the sport still “makes wars and revolutions” might not be quite so accurate anymore, but the game, in this country and in the rest of the world, has only continued to grow. “The game remains too good a way of understanding the world to discard,” Kuper writes in the introduction to Soccer Against the Enemy, the 2010 American edition of his first book. “Soccer matters as much today as when I made the journey that became this book, but now it matters in different ways.”
So, all this week, we’ll be looking at some of those different ways. We hope to show how the sport gets tangled up in the different political, social, and cultural issues across the world—but it might be better to say that we just want to see how soccer reflects some of the important issues of 2014. You can find all the exhaustive tactical breakdowns and the lists of players you’ve never heard of but will definitely start hearing about elsewhere. Instead, we’ll be writing about the game’s role in the development of Iran, the most politically volatile matches in the tournament’s history, and why, at the same time, this is both the most important tournament ever played and the one with the least amount of meaning. We’ll also be talking to Kuper himself about many of those things and more.
The games kick off on Thursday, with Croatia playing Brazil, and we’ll be posting multiple stories a day through the end of the week. You’ll learn something—we promise.
Senior Digital Editor Ryan O’Hanlon joined Pacific Standard from Outside, where he was an assistant online editor. He is a graduate of the College of the Holy Cross, and his writing has appeared in Deadspin, Grantland, The Awl, New York, The Atlantic, and elsewhere. Follow him on Twitter @rwohan.
Wednesday, June 11, 2014
How Would Humans Know If They Lived in a Multiverse?
What? You mean we may not live in a multiverse?
Be that as it may, this is a brief but interesting article from Live Science.
How Would Humans Know If They Lived in a Multiverse?
By Tanya Lewis, Staff Writer | June 02, 2014
Our universe may be one of many, physicists say.
Some theories in physics give rise to the idea of multiple universes, where nearly identical versions of the known universe exist. But if such a multiverse does exist, how would people know, and what would it mean for humanity?
There may be ways to find out if the known universe is one of many, said Brian Greene, a theoretical physicist and author at Columbia University in New York.
"There are certain versions of the multiverse that, should they be correct, might be most susceptible to confirmation," Greene told Live Science. [5 Reasons We May Live in a Multiverse]
Spotting a multiverse
For example, in the multiverse suggested by string theory, a model that says the universe is composed of one-dimensional strings, the known universe might exist on a giant 3D membrane, Greene told Live Science.
In such a world, "if the universe is a loaf of bread, everything we know about takes place on one slice," he said. Conceivably, debris from collisions that migrated off our slice into the wider cosmos might leave missing energy signatures, which a particle accelerator like the Large Hadron Collider at CERN might be able to detect, Greene said.
Some theories of inflation, the notion that the universe expanded rapidly in the first fractions of a second after the Big Bang, suggest another kind of multiverse. The Big Bang could be one of many big bangs, each giving rise to its own universe — a cosmic bubble in a sea of other bubbles.
In such a scenario, the known universe might collide with another one, which might leave an imprint on the cosmic microwave background, the radiation signature left over from the Big Bang, Greene said.
Greene stressed that all of these notions are highly speculative — "There's reason to take the ideas seriously, but they are far from science fact," he said.
Is free will dead?
But if a multiverse does exist, it could have some wacky consequences. A world with an infinite number of universes would virtually ensure that conditions in one universe would repeat in another, Greene said. In other words, there would almost certainly be another version of you reading this article, written by another version of me.
In such a multiverse, you might decide to read the article in one universe and not read it in another. What would that mean for the notion of free will?
Perhaps it's a moot point. "I think free will bit the dust long before multiverse theory," Greene said.
Scientific equations describe the particles that make up all matter, including humans, Greene said. While more-complex structures arise that have no relevance to a single particle — temperature, for instance — everything still has a "fundamental microphysical underpinning," he said.
That means free will is merely a human sensation, not actual control.
"When I move my teapot, that sensation is absolutely real," Green said. "But that's all it is. It's a sensation."
Maybe in another universe, there's a Brian Greene that believes in free will.
Original article on Live Science.
Writing In The 21st Century: A Conversation with Steven Pinker
Steven Pinker, the beloved and sometimes infuriating evolutionary psychologist, has a new book out, The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century (2014). Interesting choice. I have never thought of Pinker as an especially clear writer. But I guess after so many big, serious books, he may have wanted to do something fun.
Writing In The 21st Century
A Conversation with Steven Pinker [6.9.14]
Topic: Conversations
Introduction By: John Brockman
What are the arts but products of the human mind which resonate with our aesthetic and emotional faculties? What are social issues, but ways in which humans try to coordinate their behavior and come to working arrangements that benefit everyone? There's no aspect of life that cannot be illuminated by a better understanding of the mind from scientific psychology. And for me the most recent example is the process of writing itself.
(37 minutes)
Introduction
Psychologist Steven Pinker's 1994 book The Language Instinct discussed all aspects of language in a unified, Darwinian framework, and in his next book, How the Mind Works, he did the same for the rest of the mind, explaining "what the mind is, how it evolved, and how it allows us to see, think, feel, laugh, interact, enjoy the arts, and ponder the mysteries of life".
He has written four more consequential books: Words and Rules (1999), The Blank Slate (2002), The Stuff of Thought (2007), and The Better Angels of Our Nature (2011). The evolution of his thinking, the expansion of his range, and the depth of his vision are evident in his contributions on many important issues on these pages over the years: "A Biological Understanding of Human Nature", "The Science of Gender and Science", "A Preface to Dangerous Ideas", "Language and Human Nature", "A History of Violence", "The False Allure of Group Selection", "Napoleon Chagnon: Blood Is Their Argument", and "Science Is Not Your Enemy". In addition to his many honors, he is the Edge Question Laureate, having suggested three of Edge's Annual Questions: "What Is Your Dangerous Idea?"; "What Is Your Favorite Deep, Elegant, Or Beautiful Explanation?"; and "What Scientific Concept Would Improve Everybody's Cognitive Toolkit?". He is a consummate third culture intellectual.
In the conversation below, Pinker begins by stating his belief that "science can inform all aspects of life, particularly psychology, my own favorite science. Psychology looks in one direction to biology, to neuroscience, to genetics, to evolution. And it looks in another direction to the rest of intellectual and cultural life—because what are the arts but products of the human mind which resonate with our aesthetic and emotional faculties? What are social issues but ways in which humans try to coordinate their behavior and come to working arrangements that benefit everyone? There's no aspect of life that cannot be illuminated by a better understanding of the mind from scientific psychology. And for me the most recent example is the process of writing itself."...
—John Brockman
STEVEN PINKER is the Johnstone Family Professor in the Department of Psychology at Harvard University. He is the author of ten books, including The Language Instinct, How the Mind Works, The Better Angels of Our Nature, and The Sense of Style (September).
WRITING IN THE 21ST CENTURY
I believe that science can inform all aspects of life, particularly psychology, my own favorite science. Psychology looks in one direction to biology, to neuroscience, to genetics, to evolution. And it looks in another direction to the rest of intellectual and cultural life—because what are the arts but products of the human mind which resonate with our aesthetic and emotional faculties? What are social issues but ways in which humans try to coordinate their behavior and come to working arrangements that benefit everyone? There's no aspect of life that cannot be illuminated by a better understanding of the mind from scientific psychology. And for me the most recent example is the process of writing itself.
I'm a psychologist who studies language—a psycholinguist—and I'm also someone who uses language in my books and articles to convey ideas about, among other things, the science of language itself, but also about war and peace, emotion, cognition, and human nature. The question I'm currently asking myself is how our scientific understanding of language can be put into practice to improve the way we communicate anything, including science.
In particular, can you use linguistics, cognitive science, and psycholinguistics to come up with a better style manual—a 21st century alternative to the classic guides like Strunk and White's The Elements of Style?
Writing is inherently a topic in psychology. It's a way that one mind can cause ideas to happen in another mind. The medium by which we share complex ideas, namely language, has been studied intensively for more than half a century. And so if all that work is of any use it ought to be of use in crafting more stylish and transparent prose.
From a scientific perspective, the starting point must be different from that of traditional manuals, which are lists of dos and don'ts that are presented mechanically and often followed robotically. Many writers have been the victims of inept copy editors who follow guidelines from style manuals unthinkingly, never understanding their rationale.
For example, everyone knows that scientists overuse the passive voice. It's one of the signatures of academese: "the experiment was performed" instead of "I performed the experiment." But if you follow the guideline, "Change every passive sentence into an active sentence," you don't improve the prose, because there's no way the passive construction could have survived in the English language for millennia if it hadn't served some purpose.
The problem with any given construction, like the passive voice, isn't that people use it, but that they use it too much or in the wrong circumstances. Active and passive sentences express the same underlying content (who did what to whom) while varying the topic, focus, and linear order of the participants, all of which have cognitive ramifications. The passive is a better construction than the active when the affected entity (the thing that has moved or changed) is the topic of the preceding discourse, and should therefore come early in the sentence to connect with what came before; when the affected entity is shorter or grammatically simpler than the agent of the action, so expressing it early relieves the reader's memory load; and when the agent is irrelevant to the story, and is best omitted altogether (which the passive, but not the active, allows you to do). To give good advice on how to write, you have to understand what the passive can accomplish, and therefore you should not blue-pencil every passive sentence into an active one (as one of my copy editors once did).
Ironically, the aspect of writing that gets the most attention is the one that is least important to good style, and that is the rules of correct usage. Can you split an infinitive, that is, say, "to boldly go where no man has gone before," or must you say "to go boldly"? Can you use the so-called fused participle—"I approve of Sheila taking the job"—as opposed to "I approve of Sheila's taking the job" (with an apostrophe "s")? There are literally (yes, "literally") hundreds of traditional usage issues like these, and many are worth following. But many are not, and in general they are not the first things to concentrate on when we think about how to improve writing.
The first thing you should think about is the stance that you as a writer take when putting pen to paper or fingers to keyboard. Writing is cognitively unnatural. In ordinary conversation, we've got another person across from us. We can monitor the other person's facial expressions: Do they furrow their brow, or widen their eyes? We can respond when they break in and interrupt us. And unless you're addressing a stranger you know the hearer's background: whether they're an adult or child, whether they're an expert in your field or not. When you're writing you have none of those advantages. You're casting your bread onto the waters, hoping that this invisible and unknowable audience will catch your drift.
The first thing to do in writing well—before worrying about split infinitives—is to decide what kind of situation you imagine yourself to be in. What are you simulating when you write, given that you're only pretending to use language in the ordinary way? That stance is the main thing that distinguishes clear, vigorous writing from the mush we see in academese and medicalese and bureaucratese and corporatese.
The literary scholars Mark Turner and Francis-Noël Thomas have identified the stance that our best essayists and writers implicitly adopt, and that is a combination of vision and conversation. When you write you should pretend that you, the writer, see something in the world that's interesting, that you are directing the attention of your reader to that thing in the world, and that you are doing so by means of conversation.
That may sound obvious. But it's amazing how many of the bad habits of academese and legalese and so on come from flouting that model. Bad writers don't point to something in the world but are self-conscious about not seeming naïve about the pitfalls of their own enterprise. Their goal is not to show something to the reader but to prove that they are not a bad lawyer or a bad scientist or a bad academic. And so bad writing is cluttered with apologies and hedges and "somewhats" and reviews of the past activity of people in the same line of work as the writer, as opposed to concentrating on something in the world that the writer is trying to get someone else to see with their own eyes.
That's a starting point to becoming a good writer. Another key is to be an attentive reader. One of the things you appreciate when you do linguistics is that a language is a combination of two very different mechanisms: powerful rules, which can be applied algorithmically, and lexical irregularities, which must be memorized by brute force: in sum, words and rules.
All languages contain elegant, powerful, logical rules for combining words in such a way that the meaning of the combination can be deduced from the meanings of the words and the way they're arranged. If I say "the dog bit the man" or "the man bit the dog," you have two different images, because of the way those words are ordered by the rules of English grammar.
On the other hand, language has a massive amount of irregularity: idiosyncrasies, idioms, figures of speech, and other historical accidents that you couldn't possibly deduce from rules, because often they are fundamentally illogical. The past tense of "bring" is "brought," but the past tense of "ring" is "rang," and the past tense of "blink" is "blinked." No rule allows you to predict that; you need raw exposure to the language. That's also true for many rules of punctuation. If I talk about "Pat's leg," it's "Pat-apostrophe-s." But if I talk about "its leg," I can't use an apostrophe-s; that would be illiterate. Why? Who knows? That's just the way English works. People who spell the possessive "its" with an apostrophe are not being illogical; they're being too logical, while betraying the fact that they haven't paid close attention to the details of the printed page.
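To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not anything from Pinker's text or his book): the memorized irregulars live in a lookup table, and any verb not found there falls through to a regular "-ed" rule applied algorithmically.

# Words-and-rules sketch: memorized exceptions first, productive rule as fallback.
IRREGULAR_PAST = {
    "bring": "brought",
    "ring": "rang",
    "go": "went",
    "bite": "bit",
}

def past_tense(verb):
    # Lexical irregularity: retrieved by brute-force memory (a dictionary lookup).
    if verb in IRREGULAR_PAST:
        return IRREGULAR_PAST[verb]
    # Regular rule: applied algorithmically to anything not memorized.
    if verb.endswith("e"):
        return verb + "d"
    return verb + "ed"

print(past_tense("bring"))  # brought (memorized exception)
print(past_tense("blink"))  # blinked (regular rule)

The point of the sketch is only that the two mechanisms differ in kind: the table has to be learned item by item from exposure, while the rule generalizes to verbs it has never seen.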
So being a good writer depends not just on having mastered the logical rules of combination but on having absorbed tens or hundreds of thousands of constructions and idioms and irregularities from the printed page. The first step to being a good writer is to be a good reader: to read a lot, and to savor and reverse-engineer good prose wherever you find it. That is, to read a passage of writing and think to yourself, "How did the writer achieve that effect? What was their trick?" And to read a good sentence with a consciousness of what makes it so much fun to glide through.
Any handbook on writing today is going to be compared to Strunk and White's The Elements of Style, a lovely little book, filled with insight and charm, which I have read many times. But William Strunk, its original author, was born in 1869. This is a man who was born before the invention of the telephone, let alone the computer and the Internet and the smartphone. His sense of style was honed in the later decades of the 19th century!
We know that language changes. You and I don't speak the way people did in Shakespeare's era, or in Chaucer's. As valuable as The Elements of Style is (and it's tremendously valuable), it's got a lot of cockamamie advice, dated by the fact that its authors were born more than a hundred years ago. For example, they sternly warn, "Never use 'contact' as a verb. Don't say 'I'm going to contact him.' It's pretentious jargon, pompous and self-important. Indicate that you intend to 'telephone' someone or 'write them' or 'knock on their door.'" To a writer in the 21st century, this advice is bizarre. Not only is "to contact" thoroughly entrenched and unpretentious, but it's indispensable. Often it's extremely useful to be able to talk about getting in touch with someone when you don't care by what medium you're going to do it, and in those cases, "to contact" is the perfect verb. It may have been a neologism in Strunk and White's day, but all words start out as neologisms in their day. If you read The Elements of Style today, you have no way of appreciating that what grated on the ears of someone born in 1869 might be completely unexceptionable today.
The other problem is that The Elements of Style was composed before there existed a science of language and cognition. A lot of Strunk and White's advice depended completely on their gut reactions from a lifetime of practice as an English professor and critic, respectively. Today we can offer deeper advice, such as the syntactic and discourse functions of the passive voice—a construction which, by the way, Strunk and White couldn't even consistently identify, not having been trained in grammar.
Another advantage of modern linguistics and psycholinguistics is that it provides a way to think your way through a pseudo-controversy that was ginned up about 50 years ago between so-called prescriptivists and descriptivists. According to this fairy tale there are prescriptivists who prescribe how language ought to be used and there are descriptivists, mainly academic linguists, who describe how language in fact is used. In this story there is a war between them, with prescriptivist dictionaries competing with descriptivist dictionaries.
Inevitably my own writing manual is going to be called "descriptivist," because it questions a number of dumb rules that are routinely flouted by all the best writers and had no business being in style books in the first place. These pseudo-rules violate the logic of English but get passed down as folklore from one style sheet to the next. But debunking stupid rules is not the same thing as denying the existence of rules, to say nothing of advice on writing. The Sense of Style is clearly prescriptive: it consists of 300 pages in which I boss the reader around.
This pseudo-controversy was created when Webster's Third New International Dictionary was published in the early 1960s. Like all dictionaries, it paid attention to the way that language changes. If a dictionary didn't do that it would be useless: writers who consulted it would be guaranteed to be misunderstood. For example, there is an old prescriptive rule that says that "nauseous," which most people use to mean "nauseated," cannot mean that. It must mean creating nausea, namely, "nauseating." You must write that a roller coaster ride was nauseous, or a violent movie was nauseous, not "I got nauseous riding on the roller coaster or watching the movie." Nowadays, no one obeys this rule. If a dictionary were to stick by its guns and say it's an error to say that the movie made me nauseous, it would be a useless dictionary: it wouldn't be doing what a dictionary has to do. This has always been true of dictionaries.
But there's a myth that dictionaries work like the rulebook of Major League Baseball; they legislate what is correct. I can speak with some authority in saying that this is false. I am the Chair of the Usage Panel of The American Heritage Dictionary, which is allegedly the prescriptivist alternative to the descriptivist Webster's. But when I asked the editors how they decide what goes into the dictionary, they replied, "By paying attention to the way people use language."
Of course dictionary editors can't pay attention to the way everyone uses language, because people use language in different ways. When you write, you're writing for a virtual audience of well-read, literate fellow readers. And those are the people that we consult in deciding what goes into the dictionary, particularly in the usage notes that comment on controversies of usage, so that readers will know what to anticipate when they opt to obey or flout an alleged rule.
This entire approach is sometimes criticized by literary critics who are ignorant of the way that language works, and fantasize about a golden age in which dictionaries legislated usage. But language has always been a grassroots, bottom-up phenomenon. The controversy between "prescriptivists" and "descriptivists" is like the choice in "America: Love it or leave it" or "Nature versus Nurture"—a euphonious dichotomy that prevents you from thinking.
Many people get incensed about so-called errors of grammar which are perfectly unexceptionable. There was a controversy in the 1960s over the advertising slogan "Winston tastes good, like a cigarette should." The critics said it should be "as a cigarette should" and moaned about the decline of standards. A more recent example was an SAT question that asked students whether there was an error in "Toni Morrison's genius allows her to write novels that capture the African American condition." Supposedly the sentence is ungrammatical: you can't have "Toni Morrison's" as an antecedent to the pronoun "her." Now that is a complete myth: there was nothing wrong with the sentence.
Once a rumor about a grammatical error gets legs, it can proliferate like an urban legend about alligators in the sewers. Critics and self-appointed guardians of the language will claim that language is deteriorating because people violate the rule—which was never a rule in the first place. It's so much fun to be in high dudgeon over the decline of language and civilization that these critics don't stop to check the rulebooks and dictionaries to discover how great writers write or to learn the logic of the English language.
Poets and novelists often have a better feel for the language than the self-appointed guardians and the pop grammarians because for them language is a medium. It's a way of conveying ideas and moods with sounds. The most gifted writers—the Virginia Woolfs and H.G. Wellses and George Bernard Shaws and Herman Melvilles—routinely used words and constructions that the guardians insist are incorrect. And of course avant-garde writers such as Burroughs and Kerouac, and poets pushing the envelope or expanding the expressive possibilities of the language, will deliberately flout even the genuine rules that most people obey. But even non-avant-garde writers, writers in the traditional canon, write in ways that would be condemned as grammatical errors by many of the purists, sticklers and mavens.
Another bit of psychology that can make anyone a better writer is to be aware of a phenomenon sometimes called The Curse of Knowledge. It goes by many names, and many psychologists have rediscovered versions of it, including defective Theory of Mind, egocentrism, hindsight bias, and false consensus. They're all versions of an infirmity afflicting every member of our species, namely that it's hard to imagine what it's like not to know something that you do know.
It's easiest to see it in children. In one famous experiment, a kid comes into a room, opens a box of candy, finds pencils inside, and is surprised. Then you say to him, "Now Jason's going to come into the room. What does he think is in the box?" And the child will say "pencils." Of course, Jason has no way of knowing that the box had pencils, but the first child is projecting his own state of knowledge onto Jason, forgetting that other people may not know what he knows.
Now we laugh at the kids, but it's true of all of us. We as writers often use technical terms, abbreviations, assumptions about typical experimental methods, assumptions about what questions we ask in our research, that our readers have no way of knowing because they haven't been through the same training that we have. Overcoming the curse of knowledge may be the single most important requirement in becoming a clear writer.
Contrary to the common accusation that academic writing is bad because professors are trying to bamboozle their audience with highfalutin gobbledygook, I don't think that most bad prose is deliberate. I think it is inept. It is a failure to get inside the head of your reader. We also know from psychology that simply trying harder to get inside the head of your reader is not the ideal way to do it. No matter how hard we try, we're at best okay, but not great, at anticipating another person's state of knowledge.
Instead, you have to ask. You've got to show people a draft. Even if you're writing for laypeople, your reviewers don't all have to be laypeople; a colleague is better than no one. I'm often astonished at things that I think are obvious that turn out to be not so obvious to other people.
Another implication of the curse of knowledge is that having an editor is a really good thing. Supposedly there are writers who can dash off a perfectly comprehensible, clear, and coherent essay without getting feedback from a typical reader, but most of us don't have that clairvoyance. We need someone to say "I don't understand this" or "What the hell are you talking about?" To say nothing of attention to the fine points of punctuation, grammar, sentence structure, and other ways in which a sophisticated copy editor can add value to your written work.
How much of this advice comes from my experience as a writer and how much from my knowledge as a psycholinguist? Some of each. I often reflect on the psychology behind the thousands of decisions I make as a writer in the lifelong effort to improve my prose, and I often think about how to apply experiments on sentence comprehension and the history of words and the logic (and illogic) of grammar to the task of writing. I might think, "Aha, the reason I rewrote this sentence that way is the memory demands of subject versus object relative clauses."
This combination of science and letters is emblematic of what I hope to be the larger trend we spoke of earlier, namely the application of science, particularly psychology and cognitive science, to the traditional domains of the humanities. There's no aspect of human communication and cultural creation that can't benefit from a greater application of psychology and the other sciences of mind. We would have an exciting addition to literary studies, for example, if literary critics knew more about linguistics. Poetry analysts could apply phonology (the study of sound structure) and the cognitive psychology of metaphor. An analysis of plot in fiction could benefit from a greater understanding of the conflicts and confluences of ultimate interests in human social relationships. The genre of biography would be deepened by an understanding of the nature of human memory, particularly autobiographical memory. How much of the memory of our childhood is confabulated? Memory scientists have a lot to say about that. How much do we polish our image of ourselves in describing ourselves to others, and more importantly, in recollecting our own histories? Do we edit our memories in an Orwellian manner to make ourselves more coherent in retrospect? Syntax and semantics are relevant as well. How does a writer use the tense system of English to convey a sense of immediacy or historical distance?
In music, the sciences of auditory and speech perception have much to contribute to understanding how musicians accomplish their effects. The visual arts could revive an old method of analysis going back to Ernst Gombrich and Rudolf Arnheim, in collaboration with the psychologist Richard Gregory. Indeed, even the art itself in the 1920s was influenced by psychology, thanks in part to Gertrude Stein, who as an undergraduate student of William James did a wonderful thesis on divided attention, and then went to Paris and brought the psychology of perception to the attention of artists like Picasso and Braque. Gestalt psychology may have influenced Paul Klee and the expressionists. Since then we have lost that wonderful synergy between the science of visual perception and the creation of visual art.
Going beyond the arts, the social sciences, such as political science, could benefit from a greater understanding of human moral and social instincts, such as the psychology of dominance, the psychology of revenge and forgiveness, and the psychology of gratitude and social competition. All of them are relevant, for example, to international negotiations. We talk about one country being friendly to another or allying or competing, but countries themselves don't have feelings. It's the elites and leaders who do, and a lot of international politics is driven by the psychology of its leaders.
Even beyond applying the findings of psychology and cognitive science and social and affective neuroscience, it's the mindset of science that ought to be exported to cultural and intellectual life as a whole. That consists in increased skepticism and scrutiny about factual conventional wisdom: How much of what you think is true really is true if you go to the numbers? For me this has been a salient issue in analyzing violence, because the conventional wisdom is that we're living in extraordinarily violent times.
But if you take into account the psychology of risk perception, as pioneered by Daniel Kahneman, Amos Tversky, Paul Slovic, Gerd Gigerenzer, and others, you realize that the conventional wisdom is systematically distorted by the source of our information about the world, namely the news. News is about the stuff that happens; it's not about the stuff that doesn't happen. Human risk perception is affected by memorable examples, according to Tversky and Kahneman's availability heuristic. No matter what the rate of violence is objectively, there are always enough examples to fill the news. And since our perception of risk is influenced by memorable examples, we'll always think we're living in violent times. It's only when you apply the scientific mindset to world events, to political science and history, and try to count how many people are killed now as opposed to ten years ago, a hundred years ago, or a thousand years ago that you get an accurate picture about the state of the world and the direction that it's going, which is largely downward. That conclusion only came from applying an empirical mindset to the traditional subject matter of history and political science.
The other aspect of the scientific mindset that ought to be exported to the rest of intellectual life is the search for explanations. That is, not to just say that history is one damn thing after another, that stuff happens, and there's nothing we can do to explain why, but to relate phenomena to more basic or general phenomena … and to try to explain those phenomena with still more basic phenomena. We've repeatedly seen that happen in the sciences, where, for example, biological phenomena were explained in part at the level of molecules, which were explained by chemistry, which was explained by physics.
There's no reason that this process of explanation can't continue. Biology gives us a grasp of the brain, and human nature is a product of the organization of the brain, and societies unfold as they do because they consist of brains interacting with other brains and negotiating arrangements to coordinate their behavior, and so on.
Now I know that there is tremendous resistance to this idea, because it's confused with a boogeyman called "reductionism"—the fear that we must explain World War I in terms of genes or even elementary particles.
But explanation does not imply reduction. You reduce the building blocks of an explanation to more basic phenomena one level down, but you don't discard the explanation of the phenomenon itself. So World War I obviously is not going to be explained in terms of neuroscience. On the other hand, World War I could be explained in terms of the emotions of fear and dominance and prestige among leaders, which fell into a deadly combination at that moment in history. And instead of just saying, "Well, that's the way things are, and there's nothing more we can say about it," we can ask, "Why do people compete for prestige? Why do people have the kinds of fears that they do?"
The answer doesn't have to be, "Because I said so" or "Because that's the way it is." You can ask, "How does the psychology of fear work? How does the psychology of dominance work? How does the psychology of coalitions work?" Having done that, you get a deeper understanding of some of the causes of World War I. That doesn't mean you throw out the conventional history of World War I; it just means that you enrich it, you diversify it, you deepen it. A program of unifying the arts and humanities with the psychological sciences and ultimately the biological sciences promises tremendous increases in depth of understanding for all the fields.
I'm often asked, "Who are the leaders of this movement? Whose writings should we be reading and discussing?" But that misses the point. It's not about individual people. It's more revolutionary than just reading this, that, or the other person. There has to be a change in mindset coming from both directions. It's not just a question of getting traditional scholars from the humanities and social sciences to start incorporating more science, to start thinking more like scientists. It's got to work in the other direction as well. A lot of scientists really are philistines when it comes to history and political theory and philosophy. We need to break down the idea that there are these separate disciplines and modes of study.
In trying to figure out what would give us the deepest, most insightful, most informative understanding of the world and ourselves, we have to be aware of the turf battles: who gets the franchise for talking about what matters. That is one reason that there is a cadre of traditional intellectuals who have been hostile to science. I'm not talking about the climate deniers or the vaccine kooks, but those who resent the idea that the discussion of what matters, of morality, of politics, of meaning, of purpose should be taken on by these philistines called scientists or social scientists. They act as if the franchise for these heavyweight topics has been given to critics and literary scholars and commentators on religion.
But we need not give credence to people who are simply protecting their turf. It's becoming increasingly clear over the decades and centuries that an understanding of science is central to our understanding of the deepest questions of who we are, where we came from, what matters. If you aren't aware of what science has to say about who we are and what we're like as a species, then you're going to be missing a lot of insight about human life. The fact that this upsets certain traditional bastions of commentary shouldn't matter. People always protect their turf.
That's why I'm reluctant to answer when I'm asked who are the people we should be reading, what names can we associate with this approach. It's not about people. It's about the ideas, and the ideas inevitably come piecemeal from many thinkers. The ideas are refined, exchanged, accumulated, and improved by a community of thinkers, each of whom will have a few good ideas and a lot of bad ideas. What we've been talking about is a direction that I hope the entire intellectual culture goes in. It's not about anointing some guru.
Another intellectual error we must be suspicious of is the ever-present tendency to demonize the younger generation and the direction in which culture and society are going. In every era there are commentators who say that the kids today are dumbing down the culture and taking human values with them. Today the accusations are often directed at anything having to do with the Web and other electronic technologies—as if the difference between being printed on dead trees and displayed as pixels on a screen is going to determine the content of ideas. We're always being told that young people suck: that they are illiterate and unreflective and unthoughtful, all of which ignores the fact that every generation has had the same said about it by its elders. Yet somehow civilization persists.
An appreciation of psychology can remind us that we as a species are prone to these bad habits. When we comment on the direction that intellectual life is going, we should learn to discount our own prejudices, our own natural inclination to say "I and my tribe are entitled to weigh in on profound issues, but members of some other guild or tribe or clique are not," and "My generation is the embodiment of wisdom and experience, and the younger generation is uncouth, illiterate, unwashed and uncivilized."
There is no "conflict between the sciences and humanities," or at least there shouldn't be. There should be no turf battle as to who gets to speak about what matters. What matters are ideas. We should seek the ideas that give us the deepest, richest, best-informed understanding of the human condition, regardless of which people or what discipline originates them. That has to include the sciences, but it can't come only from the sciences. The focus should be on ideas, not on people, disciplines, or academic traditions.