Showing posts with label cognitive skills. Show all posts

Thursday, April 24, 2014

Cognitive Skills Decline from the Age of 24, Especially on StarCraft 2


Okay, I admit when I saw this headline, my first thought was, "Well, sh!t, that was nearly half a lifetime ago. I'm screwed." Fortunately, I know better than to trust headlines (which is why I changed it for the title of this post). The study is based on ability to play a video game called StarCraft 2.

The younger the players, the better their performance on five specific measures:
  • Looking-doing latency (similar to reaction time)
  • Dual-task performance
  • Total reported hours of StarCraft 2 experience
  • Effective use of hotkeys
  • Effective management of view-screens/maps
The older players, however, adapted to their reaction time limitations and remained competitive.

"Older players, though slower, seem to compensate by employing simpler strategies and using the game's interface more efficiently than younger players, enabling them to retain their skill, despite cognitive motor-speed loss."
So maybe I am over the hill for video game play, but that's cool. I would not trade the experience and wisdom I have now for youth for any amount of money.

Our cognitive skills decline from the age of 24, but there is hope

Saturday 19 April 2014 
Written by David McNamee 
  If you are an adult who has ever been told by a partner or colleague that you are "too old to be playing video games," then they may well have a point. A new study - using a video game as a test - has found that people over the age of 24 are past their peak in terms of cognitive motor performance.

Generally, the researchers behind the new study observe, people tend to think of middle age as being around 45 years of age - around the time when age-related declines in cognitive-motor functioning become obvious.

But there is evidence that our memory and speed relating to cognitive tasks peak much earlier in our lives.

However, data on this is limited because most scientific studies examining the relationship of cognitive motor performance and aging focus on elderly populations, rather than when the decline in performance actually begins.

The authors note that some researchers have investigated the origins of cognitive motor performance decline but have only used simple reaction time tasks to measure performance. 
The new study - carried out by two doctoral students from Simon Fraser University in Burnaby, Canada, and their thesis supervisor - is built around a large-scale social science experiment involving the real-time space-faring strategy game StarCraft 2.

The data for the study came from the researchers replaying and analyzing 870 hours of gameplay from 3,305 StarCraft 2 players aged between 16 and 44.



How can StarCraft 2 be used to measure cognitive motor performance?


In the game, players have to successfully manage their civilization's economy and military growth, with the objective of tactically defeating their opponent's army.

All aspects of gameplay occur in real time, so the player is required to make a large number of adjustments continuously, and they must carefully make decisions and develop overall strategies in a manner that the researchers compare to chess or managing an emergency.

Attention to detail and fast reaction time are both important components of successful gameplay.





The researchers analyzed the following variables of gameplay:

  • Looking-doing latency (similar to reaction time)
  • Dual-task performance
  • Total reported hours of StarCraft 2 experience
  • Effective use of hotkeys
  • Effective management of view-screens/maps
Complex statistical modeling then allowed the researchers to arrive at meaningful results relating to the players' game behaviors and response time.
"After around 24 years of age, players show slowing in a measure of cognitive speed that is known to be important for performance," reveals lead author and doctoral student Joe Thompson. "This cognitive performance decline is present even at higher levels of skill." 
But there is hope yet for you older gamers. Because - parallel to the cognitive performance decline in the over-24 year olds - Thompson and his colleagues noticed the older players adapting naturally to their cognitive disadvantages.

"Our research tells a new story about human development," claims Thompson.

"Older players, though slower, seem to compensate by employing simpler strategies and using the game's interface more efficiently than younger players, enabling them to retain their skill, despite cognitive motor-speed loss."
By efficiently manipulating the use of hotkeys and multiple screens, the older players were able to make up for their delayed speed in executing real-time commands.

"Our cognitive-motor capacities are not stable across our adulthood," suggests Thompson, "but are constantly in flux." He considers that the results of this study - his doctoral thesis, which is published in PLOS ONE - demonstrate how "our day-to-day performance is a result of the constant interplay between change and adaptation."

In January, Medical News Today reported on a study that linked slow reaction time to risk of early death.


Full Citation:
Thompson, JJ, Blair, MR, & Henrey, AJ. (2014, Apr 9). Over the Hill at 24: Persistent Age-Related Cognitive-Motor Decline in Reaction Times in an Ecologically Valid Video Game Task Begins in Early Adulthood. PLOS ONE. DOI: 10.1371/journal.pone.0094215

Here is the abstract to the study (you can read the whole study by following the link below):

Over the Hill at 24: Persistent Age-Related Cognitive-Motor Decline in Reaction Times in an Ecologically Valid Video Game Task Begins in Early Adulthood

Joseph J. Thompson, Mark R. Blair, Andrew J. Henrey

Published: April 09, 2014
DOI: 10.1371/journal.pone.0094215

Abstract

Typically, studies of the effects of aging on cognitive-motor performance emphasize changes in elderly populations. Although some research is directly concerned with when age-related decline actually begins, studies are often based on relatively simple reaction time tasks, making it impossible to gauge the impact of experience in compensating for this decline in a real world task. The present study investigates age-related changes in cognitive motor performance through adolescence and adulthood in a complex real world task, the real-time strategy video game StarCraft 2. In this paper we analyze the influence of age on performance using a dataset of 3,305 players, aged 16-44, collected by Thompson, Blair, Chen & Henrey [1]. Using a piecewise regression analysis, we find that age-related slowing of within-game, self-initiated response times begins at 24 years of age. We find no evidence for the common belief that expertise should attenuate domain-specific cognitive decline. Domain-specific response time declines appear to persist regardless of skill level. A second analysis of dual-task performance finds no evidence of a corresponding age-related decline. Finally, an exploratory analysis of other age-related differences suggests that older participants may have been compensating for a loss in response speed through the use of game mechanics that reduce cognitive load.
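The piecewise regression the abstract mentions is easy to illustrate. The sketch below is not the authors' code: it fits a hypothetical two-segment model (flat before a breakpoint, linear decline after) to invented "latency" data by grid-searching candidate breakpoints with ordinary least squares.

```python
import numpy as np

def fit_breakpoint(x, y, candidates):
    """Fit y ~ b0 + b1*x + b2*max(x - bp, 0) for each candidate
    breakpoint bp and keep the fit with the smallest squared error."""
    best_sse, best_bp, best_coef = np.inf, None, None
    for bp in candidates:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - bp, 0, None)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ coef) ** 2)
        if sse < best_sse:
            best_sse, best_bp, best_coef = sse, bp, coef
    return best_bp, best_coef

# Synthetic looking-doing latencies with a true breakpoint at age 24;
# all numbers here are invented for illustration, not the study's data.
rng = np.random.default_rng(0)
age = rng.uniform(16, 44, size=500)
latency = 300 + 4.0 * np.clip(age - 24, 0, None) + rng.normal(0, 5, size=500)

bp, coef = fit_breakpoint(age, latency, candidates=np.arange(18, 40))
print(bp)  # the recovered breakpoint should land near 24
```

With enough players per age bin, the grid search recovers the age at which the slope changes; the published analysis works on the same principle at much larger scale.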

Wednesday, February 15, 2012

Body Quirks Affect Our Thinking in Predictable Ways


I have long believed that bodies shape minds - my original premise was that an unhealthy body reflects or embodies an unhealthy mind. For example, there is a growing stack of studies that show obesity impacts cognitive abilities for the worse. There is also new evidence that having cosmetic surgery is linked to declines in mental health status. These may be chicken and egg issues, but they demonstrate that body and mind are incredibly interlinked.

A new study published in Current Directions in Psychological Science, a journal of the Association for Psychological Science, looks at how handedness shapes cognitive skills and functions. In a sense, this is an extension of the old idea that left-handed people think differently than right-handers.

We like to think of ourselves as rational creatures, absorbing information, weighing it carefully, and making thoughtful decisions. But, as it turns out, we’re kidding ourselves. Over the past few decades, scientists have shown there are many different internal and external factors influencing how we think, feel, communicate, and make decisions at any given moment.

One particularly powerful influence may be our own bodies, according to new research reviewed in the December issue of Current Directions in Psychological Science, a journal of the Association for Psychological Science.

Cognitive scientist Daniel Casasanto, of The New School for Social Research, has shown that quirks of our bodies affect our thinking in predictable ways, across many different areas of life, from language to mental imagery to emotion.

People come in all different shapes and sizes, and people with different kinds of bodies think differently — an idea Casasanto has termed the ‘body-specificity hypothesis.’

One way our bodies appear to shape our decision-making is through handedness. Casasanto and his colleagues explored whether being right-handed or left-handed might influence our judgments about abstract ideas like value, intelligence, and honesty.

Through a series of experiments, they found that, in general, people tend to prefer the things that they encounter on the same side as their dominant hand. When participants were asked which of two products to buy, which of two job applicants to hire, or which of two alien creatures looked more trustworthy, right-handers routinely chose the product, person, or creature they saw on the right side of the page, while left-handers preferred the one on the left. These kinds of preferences have been found in children as young as 5 years old.

But why should our handedness matter when it comes to making such abstract evaluations? It all comes down to fluency, according to Casasanto. “People like things better when they are easier to perceive and interact with,” he says. Right-handers interact with their environment more easily on the right than on the left, so they come to associate ‘good’ with ‘right’ and ‘bad’ with ‘left.’

This preference for things on our dominant side isn’t set in stone. Right-handers who’ve had their right hands permanently handicapped start to associate ‘good’ with ‘left.’ The same goes for righties whose ‘good’ hand is temporarily handicapped in the laboratory, Casasanto and colleagues found. “After a few minutes of fumbling with their right hand, righties start to think like lefties,” says Casasanto. “If you change people’s bodies, you change their minds.”

It’s clear that this association has implications beyond the laboratory. The body-specificity hypothesis may even play a role in voting behavior – Casasanto points out that many states still use butterfly ballots, with candidates’ names listed on the left and right.

“Since about 90 percent of the population is right-handed,” says Casasanto, “people who want to attract customers, sell products, or get votes should consider that the right side of a page or a computer screen might be the ‘right’ place to be.”
 

Sunday, October 02, 2011

Jonah Lehrer - Every Child Is A Scientist


This is a cool article from Jonah Lehrer (The Frontal Cortex) posted at Wired Science. The research Lehrer is discussing seeks to understand causal reasoning in young children (mean age of 54 months). Below the excerpt from his post (which serves as a good overview) is the link to the original research article and a couple of paragraphs from the introduction. I have also included links to a couple of books by Alison Gopnik, one of the leading researchers in this field.

Every Child Is A Scientist

Pablo Picasso once declared: “Every child is an artist. The problem is how to remain an artist once we grow up.” Well, something similar can be said about scientists. According to a new study in Cognition led by Claire Cook at MIT, every child is a natural scientist. The problem is how to remain a scientist once we grow up.
The psychologists conducted their experiments on four and five-year-olds, so they had to be pretty simple. Sixty kids were shown a boxy toy that played music when beads were placed on it. Half of the children saw a version of the toy in which the toy was only activated after four beads were exactingly placed, one at a time, on the top of the toy. This was the “unambiguous condition,” since it implied every bead is equally capable of activating the device. However, other children were randomly assigned to an “ambiguous condition,” in which only two of the four beads activated the toy. (The other two beads did nothing.) In both conditions, the researchers ended their demo with a question: “Wow, look at that. I wonder what makes the machine go?”
Next came the exploratory phase of the study. The children were given two pairs of new beads. One of the pairs was fixed together permanently. The other pair could be snapped apart. They had one minute to play. 
Here’s where the ambiguity made all the difference.
Read the whole article.

The original research article is available online.
Where science starts: Spontaneous experiments in preschoolers’ exploratory play
Claire Cook, Noah D. Goodman, Laura E. Schulz


Abstract
Probabilistic models of expected information gain require integrating prior knowledge about causal hypotheses with knowledge about possible actions that might generate data relevant to those hypotheses. Here we looked at whether preschoolers (mean: 54 months) recognize “action possibilities” (affordances) in the environment that allow them to isolate variables when there is information to be gained. By manipulating the physical properties of the stimuli, we were able to affect the degree to which candidate variables could be isolated; by manipulating the base rate of candidate causes, we were able to affect the potential for information gain. Children’s exploratory play was sensitive to both manipulations: given unambiguous evidence, children played indiscriminately and rarely tried to isolate candidate causes; given ambiguous evidence, children both selected (Experiment 1) and designed (Experiment 2) informative interventions.
Here are a few paragraphs from the introduction that look at the history of causal reasoning in children.
The “child as scientist” account would seem to predict that an additional functional feature of theories – the ability to support informative exploration – should also emerge in early childhood. However, evidence for this seemingly fundamental point of comparison between science and cognitive development, the dynamic by which new knowledge is acquired, has been strikingly mixed. Indeed, education research looking at the relationship between self-guided exploration and science learning has found evidence against the claim that children “learn by doing.” Studies suggest that students have a poor metacognitive understanding of principles of experimental design, difficulty designing controlled interventions, and difficulty anticipating the type of evidence that would support or undermine causal hypotheses (Inhelder & Piaget, 1958; Klahr & Nigam, 2004; Kuhn, 1989; Kuhn, Amsel, & O’Laughlin, 1988; Koslowski, 1996; Masnick & Klahr, 2003).


Research in science education, however, typically investigates students’ understanding of real world phenomena (e.g., density, balance relations, etc.). In such contexts, children’s reliance on domain-specific prior beliefs may mask their formal reasoning abilities (Koslowski, 1996; Kuhn, 1989; Kushnir & Gopnik, 2005; Schulz, Bonawitz, & Griffiths, 2007; Schulz & Gopnik, 2004; Sobel & Munro, 2009). Additionally, students are often tested on relatively complex, multivariate problems (e.g., Kuhn, 1989; Masnick & Klahr, 2003). Such problems are appropriate for investigating factors that could affect classroom performance but may underestimate children’s causal reasoning in simpler contexts.


Developmental studies provide stronger grounds for optimism about children’s ability to design informative interventions. Work in fields ranging from perception to motor learning to industrial design (e.g., Adolph, Eppler, & Gibson, 1993; Berger, Adolph, & Lobo, 2005; Brown, 1990; Lockman, 2000; Norman, 1988, 1999) suggests that learners discover action possibilities or affordances (Gibson, 1977) in the environment through exploration. Research suggests, for instance, that toddlers inspect the length and ends of rakes when they need a tool to reach a distant object (Brown, 1990), and the rigidity of handrails when they need to cross narrow bridges (Berger et al., 2005). Similarly, when access to a toy or food is obstructed, toddlers, non-human primates, and even corvids can perform novel interventions to gain information and achieve their goals (Brauer, Kaminski, Reidel, Call, & Tomasello, 2006; Emery & Clayton, 2004; Hood, Carey, & Prasada, 2000; Mendes, Hanus, & Call, 2007; Stulp, Emery, Verhulst, & Clayton, 2009). However, children can learn object functions without designing experiments; the ability to intervene on physical features of the environment to gain information does not necessarily entail the ability to intervene when information is unknown because of formal properties of the evidence (e.g., because causal variables are confounded).


The strongest evidence that children may understand some formal principles underlying experimental design comes from research looking at children’s causal reasoning. Studies suggest, for instance, that preschoolers understand patterns of co-variation well enough to distinguish genuine causes from spurious associations: if two variables together generate an effect but only one variable generates the effect independently, children conclude that the other variable is not a cause (Gopnik, Sobel, Schulz, & Glymour, 2001; Kushnir & Gopnik, 2005, 2007; Schulz & Gopnik, 2004). Children’s causal judgments are also sensitive to the base rate of candidate causes. When the status of a causal variable is ambiguous, preschoolers are more likely to believe it is causal when causes are common than when they are rare (Sobel, Tenenbaum, & Gopnik, 2004). Moreover, preschoolers can draw accurate inferences not only from observed evidence but also from evidence they generate (by chance) in exploratory play (Schulz, Gopnik, & Glymour, 2007). Finally, two recent studies (Gweon & Schulz, 2008; Schulz & Bonawitz, 2007) suggest that children’s exploratory play is affected by the ambiguity of the evidence they observe; given confounded or un-confounded evidence about which of two variables controls which of two effects, preschoolers selectively explore confounded evidence. Critically, however, selective exploration of confounded evidence is advantageous even if children explore randomly (with no understanding of how to isolate variables): the more different actions children perform, the better their odds of generating informative data.
One of the authors the paper cites is Alison Gopnik. A couple of her books on this topic include Causal Learning: Psychology, Philosophy, and Computation by Alison Gopnik and Laura Schulz (more academic) and The Scientist in the Crib: What Early Learning Tells Us About the Mind by Alison Gopnik, Andrew N. Meltzoff, and Patricia K. Kuhl (for more mainstream audiences).


Sunday, August 28, 2011

Young brains lack the wisdom of their elders


The old belief that we become wiser in our old age appears to be true, at least in terms of how we allocate effort and energy in the brain. Essentially, we are better able to integrate experience into brain function and problem solving.


Clinical study shows young brains lack the wisdom of their elders

August 25, 2011



Language task reveals brains of older people are not slower but rather wiser than young brains, allowing older adults to achieve an equivalent level of performance.
The brains of older people are not slower but rather wiser than young brains, which allows older adults to achieve an equivalent level of performance, according to research undertaken at the University Geriatrics Institute of Montreal by Dr. Oury Monchi and Dr. Ruben Martins of the University of Montreal.
"The older brain has experience and knows that nothing is gained by jumping the gun. It was already known that aging is not necessarily associated with a significant loss in cognitive function. When it comes to certain tasks, the brains of older adults can achieve very close to the same performance as those of younger ones," explained Dr. Monchi. "We now have neurobiological evidence showing that with age comes wisdom and that as the brain gets older, it learns to better allocate its resources. Overall, our study shows that Aesop's fable about the tortoise and the hare was on the money: being able to run fast does not always win the race—you have to know how to best use your abilities. This adage is a defining characteristic of aging."
The original goal of the study was to explore the brain regions and pathways that are involved in the planning and execution of language pairing tasks. In particular, the researchers were interested in knowing what happened when the rules of the task changed part way through the exercise. For this test, participants were asked to pair words according to different lexical rules, including semantic category (animal, object, etc.), rhyme, or the beginning of the word (attack). The matching rules changed multiple times throughout the task without the participants knowing. For example, if the person figured out that the words fell under the same semantic category, the rule was changed so that they were required to pair the words according to rhyme instead.
"Funny enough, the young brain is more reactive to negative reinforcement than the older one. When the young participants made a mistake and had to plan and execute a new strategy to get the right answer, various parts of their brains were recruited even before the next task began. However, when the older participants learned that they had made a mistake, these regions were only recruited at the beginning of the next trial, indicating that with age, we decide to make adjustments only when absolutely necessary. It is as though the older brain is more impervious to criticism and more confident than the young brain," stated Dr. Monchi.
Provided by the University of Montreal

Thursday, April 07, 2011

RSA Keynote - Barbara Strauch: The Secret Life of the Grown-Up Brain


Cool keynote lecture from RSA Events - featuring New York Times' health and science editor Barbara Strauch, author of The Secret Life of the Grown-up Brain: The Surprising Talents of the Middle-Aged Mind.

The Secret Life of the Grown-Up Brain

6th Apr 2011; 13:00


RSA Keynote

For many years, scientists thought that the human brain simply decayed over time and its dying cells led to memory slips, fuzzy logic, negative thinking, and even depression.

But new research from neuroscientists and psychologists suggests that, in fact, the brain reorganises, improves in important functions, and even helps us adopt a more optimistic outlook in middle age. Growth of white matter and brain connectors allow us to recognize patterns faster, make better judgments, and find unique solutions to problems.

Scientists call these traits cognitive expertise and they reach their highest levels in middle age.

Join the New York Times' health and science editor Barbara Strauch at the RSA as she reveals the latest research that shows that the middle-aged brain is more flexible, more capable and more surprisingly talented than previously thought.

Speaker: Barbara Strauch, health and medical science editor and deputy science editor at The New York Times.


Saturday, January 29, 2011

The Economist - The rise and rise of the cognitive elite

Excellent article from The Economist - see the bottom for additional articles on their series on global leaders.

The rise and rise of the cognitive elite
Brains bring ever larger rewards
A special report on global leaders

Jan 20th 2011 | from PRINT EDITION

WHEN the financial crisis struck, says a prominent banker, the women he knows stopped wearing jewellery. “It wasn’t just that they were self-conscious about the ostentation. It was because it didn’t look good to them any more.” He goes on: “There were blogs that had my name, my family’s names, my address. There were death threats. You’d think this could be some pimply kid in a basement, but John Lennon met some pimply kid from a basement. And the kid shot him.”

The crash sparked a wave of public ire against financiers, and against rich people in general. It also intensified the debate about inequality, which has risen sharply in nearly all rich countries. In America, for example, in 1987 the top 1% of taxpayers received 12.3% of all pre-tax income. Twenty years later their share, at 23.5%, was nearly twice as large. The bottom half’s share fell from 15.6% to 12.2% over the same period.

They don’t do Dior here

Jan Pen, a Dutch economist who died last year, came up with a striking way to picture inequality. Imagine people’s height being proportional to their income, so that someone with an average income is of average height. Now imagine that the entire adult population of America is walking past you in a single hour, in ascending order of income.

The first passers-by, the owners of loss-making businesses, are invisible: their heads are below ground. Then come the jobless and the working poor, who are midgets. After half an hour the strollers are still only waist-high, since America’s median income is only half the mean. It takes nearly 45 minutes before normal-sized people appear. But then, in the final minutes, giants thunder by. With six minutes to go they are 12 feet tall. When the 400 highest earners walk by, right at the end, each is more than two miles tall.
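Pen's parade is easy to reproduce numerically. The sketch below is illustrative only: it assumes a lognormal income distribution calibrated so the mean is twice the median (matching the article's claim about the US), with an arbitrary $50,000 median, and asks at what minute of the hour-long parade the first average-height (average-income) walker appears.

```python
import numpy as np

rng = np.random.default_rng(1)

# For a lognormal, mean/median = exp(sigma**2 / 2); choose sigma so
# the mean is twice the median. The $50,000 median is an invented figure.
sigma = np.sqrt(2 * np.log(2))
incomes = np.sort(rng.lognormal(mean=np.log(50_000), sigma=sigma, size=100_000))

mean = incomes.mean()

# Walker "heights": income scaled so mean income = average height (1.70 m)
heights = 1.70 * incomes / mean

# Minute of the hour at which the first average-height walker passes
minute = 60 * np.searchsorted(incomes, mean) / incomes.size
print(minute)  # roughly the article's "nearly 45 minutes" under these assumptions
```

Because the distribution is so right-skewed, well over two thirds of the walkers are below average height, which is why normal-sized people appear only in the last quarter of the hour.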

The most common measure of inequality is the Gini coefficient. A score of zero means perfect equality: everyone earns the same. A score of one means that one person gets everything. America’s Gini coefficient has risen from 0.34 in the 1980s to 0.38 in the mid-2000s. Germany’s has risen from 0.26 to 0.3 and China’s has jumped from 0.28 to 0.4 (see chart 2). In only one large country, Brazil, has the coefficient come down, from 0.59 to 0.55.
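The Gini coefficient defined above can be computed directly from a list of incomes. This is a minimal sketch of the standard formula (half the relative mean absolute difference, via a sorted cumulative-sum identity), not tied to the article's data sources:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient: 0 means everyone earns the same; it approaches 1
    (exactly (n-1)/n for n people) when one person gets everything."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print(gini([10, 10, 10, 10]))  # 0.0  (perfect equality)
print(gini([0, 0, 0, 100]))    # 0.75 (the maximum for n = 4)
```

Fed with real income microdata, the same function yields figures on the 0-to-1 scale the article quotes, such as America's 0.38 or Brazil's 0.55.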

Surprisingly, over the same period global inequality has fallen, from 0.66 in the mid-1980s to 0.61 in the mid-2000s, according to Xavier Sala-i-Martin, an economist at Columbia University. This is because poorer countries, such as China, have grown faster than richer countries.

How much does inequality matter? A lot, say Richard Wilkinson and Kate Pickett, the authors of “The Spirit Level: Why Equality is Better for Everyone”. Their book caused a stir in Britain by showing, with copious graphs and statistics, that inequality is associated with all manner of social ills. After comparing various unequal countries and American states with more equal ones, the authors concluded that greater inequality leads to more crime, higher infant mortality, fatter citizens, shorter lives, more teenage pregnancies, more discrimination against women and so on. They even found that more equal countries are more innovative, as measured by patents earned per person.

Mr Wilkinson and Ms Pickett suggest that equal societies fare better because humans evolved in small groups of hunter-gatherers who shared food. Modern, unequal societies are hugely stressful because they violate people’s hard-wired sense of fairness. The authors call for stiffer taxes on the rich and more co-operative ownership of companies. Pundits on the left applaud, but others are not so sure.

Peter Saunders of Policy Exchange, a centre-right think-tank in London, thinks the book’s statistical claims are mostly bunk. He points to several flaws. First, Mr Wilkinson and Ms Pickett did not exclude outliers from their sample. So, for example, when they say that unequal countries have higher murder rates than equal ones, all they have really observed is that Americans kill each other much more often than do people in other rich countries, perhaps because they are better armed. For the rest of the sample the link between inequality and homicide does not hold.

Likewise, their findings about life expectancy depend on the Japanese, whose longevity is more likely to be due to a healthy diet than to a flat income distribution. And their findings about teen births, women’s status and innovation depend on Scandinavia, a region with a mild and sensible culture that is equally evident among people of Scandinavian stock who live in America.

Factors other than inequality are often more strongly correlated with the problems described in the book. In American states, for example, race is a far more accurate predictor of murder, imprisonment and infant-mortality rates, says Mr Saunders. He also chides the authors for ignoring countries that do not fit their theory, and for glossing over social problems, such as divorce and suicide, that are worse in more equal countries.

This debate will probably never be resolved. The statistical problems are tricky enough. If you measure inequality of wealth rather than income, the global pecking order changes. By this measure, Sweden is less equal than Britain, since fewer Swedes have private pensions. And if you measure consumption, the world seems a more equal place. The poor in rich countries often consume more than they earn, because they receive welfare benefits and use public services. The very rich often consume only a small portion of their income. Bill Gates is millions of times richer than the average person, but he does not eat millions of meals each day.

The philosophical questions are even trickier. It seems unfair that footballers, bankers and tycoons earn more money than they know what to do with whereas jobless folk and single parents struggle to pay the rent, notes Mr Saunders. Yet it also seems unfair to take money from those who have worked hard and give it to those who have not, or to take away the profits of those who have risked their life savings to bring a new invention to market in order to help those who have risked nothing. Different societies choose to deal with this conflict in different ways.

It is hard to gauge just how strongly people object to inequality. A recent poll by the BBC, a tax-funded broadcaster, found that many people in Britain think cashiers and care assistants should be paid more and chief executives and football stars less. Yet few Britons tip cashiers, boycott firms with fat-cat bosses or watch second-division football teams.

The Pew Global Attitudes Project asks people in various countries whether in their view “most people are better off in a free-market economy, even though some people are rich and some are poor.” In Britain, France, Germany, Poland, America and even Sweden most people agree, but in Japan and Mexico most disagree. People in countries that have recently liberalised and are now booming are the most enthusiastic: 79% of Indians and 84% of Chinese say yes.

Degrees of fairness

Inequality jars less if the rich have earned their fortunes. Steve Jobs is a billionaire because people love Apple’s products; J.K. Rowling’s vault is stuffed with gold galleons because millions have bought her Harry Potter books. But people are more resentful when bankers are rewarded for failure, or when fortunes are made by rent-seeking rather than enterprise.

In the most corrupt countries the rulers simply help themselves to public money. In mature democracies power is abused in more subtle ways. In Japan, for example, retiring bureaucrats often take lucrative jobs at firms they used to regulate, a practice known as amakudari (literally “descent from heaven”). The Kyodo news agency reported last year that all 43 past and present heads of six non-profit organisations funded by government-run lottery revenues secured their jobs this way.

In America, too, ex-politicians often walk into cushy directorships when they retire. This may be because they are talented, driven individuals. But a study by Amy Hillman of Arizona State University finds that American firms in heavily regulated industries such as telecoms, drugs or gambling hire more ex-politicians as directors than firms in lightly regulated ones.

People from humble origins sometimes rise to the top. Barack Obama was raised by a single mother. Lloyd Blankfein, the boss of Goldman Sachs, is the son of a clerk. What such people usually have in common is uncommon intelligence.

All kinds of talent are rewarded. But the number of people who get rich by singing or kicking a ball is tiny compared with the number who become wealthy or influential through brainpower. The most lucrative careers, such as law, medicine, technology and finance, all require above-average mental skills. A bond dealer need not appreciate Proust, but he must be able to do sums in his head. A lawyer need not understand “A Brief History of Time”, but she must be able to argue logically.

The clever shall inherit the earth

As technology advances, the rewards to cleverness increase. Computers have hugely increased the availability of information, raising the demand for those sharp enough to make sense of it. In 1991 the average wage for a male American worker with a bachelor's degree was 2.5 times that of a high-school dropout; now the ratio is 3. Cognitive skills are at a premium, and they are unevenly distributed.

Parents who graduated from university are far more likely than non-graduates to raise children who also earn degrees. This is true in all countries, but more so in America and France than in Israel, Finland or South Korea, according to the OECD. Nature, nurture and politics all play a part.

Children may inherit a genetic predisposition to be intelligent. Their raw mental talents may then be nurtured better in some homes than others. Bookish parents read more to their children, use a larger vocabulary when they talk to them and prod them to do their homework. Educated parents typically earn more (see chart 3), so they can afford private schools or houses near good public ones. In America, where residential segregation is extreme, the best public schools are stuffed with college-bound strivers, whereas the worst need metal detectors. School reform helps, but cannot level the playing field.

“Assortative mating” further entrenches inequality. Highly educated men are much more likely to marry highly educated women than they were a generation ago. In 1970 only 9% of those with bachelor's degrees in America were women, so the vast majority of men with such degrees married women who lacked them. Now the numbers are roughly even (in fact, women are earning more degrees) and people tend to pair up with mates of a similar educational background.

Women have made immense strides in the workplace, too. For example, in 1970, fewer than 5% of American lawyers were female. Now the figure is 34%, and nearly half of law students are female. So highly educated, double-income power couples have become far more common. The children of such couples have every advantage, but there are not many of them. The lifetime fertility rate for American high-school dropouts is 2.4; for women with advanced degrees, it is only 1.6. The opportunity costs of child-rearing are far higher for a woman who earns $200,000 a year than for one who greets customers at Wal-Mart. And raising elite children is expensive. A lawyer couple can easily afford to put one child through Yale, but perhaps not four.

The cost of higher education has contributed to plummeting birth rates among pushy parents in other rich countries, too. Greens may rejoice at anything that curbs population growth, but the implications of these trends are troubling. Demography makes it harder for people who start at the bottom of the ladder to climb up it. And that has political consequences.