Showing posts with label open access. Show all posts

Tuesday, July 01, 2014

The Internet’s Own Boy: The Story of Aaron Swartz - New Documentary Is Free Online


This story is a tragedy, in my opinion. Aaron Swartz was being made an example of for having embarrassed the government on a couple of occasions. Even the case for which charges were finally brought did not cause any financial harm to his target (JSTOR), who urged the government to drop the charges. The Feds refused - Swartz's conviction would serve as a warning. Instead, the young man hanged himself in his NYC apartment.

Here is a key passage that explains why so many of us supported Swartz's "work":
Swartz’s manifesto didn’t just call for the widespread illegal downloading and sharing of copyrighted scientific and academic material, which was already a dangerous idea. It explained why. Much of the academic research held under lock and key by large institutional publishers like Reed Elsevier had been largely funded at public expense, but was now being treated as private property – and as Swartz understood, that was just one example of a massive ideological victory for corporate interests that had penetrated almost every aspect of society. The actual data theft for which Swartz was prosecuted, the download of a large volume of journal articles from the academic database called JSTOR, was largely symbolic and arguably almost pointless. (As a Harvard graduate student at the time, Swartz was entitled to read anything on JSTOR.)
Academic publishers like Reed Elsevier, JSTOR, Science Direct, Nature, Hindawi, Springer, and others control nearly all of the published research in nearly every field, much of which is funded by tax dollars either directly or indirectly.

These publishers then charge authors hundreds [sometimes thousands] of dollars to publish, and charge more if the author wants open access; they charge for images in articles; they charge libraries hundreds of dollars for subscriptions, even digital-only subscriptions; and they try to charge consumers (like me) between $30 and $70 for access to a single article (often for just 24 hours).

Anyway, first up here is a review of the film and the life of its subject, via Salon, followed by an open access version of the film from Open Culture.

“The Internet’s Own Boy”: How the government destroyed Aaron Swartz

A film tells the story of the coder-activist who fought corporate power and corruption -- and paid a cruel price

Andrew O'Hehir



Aaron Swartz (Credit: TakePart/Noah Berger)

Brian Knappenberger’s Kickstarter-funded documentary The Internet’s Own Boy: The Story of Aaron Swartz, which premiered at Sundance barely a year after the legendary hacker, programmer and information activist took his own life in January 2013, feels like the beginning of a conversation about Swartz and his legacy rather than the final word. This week it will be released in theaters, arriving in the middle of an evolving debate about what the Internet is, whose interests it serves and how best to manage it, now that the techno-utopian dreams that sounded so great in Wired magazine circa 1996 have begun to ring distinctly hollow.

What surprised me when I wrote about “The Internet’s Own Boy” from Sundance was the snarky, dismissive and downright hostile tone struck by at least a few commenters. There was a certain dark symmetry to it, I thought at the time: A tragic story about the downfall, destruction and death of an Internet idealist calls up all of the medium’s most distasteful qualities, including its unique ability to transform all discourse into binary and ill-considered nastiness, and its empowerment of the chorus of belittlers and begrudgers collectively known as trolls. In retrospect, I think the symbolism ran even deeper. Aaron Swartz’s life and career exemplified a central conflict within Internet culture, and one whose ramifications make many denizens of the Web highly uncomfortable.

For many of its pioneers, loyalists and self-professed deep thinkers, the Internet was conceived as a digital demi-paradise, a zone of total freedom and democracy. But when it comes to specifics things get a bit dicey. Paradise for whom, exactly, and what do we mean by democracy? In one enduringly popular version of this fantasy, the Internet is the ultimate libertarian free market, a zone of perfect entrepreneurial capitalism untrammeled by any government, any regulation or any taxation. As a teenage programming prodigy with an unusually deep understanding of the Internet’s underlying architecture, Swartz certainly participated in the private-sector, junior-millionaire version of the Internet. He founded his first software company following his freshman year at Stanford, and became a partner in the development of Reddit in 2006, which was sold to Condé Nast later that year.

That libertarian vision of the Internet – and of society too, for that matter – rests on an unacknowledged contradiction, in that some form of state power or authority is presumably required to enforce private property rights, including copyrights, patents and other forms of intellectual property. Indeed, this is one of the principal contradictions embedded within our current form of capitalism, as the Marxist scholar David Harvey notes: Those who claim to venerate private property above all else actually depend on an increasingly militarized and autocratic state. And from the beginning of Swartz’s career he also partook of the alternate vision of the Internet, the one with a more anarchistic or anarcho-socialist character. When he was 15 years old he participated in the launch of Creative Commons, the immensely important content-sharing nonprofit, and at age 17 he helped design Markdown, an open-source, newbie-friendly markup format that remains in widespread use.

One can certainly construct an argument that these ideas about the character of the Internet are not fundamentally incompatible, and may coexist peaceably enough. In the physical world we have public parks and privately owned supermarkets, and we all understand that different rules (backed of course by militarized state power) govern our conduct in each space. But there is still an ideological contest between the two, and the logic of the private sector has increasingly invaded the public sphere and undermined the ancient notion of the public commons. (Former New York Mayor Rudy Giuliani once proposed that city parks should charge admission fees.) As an adult Aaron Swartz took sides in this contest, moving away from the libertarian Silicon Valley model of the Internet and toward a more radical and social conception of the meaning of freedom and equality in the digital age. It seems possible and even likely that the Guerilla Open Access Manifesto Swartz wrote in 2008, at age 21, led directly to his exaggerated federal prosecution for what was by any standard a minor hacking offense.

Swartz’s manifesto didn’t just call for the widespread illegal downloading and sharing of copyrighted scientific and academic material, which was already a dangerous idea. It explained why. Much of the academic research held under lock and key by large institutional publishers like Reed Elsevier had been largely funded at public expense, but was now being treated as private property – and as Swartz understood, that was just one example of a massive ideological victory for corporate interests that had penetrated almost every aspect of society. The actual data theft for which Swartz was prosecuted, the download of a large volume of journal articles from the academic database called JSTOR, was largely symbolic and arguably almost pointless. (As a Harvard graduate student at the time, Swartz was entitled to read anything on JSTOR.)

But the symbolism was important: Swartz posed a direct challenge to the private-sector creep that has eaten away at any notion of the public commons or the public good, whether in the digital or physical worlds, and he also sought to expose the fact that in our age state power is primarily the proxy or servant of corporate power. He had already embarrassed the government twice previously. In 2006, he downloaded and released the entire bibliographic dataset of the Library of Congress, a public document for which the library had charged an access fee. In 2008, he downloaded and released about 2.7 million federal court documents stored in the government database called PACER, which charged 8 cents a page for public records that by definition had no copyright. In both cases, law enforcement ultimately concluded Swartz had committed no crime: Dispensing public information to the public turns out to be legal, even if the government would rather you didn’t. The JSTOR case was different, and the government saw its chance (one could argue) to punish him at last.

Knappenberger could only have made this film with the cooperation of Swartz’s family, which was dealing with a devastating recent loss. In that context, it’s more than understandable that he does not inquire into the circumstances of Swartz’s suicide in “Inside Edition”-level detail. It’s impossible to know anything about Swartz’s mental condition from the outside – for example, whether he suffered from undiagnosed depressive illness – but it seems clear that he grew increasingly disheartened over the government’s insistence that he serve prison time as part of any potential plea bargain. Such an outcome would have left him a convicted felon and, he believed, would have doomed his political aspirations; one can speculate that was the point. Carmen Ortiz, the U.S. attorney for Boston, along with her deputy Stephen Heymann, did more than throw the book at Swartz. They pretty much had to write it first, concocting an imaginative list of 13 felony indictments that carried a potential total of 50 years in federal prison.

As Knappenberger explained in a Q&A session at Sundance, that’s the correct context in which to understand Robert Swartz’s public remark that the government had killed his son. He didn’t mean that Aaron had actually been assassinated by the CIA, but rather that he was a fragile young man who had been targeted as an enemy of the state, held up as a public whipping boy, and hounded into severe psychological distress. Of course that cannot entirely explain what happened; Ortiz and Heymann, along with whoever above them in the Justice Department signed off on their display of prosecutorial energy, had no reason to expect that Swartz would kill himself. There’s more than enough pain and blame to go around, and purely on a human level it’s difficult to imagine what agony Swartz’s family and friends have put themselves through.

One of the most painful moments in “The Internet’s Own Boy” arrives when Quinn Norton, Swartz’s ex-girlfriend, struggles to explain how and why she wound up accepting immunity from prosecution in exchange for information about her former lover. Norton’s role in the sequence of events that led to Swartz hanging himself in his Brooklyn apartment 18 months ago has been much discussed by those who have followed this tragic story. I think the first thing to say is that Norton has been very forthright in talking about what happened, and clearly feels torn up about it.

Norton was a single mom living on a freelance writer’s income, who had been threatened with an indictment that could have cost her both her child and her livelihood. When prosecutors offered her an immunity deal, her lawyer insisted she should take it. For his part, Swartz’s attorney says he doesn’t think Norton told the feds anything that made Swartz’s legal predicament worse, but she herself does not agree. It was apparently Norton who told the government that Swartz had written the 2008 manifesto, which had spread far and wide in hacktivist circles. Not only did the manifesto explain why Swartz had wanted to download hundreds of thousands of copyrighted journal articles on JSTOR, it suggested what he wanted to do with them and framed it as an act of resistance to the private-property knowledge industry.

Amid her grief and guilt, Norton also expresses an even more appropriate emotion: the rage of wondering how in hell we got here. How did we wind up with a country where an activist is prosecuted like a major criminal for downloading articles from a database for noncommercial purposes, while no one goes to prison for the immense financial fraud of 2008 that bankrupted millions? As a person who has made a living as an Internet “content provider” for almost 20 years, I’m well aware that we can’t simply do away with the concept of copyright or intellectual property. I never download pirated movies, not because I care so much about the bottom line at Sony or Warner Bros., but because it just doesn’t feel right, and because you can never be sure who’s getting hurt. We’re not going to settle the debate about intellectual property rights in the digital age in a movie review, but we can say this: Aaron Swartz had chosen his targets carefully, and so did the government when it fixed its sights on him. (In fact, JSTOR suffered no financial loss, and urged the feds to drop the charges. They refused.)

A clean and straightforward work of advocacy cinema, blending archival footage and contemporary talking-head interviews, Knappenberger’s film makes clear that Swartz was always interested in the social and political consequences of technology. By the time he reached adulthood he began to see political power, in effect, as another system of control that could be hacked, subverted and turned to unintended purposes. In the late 2000s, Swartz moved rapidly through a variety of politically minded ventures, including a good-government site and several different progressive advocacy groups. He didn’t live long enough to learn about Edward Snowden or the NSA spy campaigns he exposed, but Swartz frequently spoke out against the hidden and dangerous nature of the security state, and played a key role in the 2011-12 campaign to defeat the Stop Online Piracy Act (SOPA), a far-reaching anti-piracy bill that began with wide bipartisan support and appeared certain to sail through Congress. That campaign, and the Internet-wide protest of American Censorship Day in November 2011, looks in retrospect like the digital world’s political coming of age.

Earlier that year, Swartz had been arrested by MIT campus police, after they noticed that someone had plugged a laptop into a network switch in a server closet. He was clearly violating some campus rules and likely trespassing, but as the New York Times observed at the time, the arrest and subsequent indictment seemed to defy logic: Could downloading articles that he was legally entitled to read really be considered hacking? Wasn’t this the digital equivalent of ordering 250 pancakes at an all-you-can-eat breakfast? The whole incident seemed like a momentary blip in Swartz’s blossoming career – a terms-of-service violation that might result in academic censure, or at worst a misdemeanor conviction.

Instead, for reasons that have never been clear, Ortiz and Heymann insisted on a plea deal that would have sent Swartz to prison for six months, an unusually onerous sentence for an offense with no definable victim and no financial motive. Was he specifically singled out as a political scapegoat by Eric Holder or someone else in the Justice Department? Or was he simply bulldozed by a prosecutorial bureaucracy eager to justify its own existence? We will almost certainly never know for sure, but as numerous people in “The Internet’s Own Boy” observe, the former scenario cannot be dismissed easily. Young computer geniuses who embrace the logic of private property and corporate power, who launch start-ups and seek to join the 1 percent before they’re 25, are the heroes of our culture. Those who use technology to empower the public commons and to challenge the intertwined forces of corporate greed and state corruption, however, are the enemies of progress and must be crushed.


“The Internet’s Own Boy” opens this week in Atlanta, Boston, Chicago, Cleveland, Denver, Los Angeles, Miami, New York, Toronto, Washington and Columbus, Ohio. It opens June 30 in Vancouver, Canada; July 4 in Phoenix, San Francisco and San Jose, Calif.; and July 11 in Seattle, with other cities to follow. It’s also available on-demand from Amazon, Google Play, iTunes, Vimeo, Vudu and other providers.

* * * * *

Luckily for us (especially those of us in a town too small to get a showing of this film, or who can't afford to pay per view), there is an open access version of the film available online.


The Internet’s Own Boy: New Documentary About Aaron Swartz Now Free Online

Open Culture | June 29th, 2014

On BoingBoing today, Cory Doctorow writes: “The Creative Commons-licensed version of The Internet’s Own Boy, Brian Knappenberger’s documentary about Aaron Swartz, is now available on the Internet Archive, which is especially useful for people outside of the US, who aren’t able to pay to see it online…. The Internet Archive makes the movie available to download or stream, in MPEG 4 and Ogg. There’s also a torrentable version.”

According to the film summary, the new documentary “depicts the life of American computer programmer, writer, political organizer and Internet activist Aaron Swartz. It features interviews with his family and friends as well as the internet luminaries who worked with him. The film tells his story up to his eventual suicide after a legal battle, and explores the questions of access to information and civil liberties that drove his work.”

The Internet’s Own Boy will be added to our collection, 200 Free Documentaries Online, part of our larger collection, 675 Free Movies Online: Great Classics, Indies, Noir, Westerns, etc.

Sunday, January 26, 2014

The Changing Face of Psychology

Interesting post from The Guardian (UK) on a few changes occurring in the world of academic/research psychology, most importantly a move toward more open access and more replication studies.

I do a lot of research in the worlds of cell biology, nutrition, and psychology/neuroscience. Of these three fields, psych/neuroscience is by far the least open access and open data. The publishers gouge researchers and universities for publication fees and then gouge libraries and individuals for access to the publication (with electronic versions costing as much as print versions, even though they often receive the material "camera ready" and have to do NO processing of the manuscripts). It's the only business I know of that charges producers and consumers for intellectual property it has not created. What a scam.

The changing face of psychology

After 50 years of stagnation in research practices, psychology is leading reforms that will benefit all life sciences


Posted by Chris Chambers
Friday 24 January 2014


Psychology is championing important changes to culture and practice, including a greater emphasis on transparency, reliability, and adherence to the scientific method. 
Photograph: Sebastian Kaulitzki/Alamy

In 1959, an American researcher named Ted Sterling reported something disturbing. Of 294 articles published across four major psychology journals, 286 had reported positive results – that is, a staggering 97% of published papers were underpinned by statistically significant effects. Where, he wondered, were all the negative results – the less exciting or less conclusive findings? Sterling labelled this publication bias a form of malpractice. After all, getting published in science should never depend on getting the “right results”.

You might think that Sterling’s discovery would have led the psychologists of 1959 to sit up and take notice. Groups would be assembled to combat the problem, ensuring that the scientific record reflected a balanced sum of the evidence. Journal policies would be changed, incentives realigned.

Sadly, that never happened. Thirty-six years later, in 1995, Sterling took another look at the literature and found exactly the same problem – negative results were still being censored. Fifteen years after that, Daniele Fanelli from the University of Edinburgh confirmed it yet again. Publication bias had turned out to be the ultimate bad car smell, a prime example of how irrational research practices can linger on and on.

Now, finally, the tide is turning. A growing number of psychologists – particularly the younger generation – are fed up with results that don’t replicate, journals that value story-telling over truth, and an academic culture in which researchers treat data as their personal property. Psychologists are realising that major scientific advances will require us to stamp out malpractice, face our own weaknesses, and overcome the ego-driven ideals that maintain the status quo.

Here are five key developments to watch in 2014.

1. Replication


The problem: The best evidence for a genuine discovery is showing that independent scientists can replicate it using the same method. If it replicates repeatedly then we can use it to build better theories. If it doesn't then it belongs in the trash bin of history. This simple logic underpins all science – without replication we’d still believe in phlogiston and faster-than-light neutrinos.

In psychology, close reproductions of previous methods are rarely attempted. Psychologists tend to see such work as boring, lacking in intellectual prowess, and a waste of limited resources. Some of the most prominent psychology journals even have explicit policies against publishing replications, instead offering readers a diet of fast food: results that are novel, eye-catching, and even counter-intuitive. Exciting results are fine provided they replicate. The problem is that nobody bothers to try, which litters the field with results of unknown (and likely low) value.

How it’s changing: The new generation of psychologists understands that independent replication is crucial for real advancement and to earn wider credibility in science. A beautiful example of this drive is the Many Labs project led by Brian Nosek from the University of Virginia. Nosek and a team of 50 colleagues located in 36 labs worldwide sought to replicate 13 key findings in psychology, across a sample of 6,344 participants. Ten of the effects replicated successfully.

Journals are also beginning to respect the importance of replication. The prominent outlet Perspectives on Psychological Science recently launched an initiative that specifically publishes direct replications of previous studies. Meanwhile, journals such as BMC Psychology and PLOS ONE officially disown the requirement for researchers to report novel, positive findings.

2. Open access


The problem: Strictly speaking, most psychology research isn’t really “published” – it is printed within journals that expressly deny access to the public (unless you are willing to pay for a personal subscription or spend £30+ on a single article). Some might say this is no different to traditional book publishing, so what's the problem? But remember that the public being denied access to science is the very same public that already funds most psychology research, including the subscription fees for universities. So why, you might ask, is taxpayer-funded research invisible to the taxpayers that funded it? The answer is complicated enough to fill a 140-page government report, but the short version is that the government places the business interests of corporate publishers ahead of the public interest in accessing science.

How it’s changing: The open access movement is growing in size and influence. Since April 2013, all research funded by UK research councils, including psychology, must now be fully open access – freely viewable to the public. Charities such as the Wellcome Trust have similar policies. These moves help alleviate the symptoms of closed access but don’t address the root cause, which is market dominance by traditional subscription publishers. Rather than requiring journals to make articles publicly available, the research councils and charities are merely subsidising those publishers, in some cases paying them extra for open access on top of their existing subscription fees. What other business in society is paid twice for a product that it didn’t produce in the first place? It remains a mystery who, other than the publishers themselves, would call this bizarre set of circumstances a “solution”.

3. Open science


The problem: Data sharing is crucial for science but rare in psychology. Even though ethical guidelines require authors to share data when requested, such requests are usually ignored or denied, even when coming from other psychologists. Failing to publicly share data makes it harder to do meta-analysis and easier for unscrupulous researchers to get away with fraud. The most serious fraud cases, such as Diederik Stapel, would have been caught years earlier if journals required the raw data to be published alongside research articles.

How it’s changing: Data sharing isn’t yet mandatory, but it is gradually becoming unacceptable for psychologists not to share. Evidence shows that studies which share data tend to be more accurate and less likely to make statistical errors. Public repositories such as Figshare and the Open Science Framework now make the act of sharing easy, and new journals including the Journal of Open Psychology Data have been launched specifically to provide authors with a way of publicising data sharing.

Some existing journals are also introducing rewards to encourage data sharing. Since 2014, authors who share data at the journal Psychological Science will earn an Open Data badge, printed at the top of the article. Coordinated data sharing carries all kinds of other benefits too – for instance, it allows future researchers to run meta-analysis on huge volumes of existing data, answering questions that simply can’t be tackled with smaller datasets.

4. Bigger data


The problem: We’ve known for decades that psychology research is statistically underpowered. What this means is that even when genuine phenomena exist, most experiments don’t have sufficiently large samples to detect them. The curse of low power cuts both ways: not only is an underpowered experiment likely to miss finding water in the desert, it’s also more likely to lead us to a mirage.
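The double-edged nature of low power can be made concrete with a small simulation. This is my own illustrative sketch, not taken from the article: the effect size (d = 0.5), per-group sample sizes and simulation counts are assumptions chosen for demonstration. A real effect is present in every simulated experiment, yet small samples detect it only a minority of the time.

```python
import random
import statistics

random.seed(42)

def t_stat(a, b):
    """Welch-style t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def detection_rate(n, effect=0.5, sims=2000, crit=1.96):
    """Fraction of simulated experiments (n per group) whose |t| exceeds
    the approximate two-sided 5% critical value. A genuine effect of
    size d = 0.5 (conventionally 'medium') exists in every experiment."""
    hits = 0
    for _ in range(sims):
        treated = [random.gauss(effect, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        if abs(t_stat(treated, control)) > crit:
            hits += 1
    return hits / sims

# Small samples miss a real effect most of the time; larger samples find it.
for n in (10, 30, 100):
    print(f"n = {n:3d} per group -> detected in {detection_rate(n):.0%} of experiments")
```

With 10 participants per group the true effect is missed far more often than it is found; only at much larger samples does detection become reliable, which is exactly the "water in the desert" problem described above.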

How it’s changing: Psychologists are beginning to develop innovative ways to acquire larger samples. An exciting approach is Internet testing, which enables easy data collection from thousands of participants. One recent study managed to replicate 10 major effects in psychology using Amazon’s Mechanical Turk. Psychologists are also starting to work alongside organisations that already collect large amounts of useful data (and no, I don’t mean GCHQ). A great example is collaborative research with online gaming companies. Tom Stafford from the University of Sheffield recently published an extraordinary study of learning patterns in over 850,000 people by working with a game developer.

5. Limiting researcher “degrees of freedom”


The problem: In psychology, discoveries tend to be statistical. This means that to test a particular hypothesis, say, about motor actions, we might measure the difference in reaction times or response accuracy between two experimental conditions. Because the measurements contain noise (or “unexplained variability”), we rely on statistical tests to provide us with a level of certainty in the outcome. This is different to other sciences where discoveries are more black and white, like finding a new rock layer or observing a supernova.

Whenever experiments rely on inferences from statistics, researchers can exploit “degrees of freedom” in the analyses to produce desirable outcomes. This might involve trying different ways of removing statistical outliers, or trying different statistical models, and then reporting only the approach that “worked” best in producing attractive results. Just as buying all the tickets in a raffle guarantees a win, exploiting researcher degrees of freedom can guarantee a false discovery.

The reason we fall into this trap is because of incentives and human nature. As Sterling showed in 1959, psychology journals select which studies to publish not based on the methods but on the results: getting published in the most prominent, career-making journals requires researchers to obtain novel, positive, statistically significant effects. And because statistical significance is an arbitrary threshold (p<.05), researchers have every incentive to tweak their analyses until the results cross the line. These behaviours are common in psychology – a recent survey led by Leslie John from Harvard University estimated that at least 60% of psychologists selectively report analyses that “work”. In many cases such behaviour may even be unconscious.
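The raffle analogy can be simulated directly. In this sketch (my own illustration, not from the article; the three outlier-removal rules are hypothetical examples of analytic "degrees of freedom"), both groups are drawn from the same distribution, so any significant result is a false positive. An analyst who tries several rules and reports whichever crosses p < .05 exceeds the nominal 5% false-positive rate.

```python
import random
import statistics

random.seed(1)

def t_stat(a, b):
    """Welch-style t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def significant(a, b, crit=1.96):
    # 1.96 approximates the two-sided 5% critical value for large-ish samples
    return abs(t_stat(a, b)) > crit

def flexible_analysis(n=30, sims=2000):
    """Both groups come from the SAME distribution (no real effect), so
    every 'significant' result is false. The flexible analyst tries three
    outlier-handling rules and claims success if ANY of them works."""
    strict_hits = flexible_hits = 0
    for _ in range(sims):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        sa = sorted(a, key=abs)  # sorted by distance from zero (the true mean)
        sb = sorted(b, key=abs)
        tests = [
            significant(a, b),                  # rule 1: analyse all the data
            significant(sa[:-1], sb[:-1]),      # rule 2: drop 1 extreme value per group
            significant(sa[:-2], sb[:-2]),      # rule 3: drop 2 extreme values per group
        ]
        strict_hits += tests[0]
        flexible_hits += any(tests)
    return strict_hits / sims, flexible_hits / sims

strict, flexible = flexible_analysis()
print(f"single pre-planned test: {strict:.3f}; report-whatever-works: {flexible:.3f}")
```

Even with only three correlated analysis options the "pick the best" strategy produces more false discoveries than the honest single test; with the dozens of choices available in a real analysis pipeline, the inflation is far worse.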

How it’s changing: The best cure for researcher degrees of freedom is to pre-register the predictions and planned analyses of experiments before looking at the data. This approach is standard practice in medicine because it helps prevent the desires of the researcher from influencing the outcome. Among the basic life sciences, psychology is now leading the way in advancing pre-registration. The journals Cortex, Attention Perception & Psychophysics, AIMS Neuroscience and Experimental Psychology offer pre-registered articles in which peer review happens before experiments are conducted. Not only does pre-registration put the reins on researcher degrees of freedom, it also prevents journals from selecting which papers to publish based on the results.

Journals aren’t the only organisations embracing pre-registration. The Open Science Framework invites psychologists to publish their protocols, and the 2013 Declaration of Helsinki now requires public pre-registration of all human research “before recruitment of the first subject”.

We’ll continue to cover these developments at HQ as they progress throughout 2014.

Friday, April 05, 2013

Michel Bauwens - Proposed Next Steps for the Emerging P2P and Commons Networks


From Michel Bauwens' P2P Blog, here are his proposals for "Next Steps" in the emerging P2P and Commons networks.

Proposed Next Steps for the emerging P2P and Commons networks




Michel Bauwens
2nd April 2013
In short, we need an alliance of the commons to project civil and political power and influence at every level of society; we need phyles to strengthen our economic autonomy from the profit-maximizing dominant system; and we need a Chamber of the Commons to achieve the territorial policy, legal and infrastructural conditions for the alternative, human- and nature-friendly political economy to thrive. None of these alone is sufficient, but together they could form a powerful triad for the necessary phase transition.
Michel Bauwens:

The recent success of a global mobilization (500+ participants and collectives in 23 countries and over 50 cities) to collaboratively map P2P-driven, commons-oriented, collaboration/sharing-based initiatives in Hispanic countries has shown a grassroots hunger for more mutual coordination to enhance the capacity to initiate social change. I would like to add the hypothesis that what is in the making is not just a new social imaginary, but also a potential new political subject. To build and obtain more civic infrastructures that enable and empower autonomous social production, I believe we must move to mutualize our forces and create a new set of political, social and economic institutions which can have ‘transitional’ effects, i.e. prepare the ground for a phase transition to a political economy and civilization in which socially and environmentally friendly free association between autonomous producers and citizens becomes the norm.

I believe the time has come to start constructing the following three institutional coalitions:

* The civic/political institution: The Alliance of the Commons

An alliance of the commons is an alliance, meeting place and network of P2P/commons-oriented networks, associations and places that do not have economic rationales. These alliances can be topical, local, transnational, etc. An example is the initiative Paris Communs Urbains, which is attempting to create a common platform for urban commons initiatives in the Paris region; another Parisian/French example is the free-culture network Libre Savoirs, which is developing a set of policy proposals around digital rights. (Both examples were communicated to me by Lionel Maurel.)

An alliance of the commons is a meeting place and platform to formulate policy proposals that enhance civic infrastructures for the commons.

* The economic institution: the P2P/Commons Global-local « Phyle »

A phyle (as originally proposed by lasindias.net) is a coalition of commons-oriented, community-supportive ethical enterprises which trade and exchange in the market to create livelihoods for commoners and peer producers engaged in social production. The use of a peer production licence keeps the created exchange value within the sphere of the commons and strengthens the existence of a more autonomous counter-economy, one which refuses the destructive logic of profit-maximisation and instead works to increase benefits for its own members, but also for the emerging global commons. Phyles create integrated economies around the commons that render them more autonomous and ensure the social reproduction of their members. Hyperproductive global phyles that generate well-being for their members will gradually create a counterpower to the hitherto dominant MNOs.

* The political-economy institution: The Chamber of the Commons

In analogy with the well-known chambers of commerce, which work on the infrastructure for for-profit enterprise, the Chamber of the Commons coordinates exclusively for the needs of the emergent coalitions of commons-friendly ethical enterprises (the phyles), but with a territorial focus. Its aim is to uncover the convergent needs of the new commons enterprises and to interface with territorial powers to express and obtain their infrastructural, policy and legal needs.

In short, we need an alliance of the commons to project civil and political power and influence at every level of society; we need phyles to strengthen our economic autonomy from the profit-maximizing dominant system; and we need a Chamber of the Commons to achieve the territorial policy, legal, and infrastructural conditions for the alternative, human- and nature-friendly political economy to thrive. Neither alone is sufficient, but together they could be a powerful triad for the necessary phase transition.

Wednesday, April 03, 2013

The National Digital Public Library Is Launched! (April 18)

This is AWESOME! I can't wait to see what will be available. This article about the new National Digital Public Library comes from Robert Darnton at the New York Review of Books.

The National Digital Public Library Is Launched!

APRIL 25, 2013
Robert Darnton

A detail from the preliminary model for the home page of the Digital Public Library of America’s website, to be available at http://dp.la/

The Digital Public Library of America, to be launched on April 18, is a project to make the holdings of America’s research libraries, archives, and museums available to all Americans—and eventually to everyone in the world—online and free of charge. How is that possible? In order to answer that question, I would like to describe the first steps and immediate future of the DPLA. But before going into detail, I think it important to stand back and take a broad view of how such an ambitious undertaking fits into the development of what we commonly call an information society.

Speaking broadly, the DPLA represents the confluence of two currents that have shaped American civilization: utopianism and pragmatism. The utopian tendency marked the Republic at its birth, for the United States was produced by a revolution, and revolutions release utopian energy—that is, the conviction that the way things are is not the way they have to be. When things fall apart, violently and by collective action, they create the possibility of putting them back together in a new manner, according to higher principles.

The American revolutionaries drew their inspiration from the Enlightenment—and from other sources, too, including unorthodox varieties of religious experience and bloody-minded convictions about their birthright as free-born Englishmen. Take these ingredients, mix well, and you get the Declaration of Independence and the Bill of Rights—radical assertions of principle that would never make it through Congress today.

Yet the revolutionaries were practical men who had a job to do. When the Articles of Confederation proved inadequate to get it done, they set out to build a more perfect union and began again with a Constitution designed to empower an effective state while at the same time keeping it in check. Checks and balances, the Federalist Papers, sharp elbows in a scramble for wealth and power, never mind about slavery and slave wages. The founders were tough and tough-minded.

How do these two tendencies converge in the Digital Public Library of America? For all its futuristic technology, the DPLA harkens back to the eighteenth century. What could be more utopian than a project to make the cultural heritage of humanity available to all humans? What could be more pragmatic than the designing of a system to link up millions of megabytes and deliver them to readers in the form of easily accessible texts?

Above all, the DPLA expresses an Enlightenment faith in the power of communication. Jefferson and Franklin—the champion of the Library of Congress and the printer turned philosopher-statesman—shared a profound belief that the health of the Republic depended on the free flow of ideas. They knew that the diffusion of ideas depended on the printing press. Yet the technology of printing had hardly changed since the time of Gutenberg, and it was not powerful enough to spread the word throughout a society with a low rate of literacy and a high degree of poverty.

Thanks to the Internet and a pervasive if imperfect system of education, we now can realize the dream of Jefferson and Franklin. We have the technological and economic resources to make all the collections of all our libraries accessible to all our fellow citizens—and to everyone everywhere with access to the World Wide Web. That is the mission of the DPLA.

Put so boldly, it sounds too grand. We can easily get carried away by utopian rhetoric about the library of libraries, the mother of all libraries, the modern Library of Alexandria. To build the DPLA, we must tap the can-do, hands-on, workaday pragmatism of the American tradition. Here I will describe what the DPLA is, what it will offer to the American public at the time of its launch, and what it will become in the near future.

How to think of it? Not as a great edifice topped with a dome and standing on a gigantic database. The DPLA will be a distributed system of electronic content that will make the holdings of public and research libraries, archives, museums, and historical societies available, effortlessly and free of charge, to readers located at every connecting point of the Web. To make it work, we must think big and begin small. At first, the DPLA’s offering will be limited to a rich variety of collections—books, manuscripts, and works of art—that have already been digitized in cultural institutions throughout the country. Around this core it will grow, gradually accumulating material of all kinds until it will function as a national digital library.

The trajectory of its development can be understood from the history of its origin—and it does have a history, although it is not yet three years old. It germinated from a conference held at Harvard on October 1, 2010, a small affair involving forty persons, most of them heads of foundations and libraries. In a letter of invitation, I included a one-page memo about the basic idea: “to make the bulk of world literature available to all citizens free of charge” by creating “a grand coalition of foundations and research libraries.” In retrospect, that sounds suspiciously utopian, but everyone at the meeting agreed that the job was worth doing and that we could get it done.

We also agreed on a short description of it, which by now has become a mission statement. The DPLA, we resolved, would be “an open, distributed network of comprehensive online resources that would draw on the nation’s living heritage from libraries, universities, archives, and museums in order to educate, inform, and empower everyone in the current and future generations.”

Sounds good, you might say, but wasn’t Google already providing this service? True, Google set out bravely to digitize all the books in the world, and it managed to create a gigantic database, which at last count includes 30 million volumes. But along the way it collided with copyright laws and a hostile suit by copyright holders. Google tried to win over the litigants by inviting them to become partners in an even larger project. They agreed on a settlement, which transformed Google’s original enterprise, a search service that would display only short snippets of the books, into a commercial library. By purchasing subscriptions, research libraries would gain access to Google’s database—that is, to digitized copies of the books that they had already provided to Google free of charge and that they now could make available to their readers at a price to be set by Google and its new partners. To some of us, Google Book Search looked like a new monopoly of access to knowledge. To the Southern Federal District Court of New York, it was riddled with so many unacceptable provisions that it could not stand up in law.

After the court’s decision on March 23, 2011, to reject the settlement,* Google’s digital library was effectively dead, although Google can continue to use its database for other purposes, such as agreements with publishers to provide digital copies of their books to customers. The DPLA was not designed to replace Google Book Search; in fact, the designing had begun long before the court’s decision. But the DPLA took inspiration from Google’s bold attempt to digitize entire libraries, and it still hopes to win Google over as an ally in working for the public good. Nonetheless, you might raise another objection: Who authorized this self-appointed group to undertake such an enterprise in the first place?

Answer: no one. We believed that it required private initiative and that it would never get off the ground if we waited for the government to act. Therefore, we appointed a steering committee, a secretariat located in the Berkman Center at Harvard, and six groups scattered around the country, which began to study and debate key issues: governance, finance, technological infrastructure, copyright, the scope and content of the collections, and the audience to be envisioned.

The groups grew and developed a momentum of their own, drawing on voluntary labor; crowdsourcing (the practice of appealing for contributions to an undefined group, usually an online community, as in the case of Wikipedia); and discussion through websites, listservs, open meetings, and highly focused workshops. Hundreds of people became actively involved, and thousands more participated through an endless, noisy debate conducted on the Internet. Plenary meetings in Washington, D.C., San Francisco, and Chicago drew large crowds and a much larger virtual audience, thanks to texting, tweeting, streaming, and other electronic connections. There gradually emerged a sense of community, twenty-first-century style—open, inchoate, virtual, yet real, because held together as a body by an electronic nervous system built into the Web.

This virtual and real discussion took place while groups got down to work. Forty volunteers submitted “betas”—prototypes of the software that the DPLA might use, which were then to be subjected to “beta testing,” a user-based form of review. After several rounds of testing and reworking, a platform was developed that will provide links to content from library collections throughout the country and that will aggregate their metadata—i.e., catalog-type information that identifies digital files and describes their content. The metadata will be aggregated in a repository located in what the designers call the “back end” of the platform, while an application programming interface (API) in the “front end” will make it possible for all kinds of software to transmit content in diverse ways to individual users.

The user-friendly interface will therefore enable any reader—say, a high school student in the Bronx—to consult works that used to be stored on inaccessible shelves or locked up in treasure rooms—say, pamphlets in the Huntington Library of Los Angeles about nullification and secession in the antebellum South. Readers will simply consult the DPLA through its URL, http://dp.la/. They will then be able to search records by entering a title or the name of an author, and they will be connected through the DPLA’s site to the book or other digital object at its home institution. The illustration on page 4 shows what will appear on the user’s screen, although it is just a trial mock-up.

Meanwhile, several of the country’s greatest libraries and museums—among them Harvard, the New York Public Library, and the Smithsonian—are prepared to make a selection of their collections available to the public through the DPLA. Those works will be accessible to everyone online at the launch on April 18, but they are only the beginning of aggregated offerings that will grow organically as far as the budget and copyright laws permit.

Of course, growth must be sustainable. But the greatest foundations in the country have expressed sympathy for the project. Several of them—the Sloan, Arcadia, Knight, and Soros foundations in addition to the National Endowment for the Humanities and the Institute of Museum and Library Services—have financed the first three years of the DPLA’s existence. If a dozen foundations combined forces, allotting a set amount from each to an annual budget, they could create the digital equivalent of the Library of Congress within a decade. And the sponsors naturally hope that the Library of Congress also will participate in the DPLA.

The main impediment to the DPLA’s growth is legal, not financial. Copyright laws could exclude everything published after 1964, most works published after 1923, and some that go back as far as 1873. Court cases during the last few months have opened up the possibility that the fair use provision of the copyright act of 1976 could be extended to make more recent books available for certain purposes, such as service to the visually impaired and some forms of teaching. And if, as expected, the DPLA excludes books that are still selling on the market (most exhaust their commercial viability within a few years), authors and publishers might grant the exercise of their rights to the DPLA.

In any case, we cannot wait for courts to untangle legalities before creating an effective administration. The informal secretariat at Harvard is being replaced by a nonprofit corporation organized according to the 501(c)(3) provisions of the tax code. The steering committee has been succeeded by a board of directors. And the six groups will evolve into a committee system with carefully defined functions, such as outreach to public libraries and community colleges. The choice of an executive director, Daniel Cohen, a superb historian and Internet expert from George Mason University, was announced on March 5; the first staff members have already been hired; and administrative headquarters are being set up in Boston.

Those first steps will not lead to the creation of a top-heavy bureaucracy. On the contrary, the “distributed” character of the DPLA means that its operations will be spread across the country. Its growing collection of metadata (Harvard has already made available 12 million openly accessible metadata records) will be stored in computer clouds, and its activities will be funneled through two kinds of “hubs.”

The DPLA’s “content hubs” are large repositories of digital material, usually held in physical locations like the Internet Archive in San Francisco. They will make their data accessible to users directly through the DPLA without passing through any intermediate aggregators. “Service hubs”—centers for collecting material—will aggregate data and provide various services at the state or regional level. The DPLA cannot deal directly with all the libraries, archives, and museums in the United States, because that would require its central administration to become involved in developing hundreds of thousands of interfaces and links. But development among local institutions is now being coordinated at the state level, and the DPLA will work with the states to create an integrated system for the entire country.

Forty states have digital libraries, and the DPLA’s service hubs—seven are already being developed in different parts of the country—will contribute the data those digital libraries have already collected to the national network. Among other activities, these service hubs will help local libraries and historical societies to scan, curate, and preserve local materials—Civil War mementos, high school yearbooks, family correspondence, anything that they have in their collections or that their constituents want to fetch from trunks and attics. As it develops, digital empowerment at the grassroots level will reinforce the building of an integrated collection at the national level, and the national collection will be linked with those of other countries.

The DPLA has designed its infrastructure to be interoperable with that of Europeana, a super aggregator sponsored by the European Union, which coordinates linkages among the collections of twenty-seven European countries. Within a generation, there should be a worldwide network that will bring nearly all the holdings of all libraries and museums within the range of nearly everyone on the globe. To provide a glimpse into this future, Europeana and the DPLA have produced a joint digital exhibition about immigration from Europe to the US, which will be accessible online at the time of the April 18 launch.

Of course, expansion, at the local or global level, depends on the ability of libraries and other institutions to develop their own digital databases—a long-term, uneven process that requires infusions of money and energy. As it takes place, great stockpiles of digital riches will grow up in locations scattered across the map. Many already exist, because the largest research libraries have already digitized enormous sections of their collections, and they will become content hubs in themselves.

For example, in serving as a hub, Harvard plans to make available to the DPLA by the time of its launch 243 medieval manuscripts; 5,741 rare Latin American pamphlets; 3,628 daguerreotypes, along with the first photographs of the moon and of African-born slaves; 502 chapbooks and “penny dreadfuls” about sensational crimes, a popular genre of literature in the eighteenth and nineteenth centuries; and 420 trial narratives from cases involving marriage and sexuality. Harvard expects to provide a great deal more in the following months, notably in fields such as music, cartography, zoology, and colonial history. Other libraries, archives, and museums will contribute still more material from their collections. The total number of items available in all formats on April 18 will be between two and three million.

How will such material be put to use? I would like to end with a final example. About 14 million students are struggling to get an education in community colleges—at least as many as those enrolled in all the country’s four-year colleges and universities. But many of them—and many more students in high schools—do not have access to a decent library. The DPLA can provide them with a spectacular digital collection, and it can tailor its offering to their needs. Many primers and reference works on subjects such as mathematics and agronomy are still valuable, even though their copyrights have expired. With expert editing, they could be adapted to introductory courses and combined in a reference library for beginners.

At one time or other, nearly every student comes in contact with a poem by Emily Dickinson, who probably qualifies as America’s favorite poet. But Dickinson’s poems are especially problematic. Only a few of them, horribly mangled, were published in her lifetime. Nearly all the manuscript copies are stored in Harvard’s Houghton Library, and they pose important puzzles, because they contain quirky punctuation, capitalization, spacing, and other touches that have profound implications for their meaning. Harvard has digitized the originals, combined them with the most important printed editions (one edited by Thomas H. Johnson in 1955 and one edited by Ralph W. Franklin in 1981), and added supplementary documentation in an Emily Dickinson Archive, which it will make available through its own website and the DPLA.

The online archive will enrich the experience of students at every level of the educational system. Teachers will be able to make selections from it and adjust them to the needs of their classes. By paying close attention to different versions of a poem, the students will begin to appreciate the way poetry works. They will sharpen their sensitivity to language in general, and the lessons they learn will help them gain possession of their cultural heritage. It may be a small step, but it will be a pragmatic advance into the world of knowledge, which Jefferson, in a utopian vein, described as “the common property of mankind.”


* See Robert Darnton, “Google’s Loss: The Public’s Gain,” The New York Review, April 28, 2011.

Monday, March 04, 2013

Why I Support Open Access Publishing - How Corporations Score Big Profits By Limiting Access To Publicly Funded Academic Research


Recently I was doing some research through Google Scholar and found a couple of articles relevant to what I was searching for, only to discover that they were behind a paywall. Generally, the hard sciences are more easily available to the public (i.e., open access) while the social sciences are not.


Anyway, here is the fee for one article (from SciVerse: Science Direct [Elsevier]):
Brain microglia constitutively express β-2 integrins
Journal of Neuroimmunology, Volume 30, Issue 1, November 1990, Pages 81-93
H. Akiyama, P.L. McGeer
If you do not have a Username and Password, click the "Register to Purchase" button below to purchase this article.
Price: US $ 31.50
Here is the required fee for another article (ingentaconnect):
Endogenous Regulators of Adult CNS Neurogenesis
Theo Hagg,
Current Pharmaceutical Design, Volume 13, Number 18, June 2007 , pp. 1829-1840(12)

Buy and download full text article:
Price: $63.10 plus tax
To get these two articles would cost me nearly $100. In many cases, the article fee is for 24 hours only, so you are essentially renting the article for $30 to $60. This is only the consumer side of things. Authors pay to get their manuscripts published, and they often pay more--a lot more--if they want to allow open access to their work. The institutions where the authors are doing their research (universities, most times) then have to pay enormous subscription fees to have the journals in their libraries, and even if they opt for digital only, the costs continue to increase.

Here is an in-depth look at this issue from Think Progress.

How Corporations Score Big Profits By Limiting Access To Publicly Funded Academic Research

By Andrea Peterson on Mar 3, 2013


"Red and blue liquids inside graduated test tubes" by Horia Varlan 
used under a Creative Commons Attribution 2.0 license

Here’s how the academic publishing industry works: Academics do research (frequently supported by public funds) and submit that research to journals, often paying “$600-$2,000 to either the publisher or the academic society that owns the journal” for the privilege of publication. Then journals send the research back out to other academics to be reviewed (typically pro-bono–a 2008 study estimated the worldwide worth of unpaid peer review was £1.9 billion a year), and the (often for-profit) journal publishers sell access to the published research, mostly to the academic institutions who do the majority of basic research.

The system is big business: the largest of the for-profit academic publishers, Elsevier, reportedly earned over $1 billion in profits in 2011, with a profit margin around 35 percent and 71 percent of its revenue coming from academic customers like university libraries.

But the rapid inflation of journal subscription prices–the per subscription cost rose by 215% between 1986 and 2003–has left many of those universities struggling to keep up. In a statement last spring, the Harvard Faculty Council called rising costs to maintain access to scholarly works “untenable” and the University of California San Francisco Library spends 85 percent of their collection budget on journal subscriptions, but “[d]espite cancelling the print component of more than 100 journal subscriptions in 2012 to keep up with a budget reduction, [their] costs still increased by 3 percent.”

This major disconnect between who funds and produces this research and who controls the final product has led to a flourishing Open Access movement with broad support among private and public academic institutions, focused on using technological innovations to democratize access to scholarly research and correct what they see as imbalances in the current system through reform on local and national levels. One such national reform they welcomed was the White House Office of Science and Technology Policy memorandum outlining a plan to open up public access to some federally funded research.

ThinkProgress’ coverage of that announcement drew criticism from an executive at Elsevier:



When reached for comment, Elsevier head of Corporate Relations Tom Reller agreed with her comment and confirmed Smith is VP for Global Internal Communications for Reed Elsevier subsidiary Elsevier, but referred questions about the company’s support of Open Access movement to its website and a recent statement of support for the White House’s proposal. Elsevier’s website says the company “will continue to identify access gaps, and work towards ensuring that everyone has access to quality scientific content anytime, anywhere.”

But their parent company’s lobbying disclosures in 2012 and members of the Open Access community suggest a very different position. When asked over email if they have seen Elsevier and many of the for-profit academic publishers actively cooperate with the Open Access movement on advancing public access to federally funded research, Heather Joseph, the Executive Director of the Scholarly Publishing & Academic Resources Coalition (SPARC), balked at the suggestion:
Quite the opposite. SPARC and the Open Access community spent the first eight weeks of 2012 fighting The Research Works Act (H.R. 3699) — a bill introduced into the House of Representatives with the sole aim of overturning the highly successful NIH Public Access Policy, and prohibiting other Federal Agencies from enacting similar policies. Elsevier and the American Association of Publishers were two of only three organizations who publicly endorsed the bill. 
If this was the first time they took this tactic, I might be tempted to cut them some slack. But it was a repeat performance; in 2008, they tried the same thing with “The Fair Copyright in Research Works Act (H.R. 801)” — a bill that tried to amend U.S. copyright code to make the NIH Policy — and policies like it — illegal.
According to the U.S. Senate Lobbying Database, Elsevier’s parent company Reed Elsevier spent $1,420,000 lobbying the U.S. government in 2012. Reed Elsevier’s in-house lobbying team disclosures and those from the Podesta Group listing Reed Elsevier as a client corroborate Wilson’s comments about their support for The Research Works Act — only withdrawing support after a boycott from academic communities, according to news reports. That boycott continues today, and has attracted over 13,000 scholars and academics who object to Elsevier’s business practices.

Reed Elsevier lobbied OSTP on “[c]opyright issues related to scientific, technology, and medical publications” during the run up to the White House’s Open Access announcement and their in-house lobbying team reported working on “[i]ssues related to science, technical, medical and scholarly publications” and on “all provisions” of the Federal Research Public Access Act (FRPAA)–a proposal similar to the recently introduced Fair Access to Science and Technology Research Act (FASTR) that would have required federal agencies with annual extramural research budgets of $100 million or more to provide the public with online access to research manuscripts stemming from funded research no later than six months after publication in a peer-reviewed journal.

Elsevier was one of 81 publishers to sign an Association of American Publishers (AAP) letter opposing FRPAA, with AAP President and CEO Tom Allen calling it “little more than an attempt at intellectual eminent domain, but without fair compensation to authors and publishers.” Remember, these publishers claiming to be concerned about “fair compensation to authors” are the same ones often charging them publication fees.

As Reller noted to ThinkProgress, the sum total of Reed Elsevier’s 2012 lobbying expenditures represents all lobbying done in support of their business ventures, and their disclosures list a number of bills unrelated to Open Access. Companies are not required to disclose what proportion of their total lobbying is spent on which topics. We do know that Elsevier, the corporate subsidiary involved with academic publishing, accounted for over 47 percent of Reed Elsevier’s adjusted operating profits in 2011.

While AAP released a statement in support of the White House’s Open Access memorandum, their comments praised how the plan only included guidelines for releasing research, not mandates, saying the policy’s success is dependent on “how the agencies use their flexibility to avoid negative impacts” on the current system and calling it fair “[i]n stark contrast to angry rhetoric and unreasonable legislation offered by some” — a reference to the Open Access movement. Elsevier’s similar response to the plan praised it for promoting “gold open access funded through publishing charges and flexible embargo periods for green open access” and dismissed Open Access legislative proposals, saying they would like “open-access advocates [to] withdraw their support from unnecessary and divisive open access legislation now introduced in the US at federal level.”

There’s ample room to credit the academic publishing industry’s history of serving as the shepherds of scholarly research — but technology has dramatically changed researchers’ ability to share knowledge without intermediaries. There is an ideological debate at hand, and it’s about whether the public is better served by expanding access to the research they fund or by protecting the interests of companies who have a substantial financial stake in limiting that access.

Wednesday, January 23, 2013

Jonathan Rowson - Who owns information: The defining battle of our time?


I am a HUGE proponent of open access/Creative Commons publishing, especially for news and research (in all fields). If we are ever to have a true cultural commons, then we should have access to the information for which (many times) our tax dollars pay.

As the quote from Aaron Swartz (below) makes clear, knowledge is power, but it remains power only so long as access is tightly controlled and others are kept from sharing that same knowledge; open access neutralizes that power.

This excellent article comes from Jonathan Rowson at The RSA.

Who owns information: The defining battle of our time?
January 22, 2013 by Jonathan Rowson
If you have an apple and I have an apple and we exchange these apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas. - George Bernard Shaw
Could our major problems have a discernible ‘form’ that is somehow more fundamental than their content? If there is some sort of pattern, wouldn’t it make sense to target the pattern as a whole, rather than individual issues piecemeal? Marxists might say that Capitalism as such is the underlying problem, but I don’t think we have to endorse that view to look for what Bateson once called “the pattern that connects“.

We will shortly be publishing a report examining Iain McGilchrist’s work that argues there is a discernible pattern relating to the distinctive phenomenologies of the two brain hemispheres. The claim is that many of our major problems relate to the fact that the ‘inferior’ (though definitely important) left hemisphere is slowly usurping the (wiser but more tentative) right hemisphere at a cultural level, with the consequence that we live increasingly virtual and instrumental lives, and may not even realise what we are losing. The details of that discussion are coming soon to a screen near you, but there are other ways to conceive the form of the problem.

Who controls information?
“Information is power. But like all power, there are those who want to keep it for themselves.” – Aaron Swartz.
When you start to think deeply about our major challenges – including climate change – you quickly run into various vested interests that get in the way of solutions, and many such vested interests are preserved through unequal access to information – academic, technological, legal, environmental, political, financial and so forth. Information should be a public good, and it benefits larger numbers when it is shared, but perhaps the main way that vested interests perpetuate their power is through the control and protection of information. For instance, what do Shell tell us about their research into drilling in the Arctic, and how can we know it represents full disclosure? What if a doctor prescribes you medicine and you can’t access the relevant primary research because you run into a paywall? What if the most promising components needed for a technological breakthrough on clean energy are patented by a small group, and therefore thousands of scientists can’t follow that path of inquiry?

Such control of information is deeply related to financial dependency. Those who control information are supported in their control by law and lawyers. An excerpt from a talk by Harvard academic and activist Lawrence Lessig captures the centrality of this point:

“(American) politics is filled with easy cases that we get wrong. The scientific consensus on global warming is overwhelming, but we abandon the Kyoto Protocol. Nutritionists are clear that sugar is unhealthy, but the sugar lobby gets it into dietary recommendations. Retroactive copyright extensions do nothing for society, but Congress passes them over and over.

Similar errors are made in other fields that have the public trust. Studies of new drugs are biased towards the drug companies. Law professors and other scholars write papers biased towards the clients they consult for.

Why? Because the trusted people in each case are acting as dependants. The politicians are dependent on fundraising money. They are good people, but they need to spend a quarter of their time making fundraising calls. So most of the people they speak to are lobbyists, and they never even hear from the other side. If they were freed from this dependence they would gladly do the right thing.

The scientists get paid to sign on to studies done by the drug companies. The law professors get paid to consult.

How do we solve it? We need to free people from dependency. But this is too hard. We should fight for it, but politicians will never endorse a system of public funding of campaigns when they have so much invested in the current system. Instead, we need norms of independence. People need to start saying that independence is important to them and that they won’t support respected figures who act as dependants. And we can use the Internet to figure out who’s acting as dependants.”

At the risk of simplification, the underlying problem is that the inequality in power is perpetuated by the unequal access to information, and this is a self-perpetuating problem because those with power based on information use it to create dependants, and these dependants thereby develop a vested interest in protecting the information that forms their livelihood.

Why did nobody tell me about Aaron Swartz?

I started to think about this when I realised, sadly, that I never knew the pioneering cyber activist Aaron Swartz while he was alive. He recently ended his own life at the age of 26 under enormous legal and political pressure, but is viewed by many as a hero of our times who was driven over the edge by an excessively zealous witch hunt. He was known for being prodigious and hyper-intelligent, but is perhaps best known and admired for the way he swiftly conjured enormous political capital to prevent the SOPA (Stop Online Piracy Act) law in the US, which he speaks about so clearly and compellingly here (highly recommended viewing). In essence he prevented the passing of a law that would have radically undermined people’s capacity to connect and share information online, and the way he did so is inspiring, because it looked like he was facing impossible odds.


A friend and former RSA colleague, Jamie Young, remarked that if I was going to write about Aaron Swartz, I should also mention the UK’s Chris Lightfoot, who was a similar character fighting a similar kind of battle – a broadly political fight about who rightfully controls information – and who also took his life at a young age. The RSA has raised similar questions before, for instance by hosting Evgeny Morozov, whose talk on why dictators love the internet was turned into an RSAnimate.

What follows?

What all these thinkers share is a belief that access to information has much wider implications than people typically realise. As Professor Shamnad Basheer puts it on the Spicy IP blog, we live in “a world where the powers that be conspire time and again to reassert hegemony and re-establish control in a digital world whose essential DNA is one of openness and sharing.”

The main take-home point for me lies in the gap between the social norms of sharing and openness online and the economic and legal norms of property rights and power that were formed before the digital age. In Aaron Swartz’s case, this battle unfolded in his heart and mind to a tragic extent, but the more I think about it, the more it seems like an enormously important battle between the public good and private ownership that will be defined largely by the political will of the relevant institutions – which in turn is shaped by us (that’s what Lessig was getting at above about the need to shape social norms).

It may not make sense to ‘take sides’ as such, and there are certainly ways to protect intellectual property that are more canny and proportionate. (As an author of three books, all of which have been PDFed and sold cheaply by Xerox merchants online, I am also a kind of ‘dependant’ with a vested interest here).

Whatever you think, I would ask you to reflect on the opening quotation by George Bernard Shaw. Ideas need each other to flourish, but they can’t meet when they are held in captivity, and they will ultimately need some form of power to free them.

Saturday, January 19, 2013

TED Book: Radical Openness by Don Tapscott and Anthony Williams

This new offering from TED Books looks interesting: Radical Openness: Four Unexpected Principles for Success. And at only $2.99 for the e-reader version (Kindle), it's quite affordable.

A brand new TED Book: Radical Openness


17 January 2013

At TEDGlobal 2012, Don Tapscott gave us a beautiful metaphor for how society could function: like a starling murmuration. By flying as a group — dipping and diving as a single unit — starlings successfully ward off predators through cooperation. While there is leadership, there is no discernible leader.

Tapscott shares much more of his vision of cooperation in the new TED Book, Radical Openness: Four Unexpected Principles for Success. In it, Tapscott — with co-author Anthony Williams — looks at how, around the world, people are connecting and collaborating in new ways. They give examples of how smart organizations are shunning their old, secretive practices and embracing the values of transparency and collaboration. Meanwhile, movements for freedom of information are exploding like never before. Overall, Tapscott and Williams show how this new philosophy is affecting many facets of our society, from the way we do business to whom we choose to govern us.

But while radical openness promises many exciting transformations, it also comes with new risks and responsibilities. Tapscott and Williams ask: How much information should we share and with whom? And what are the consequences of disclosing the intimate, unvarnished details of our businesses and personal lives?

Radical Openness is available for the Kindle and Nook, as well as through the iBookstore. Or download the TED Books app for your iPad or iPhone. A subscription costs $4.99 a month, and is an all-you-can-read buffet.
Check out Tapscott’s 2012 TED Talk.