At a board meeting last month, the other organizers of Harvard College Effective Altruism (HCEA) and I were worried in the best possible way: we were planning a talk by philosopher Peter Singer, and it had received over 1,500 RSVPs weeks in advance. With great excitement, we booked a second university lecture hall as overflow to accommodate this unprecedented interest. At the end of Singer’s talk on April 12, the audience frenetically rushed down to mob the speaker for autographs and selfies. The slight, grey-haired philosophy professor had celebrity status.
This level of enthusiasm “basically never happens for academic events,” said Nir Eyal, faculty advisor of HCEA. “It is a new phenomenon, and so far strictly student-driven, without involvement from the Harvard administration.” Indeed, it is rare for a philosophy talk to be met with such popularity, especially one on a topic still unknown to most of the public: effective altruism, the new social movement and philosophy that applies evidence and reason to find the most effective ways to improve the world. But perhaps this was not such a surprising occurrence at Harvard, where the EA movement has seen remarkable growth over the past few years.
When HCEA was founded in 2012, our activities consisted of occasional meet-ups to discuss EA ideas. In just three years, HCEA has ballooned into a prominent student organization. HCEA runs a series of well-attended public talks, a selective fellowship program on philanthropy, reading groups on specialized topics, innovative research, and a blog. The Spring 2015 fellowship received 67 applications for only 24 spots, and many talks this semester, in a speaker series that has hosted such luminaries as Jaan Tallinn, co-founder of Skype, and Elie Hassenfeld, co-founder of GiveWell, have left little standing room. The EA movement at Harvard has been a resounding success by any measure, and a similar trend is sweeping through other major American campuses, with active EA chapters at Princeton, Stanford, MIT, Berkeley, the University of Pennsylvania, and Yale. This rapid growth extends beyond the walls of the ivory tower: EA as a whole has become part of public debate and, for many, a way of life. Today, EA is at the critical point of going from marginal to mainstream.
The Birth of the EA Movement
The movement is a nascent one. No Wikipedia entry on EA existed until two years ago, and most of the organizations associated with the movement appeared within the past decade. GiveWell, which recommends charities for individual donors based on rigorous assessments of their performance, was founded in 2007. Giving What We Can was started in 2009 to encourage people to pledge to donate at least ten percent of their income to effective charities. Two years later, 80,000 Hours, which takes its name from the number of hours a typical person spends working over a career, was founded to provide advice on how to choose a career with a positive social impact. Among its conclusions is the non-obvious idea that to do the most good, it may be better to take a high-paying job in finance than to become an Oxfam charity worker, because by donating the difference in income a financier might fund the salaries of three Oxfam employees. Amid the founding of these organizations, the term “effective altruism” itself was coined in late 2011, and the Centre for Effective Altruism followed the next year.
The implications of EA are powerful. In 2013, individuals in the United States donated $240 billion to charity, roughly ten times the amount of U.S. foreign aid. According to Singer, most of these individuals do little to no research into the charities they donate to. If these funds were directed toward the charities that make the best use of their money, and toward the causes where the most impact can be made (which research has consistently shown to be those that help the global poor), we could maximize the benefits of our donations. In his talk, Singer cited David Geffen’s recent $100 million donation for naming rights to Avery Fisher Hall, noting how much more good Geffen could have done if he had used this money to cure one million cases of blindness from trachoma in the developing world instead of improving the aesthetic experience of already-comfortable New York concert-goers.
In addition to alleviating global poverty, EA prioritizes two other causes: animal welfare and existential risk, the threat that a future catastrophic event will cause the end of our species. These causes are in line with EA’s utilitarian logic of minimizing suffering and the impartiality principle best summarized by the 19th-century philosopher Henry Sidgwick: “The good of any one individual is of no more importance, from the point of view…of the Universe, than the good of any other.” So just as effective altruists value the lives of strangers, they also seek to reduce the suffering that animals experience from practices such as factory farming, and to do the most good for the human beings of the distant future whom they will never know. The most concerning existential risks are unfriendly artificial intelligence (AI), pandemics, nuclear war, asteroid impacts, and catastrophic climate change. The moral importance of existential risk cannot be overstated: a cataclysm wiping out humankind would cost innumerable lives, both present and future, rendering current anti-hunger campaigns and infectious disease prevention programs practically futile in comparison.
The consequences of EA thinking, though intellectually fascinating, can seem radical. Donating 10% of our income or more is a daunting challenge for anyone. And the idea that we should pour resources into research to prevent some unforeseeable existential risk is understandably met with incredulity by a public that associates the topic with geeky sci-fi movies and apocryphal doomsday declarations. But what are considered extreme stances do not have to remain that way. In the 1970s, moral vegetarianism was considered a fringe concept, but after four decades of animal rights activism by figures such as Henry Spira and Peter Singer, it is almost unheard of for eateries not to offer vegetarian and vegan options, and we have made significant strides in banning or discouraging animal testing and the wearing of animal skins. Right now, the EA movement appears poised to go mainstream as well. According to Harvard Professor of Philosophy and Moral Cognition Lab director Josh Greene, we have “finally entered phase two of the movement, where effective altruism is no longer an interesting philosophical puzzle but something to live by.”
EA’s Rise to Prominence
Indeed, an effective altruist today would find herself in good company. She has a roster of public EAs to look up to for inspiration and advice. Giving What We Can co-founder Toby Ord donated ten percent of his income while living on a graduate student’s salary at Oxford, and today he gives away everything he earns above £18,000. Elie Hassenfeld and Holden Karnofsky accepted huge pay cuts when they quit their jobs as hedge fund analysts to found GiveWell. Meanwhile, The Life You Can Save co-founder Matt Wage took the earning-to-give route, eschewing a promising academic career for a job on Wall Street so he could donate half of his six-figure income. Local EAs Julia Wise and Jeff Kaufman also give away half of their income, and Wise maintains a popular EA support blog and organizes monthly meet-ups for Boston-based EAs. There is an emerging community of people for whom the ideas of EA are no longer confined to the realm of theory but are being lived out daily. As their numbers increase and EA ideas gain traction, the movement is capturing mainstream attention and interest. Singer’s new book The Most Good You Can Do sold out within days of its release earlier this month and was the topic of an op-ed in the New York Times.
In fact, even the most theoretical ideas of EA, like those related to existential risk, have made great strides in credibility and public visibility, thanks to the backing of distinguished technologists and scientists. Tesla and SpaceX CEO Elon Musk recently announced a $10 million donation to the existential risk organization the Future of Life Institute (FLI). The book that inspired Musk, philosopher Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, was also endorsed by Bill Gates at the Boao Forum for Asia last month. Founded just last year at MIT, FLI also hosted a summit this past January on the future of AI, where Musk, Stephen Hawking, cosmologist Martin Rees, Skype co-founder Jaan Tallinn (also a co-founder of FLI), and leading AI researchers in academia and industry, including engineers from Google DeepMind, signed an open letter pledging to ensure that artificial intelligence research benefits mankind. The Machine Intelligence Research Institute, whose purpose mirrors that of FLI, counts among its biggest donors Peter Thiel of PayPal, Mt. Gox founder Jed McCaleb, and Tallinn, who also funds the Future of Humanity Institute at Oxford and the Centre for the Study of Existential Risk at Cambridge University. This vociferous support from the technological elite has made refusing to take existential risk seriously a difficult stance in 2015.
Back at Harvard, the conversation has long since moved past the question of EA’s validity. This year, from April 12 to April 17, Harvard held its first Effective Altruism Week. Launched with Singer’s talk, the campaign featured a career panel on earning-to-give at the Office of Student Services, a talk by economist Daron Acemoğlu on the causes of economic disparity around the world and what can be done to help nations prosper, an existential-risk-themed movie screening, and a competition to win a career-coaching package from 80,000 Hours.
If universities are the bellwether of social change, as they have often been in times past, then the enthusiasm that EA has inspired at Harvard and other institutions of higher learning may well trickle down to the rest of the population. After all, these ideas have been supported by numerous people of influence, and there is arguably a critical mass of people already putting the philosophy into practice in their lives. In time, the idea that we should give away a considerable portion of our earnings or pour millions of dollars into preventing a future unfriendly superintelligence may become as intuitive and commonsensical as the very idea underpinning the movement: that we should want to do good for the world and to do that good in the most effective way possible. And in time, such practices may be broadly and routinely adopted. I certainly hope—and will strive to help ensure—that the time will be soon.
Image credit: Ethan Alley