Existential risk

An existential risk is a hypothetical global catastrophic risk that would cause a permanent loss of humanity's potential. Such a loss of potential may be total, e.g. extinction, or partial, e.g. permanent universal totalitarianism. The study of existential risks presents unique challenges because of their unprecedented nature. Despite this, existential risks have received academic and social attention, particularly since the invention of nuclear weapons.

While nuclear winter remains an existential risk, others have since come to be considered more dangerous, particularly existential risk from artificial general intelligence and the risk of engineered pandemics. Estimates of the combined existential risk facing humanity vary considerably, but many fall in the range of 3-19% for the probability of an existential catastrophe within the next century. These estimates have been criticized by academics and industry figures as inflated and unsupported by data.

Since the 2000s, a number of academic and non-profit organizations have been established to research existential risk and formulate potential responses.

Definition & concept

Defining existential risk

Nick Bostrom defined existential risk as follows in a 2002 article: "[An existential risk is] one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential."[1] In a later paper, he offered a slightly altered definition: "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development."[2] Toby Ord offers a slightly simpler definition that is equivalent to Bostrom's: "existential risks are risks that threaten the destruction of humanity's long-term potential."[3]:37

These definitions identify the key features of existential risks: they are permanent (and therefore necessarily unprecedented), and they are global, affecting all of humanity as well as every subsequent generation. An existential catastrophe would be the moment when everything was lost, the end of the human story. Since it is sometimes desirable to talk about the events themselves rather than the risk of their occurrence, researchers use the term 'existential catastrophe' to describe such an event, while an 'existential risk' is the probability of such an event occurring.[4]

Difference from global catastrophic risks

The term global catastrophic risk "lacks a sharp definition", and generally refers (loosely) to a risk that could inflict "serious damage to human well-being on a global scale".[5] Humanity has suffered large catastrophes before. Some of these caused serious damage but were regional rather than global in scope: the Black Death may have resulted in the deaths of a third of Europe's population,[6] or 10% of the global population at the time,[7] and the European conquest of the Americas may have resulted in population losses of 90% among the indigenous inhabitants.[8] Others were global in scope but may not have been severe enough to threaten humanity's long-term potential: the 1918 influenza pandemic killed an estimated 3-6% of the world's population.[9] None of these catastrophes were existential; humanity was able to recover from all of them with its potential intact.[3]:124–5

Existential risks should be understood as a special subset of global catastrophic risks, where the damage is permanent, and the effects fall not only on the present generation of humanity, but also on all of their descendants. This shared feature of existential risks makes it valuable to study them as a class of their own, separate from global catastrophic risks in general.[10]

Non-extinction risks

Of all the species that have ever lived, 99% have gone extinct.[11] Extinction is the most obvious way in which humanity's potential could be destroyed, but there are others. More generally, a disaster severe enough to cause a permanent collapse of civilisation would constitute an existential catastrophe, even if it fell short of extinction. Similarly, if humanity fell under a totalitarian regime, and there were no chance of recovery, this too would be an existential catastrophe. George Orwell imagined one such dystopia in his 1949 novel Nineteen Eighty-Four:[12]

"Do you begin to see, then, what kind of world we are creating? ... A world of fear and treachery is torment, a world of trampling and being trampled upon, a world which will grow not less but more merciless as it refines itself. Progress in our world will be progress towards more pain. The old civilizations claimed that they were founded on love or justice. Ours is founded upon hatred ... In our world there will be no emotions except fear, rage, triumph, and self-abasement. Everything else we shall destroy—everything. There will be no art, no literature, no science. When we are omnipotent we shall have no more need of science. There will be no distinction between beauty and ugliness. There will be no curiosity, no enjoyment of the process of life. All competing pleasures will be destroyed. ... If you want a picture of the future, imagine a boot stamping on a human face—for ever."

Such a dystopian scenario shares the key features of extinction and of a permanent collapse of civilisation: before the catastrophe, humanity had a vast range of bright futures to choose from; after the catastrophe, humanity is locked forever in a terrible state.[3] Bryan Caplan writes that "perhaps an eternity of totalitarianism would be worse than extinction".[13]

Methodological challenges

Lack of historical precedent

Humanity has never suffered an existential catastrophe; by definition, if one ever occurs, it will be the first. Existential catastrophes are therefore necessarily unprecedented.[3] This poses a challenge for dealing with existential risk, since humanity cannot learn from a track record of previous events. It also means that existential risks are not easily subject to the usual standards of scientific rigour. Carl Sagan expressed this with regard to nuclear war: "Understanding the long-term consequences of nuclear war is not a problem amenable to experimental verification."[14]

Incentives and coordination

There are economic reasons why relatively little effort goes into existential risk reduction. Existential risk reduction is a global public good, so we should expect it to be undersupplied by markets. Furthermore, it is an intergenerational global public good: most of its benefits would be enjoyed by future generations, and though these future people might in theory be willing to pay substantial sums for existential risk reduction, no mechanism for such a transaction exists.[2]

Cognitive biases

Numerous cognitive biases can influence people's judgment of the importance of existential risks, including scope insensitivity, hyperbolic discounting, availability heuristic, the conjunction fallacy, the affect heuristic, and the overconfidence effect.[15]

Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they are willing to give does not increase linearly with the magnitude of the issue: people are roughly as concerned about 200,000 birds getting stuck in oil as they are about 2,000.[16] Similarly, people are often more concerned about threats to individuals than to larger groups.[15]

Moral importance of existential risk

In one of the earliest discussions of the ethics of human extinction, Derek Parfit offered the following thought experiment. Compare three outcomes: (1) peace; (2) a nuclear war that kills 99% of the world's existing population; (3) a nuclear war that kills 100%. Most people judge the difference between (1) and (2) to be the greater of the two, but Parfit argued that the difference between (2) and (3) is far greater, because only the third outcome forecloses the whole of humanity's future.[17]

The scale of what is lost in an existential catastrophe is determined by humanity's long-term potential: what humanity could expect to achieve if it survived. From a classical utilitarian perspective, the value of protecting humanity is the product of its duration (how long humanity survives), its size (how many humans there are over time), and its quality (on average, how good life is for future people).[3]:273 On average, species survive for around a million years before going extinct. Parfit points out that the Earth will remain habitable for around a billion years. And these might be lower bounds on our potential: if humanity is able to expand beyond Earth, it could survive for trillions of years.[3]:21 The potential that would be forgone if humanity went extinct is therefore very large.
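
As a purely illustrative aid, the sketch below multiplies the three factors named above (duration, size, and quality) using made-up numbers; none of the figures come from the cited sources.

```python
# Purely illustrative: the classical-utilitarian framing above treats the value
# at stake as the product of duration, population size over time, and average
# quality of life. All numbers here are invented for the sake of the example.

duration_years = 1_000_000            # e.g. a typical species lifespan, as a lower bound
average_population = 10_000_000_000   # assumed average number of people alive at any time
average_quality = 1.0                 # average wellbeing per life-year, in arbitrary units

life_years_at_stake = duration_years * average_population
value_at_stake = life_years_at_stake * average_quality
print(f"{value_at_stake:.1e} quality-adjusted life-years")  # 1.0e+16
```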

Some philosophers have defended views on which future people do not matter, morally speaking.[18] Even on such views, an existential catastrophe would be among the worst things imaginable. It would cut short the lives of eight billion presently existing people, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, there are strong reasons to reduce existential risk, grounded in concern for presently existing people.[19]

Beyond utilitarianism, other moral perspectives lend support to the importance of reducing existential risk. An existential catastrophe would destroy more than just humanity—it would destroy all cultural artefacts, languages, and traditions, and many of the things we value.[14] So moral viewpoints on which we have duties to protect and cherish things of value would see this as a huge loss that should be avoided. One can also consider reasons grounded in duties to past generations. Edmund Burke writes of a "partnership ... between those who are living, those who are dead, and those who are to be born".[20] If one takes seriously the debt humanity owes to past generations, Ord argues the best way of repaying it might be to 'pay it forward', and ensure that humanity's inheritance is passed down to future generations.[3]:49–51

Estimates of existential risk

There have been a number of estimates of existential risk, extinction risk, or a global collapse of civilisation:

  • John Leslie estimates a 30% risk over the next five centuries (equivalent to roughly 7% per century on average, assuming a constant per-century risk; see the conversion sketch after this list).[21]
  • Nick Bostrom gave the following estimate of existential risk over the long term: ‘My subjective opinion is that setting this probability lower than 25% would be misguided, and the best estimate may be considerably higher.’[1]
  • In 2003, Martin Rees estimated a 50% chance of collapse of civilisation in the twenty-first century.[22]
  • A 2008 survey of attendees at a workshop hosted by the Future of Humanity Institute estimated 19% risk of human extinction in the next century.[23]
  • Toby Ord estimates existential risk in the next century at ‘1 in 6’.[3]:31
  • A 2016 survey of AI experts found a median estimate of 5% that human-level AI would cause an outcome that was "extremely bad (e.g. human extinction)".[24]
  • Metaculus users currently estimate a 3% probability of humanity going extinct before 2100.[25]
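
These estimates are quoted over different time horizons, which makes them difficult to compare directly. The sketch below is a purely illustrative calculation that assumes a constant and independent risk in each period (an assumption none of the cited authors necessarily make); the function per_period_risk is hypothetical.

```python
# Illustrative only: convert a cumulative risk over a multi-period horizon into
# an average per-period risk, assuming the risk is constant and independent in
# each period. The cited estimates were not necessarily derived this way.

def per_period_risk(cumulative_risk: float, periods: float) -> float:
    """Return p such that 1 - (1 - p) ** periods == cumulative_risk."""
    return 1.0 - (1.0 - cumulative_risk) ** (1.0 / periods)

# Leslie's 30% over five centuries works out to roughly 6.9% per century:
print(f"{per_period_risk(0.30, 5):.3f}")     # 0.069

# Ord's 1-in-6 over a century corresponds to roughly 0.18% per year:
print(f"{per_period_risk(1 / 6, 100):.4f}")  # 0.0018
```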

Sources of existential risk

Natural vs. anthropogenic

Homo sapiens, like other species, has always been subject to risks from natural catastrophes. Of all species that have ever lived, 99% have gone extinct.[11] Earth has experienced numerous mass extinction events, in which up to 96% of all species present at the time were eliminated.[11] A notable example is the K-T extinction event, which killed the non-avian dinosaurs. More recently, however, humanity has begun to pose risks to itself through its own actions, in addition to these natural risks. Toby Ord suggests that the first such anthropogenic (human-originating) risks arose in the twentieth century: risks from nuclear warfare and from man-made climate change.[3]

A key difference between natural and anthropogenic risks is that empirical evidence can place an upper bound on the level of natural risk. Humanity has existed for at least 200,000 years, over which it has been subject to a roughly constant level of natural risk. If the natural risk were high, then it would be highly unlikely that humanity would have survived as long as it has. Snyder-Beattie, Ord & Bonsall formalise this argument and conclude that we can be confident that natural risk is lower than 1 in 140,000 per year.[26]
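
The following sketch illustrates the shape of this survival-track-record argument in simplified form. It is not the statistical method used by Snyder-Beattie, Ord and Bonsall; the constant-annual-risk model and the 10% survival-probability threshold are assumptions made only for illustration.

```python
# Simplified illustration of the survival-track-record argument, assuming a
# constant annual extinction probability r, so the chance of surviving T
# consecutive years is (1 - r) ** T. Not the method of the cited paper.

T = 200_000  # years that Homo sapiens has survived, per the text above

def survival_probability(annual_risk: float, years: int = T) -> float:
    """Probability of surviving `years` consecutive years at a constant annual risk."""
    return (1.0 - annual_risk) ** years

# A high natural risk would make humanity's long track record of survival implausible:
print(f"{survival_probability(1 / 14_000):.1e}")   # about 6e-07
print(f"{survival_probability(1 / 140_000):.2f}")  # about 0.24

# Requiring the survival probability to exceed an illustrative 10% threshold
# bounds the annual risk at roughly 1 in 87,000:
r_max = 1.0 - 0.1 ** (1.0 / T)
print(f"1 in {1 / r_max:,.0f}")
```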

Since anthropogenic risk is a relatively recent phenomenon, humanity's track record of survival cannot provide similar assurances. Humanity has survived only 75 years since the creation of nuclear weapons, and for future technologies there is no track record at all. This has led thinkers such as Carl Sagan to conclude that humanity is currently living in a 'time of perils'[27]: a uniquely dangerous period in human history, beginning when we first acquired the means to endanger ourselves, in which humanity is subject to unprecedented levels of risk.[28]

Major risks

Organizations working on existential risk reduction

Criticism

Steven Pinker has argued that estimates of existential risk “tend to be inflated by the Availability and negativity biases”, and by perverse incentives under which those with high risk estimates are seen as “serious and responsible, while those who are measured are seen as complacent and naive”. He argues that “technology has not made this a uniquely dangerous era in the history of our species, but a uniquely safe one”, owing to the protection it provides against natural risks. Specifically, he suggests that bioterrorism may be a “phantom menace”, and that arguments for artificial intelligence risk are “self-refuting”.[29][page needed] William MacAskill has argued that it is unlikely we are living at the most important time in history, which very high levels of existential risk would imply.[30]

The case for existential risk from AI has some notable skeptics. Gordon Moore, the original proponent of Moore's law, has said: "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."[31] Baidu Vice President Andrew Ng has said that AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."[32] Computer scientist Oren Etzioni has claimed that "predictions that superintelligence is on the foreseeable horizon are not supported by the available data."[33]

History

Early history of thinking about human extinction

Prior to the 18th and 19th centuries, the possibility that humans or other organisms could go extinct was viewed with scepticism. It contradicted the principle of plenitude, the doctrine that all possible things exist. The principle traces back to Aristotle and was an important tenet of Christian theology.[34]:121 The doctrine was gradually undermined by evidence from the natural sciences, particularly the discovery of fossil evidence of species that appeared to no longer exist, and by the development of theories of evolution.[34]:121 In On the Origin of Species, Darwin discussed the extinction of species as a natural process and a core component of natural selection.[35] Notably, Darwin was sceptical of the possibility of sudden extinctions, viewing extinction as a gradual process. He held that the abrupt disappearance of species from the fossil record was not evidence of catastrophic extinctions, but rather reflected unrecognised gaps in the record.[35]

As the possibility of extinction became more widely established in the sciences, so did the prospect of human extinction. Beyond science, human extinction was also explored in literature, and the Romantic authors and poets were particularly interested in the topic. Lord Byron wrote about the extinction of life on Earth in his 1816 poem ‘Darkness’, and in 1824 envisaged humanity being threatened by a comet impact and employing a missile system to defend against it.[36] Mary Shelley’s 1826 novel The Last Man is set in a world where humanity has been nearly destroyed by a mysterious plague.[36]

Atomic era

The invention of the atomic bomb prompted a wave of discussion about the risk of human extinction among scientists, intellectuals, and the public at large. In a 1945 essay, Bertrand Russell wrote that "[T]he prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense."[37] A 1950 Gallup poll found that 19% of Americans believed that another world war would mean "an end to mankind".[38]

The discovery of 'nuclear winter' in the early 1980s, a specific mechanism by which nuclear war could result in human extinction, again raised the issue to prominence. Writing about these findings in 1983, Carl Sagan argued that measuring the badness of extinction solely in terms of those who die "conceals its full impact," and that nuclear war "imperils all of our descendants, for as long as there will be humans."[14]

Modern

John Leslie's 1996 book The End of the World was an academic treatment of the science and ethics of human extinction. In it, Leslie considered a range of threats to humanity and what they have in common. In 2005, Nick Bostrom founded the Future of Humanity Institute at the University of Oxford to study existential risk. Since then, a number of other organizations focusing on existential risk have been established, including the Centre for the Study of Existential Risk at the University of Cambridge and the Future of Life Institute.

References

  1. Bostrom, Nick (2002), "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards", Journal of Evolution and Technology, 9
  2. Bostrom, Nick (February 2013), "Existential Risk Prevention as Global Priority", Global Policy, 4: 15–31, doi:10.1111/1758-5899.12002
  3. Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. New York: Hachette. ISBN 9780316484916.
  4. Cotton-Barratt, Owen; Ord, Toby (2015), Existential risk and existential hope: Definitions (PDF), Future of Humanity Institute – Technical Report #2015-1, pp. 1–4
  5. Bostrom, Nick; Cirkovic, Milan (2008). Global Catastrophic Risks. Oxford: Oxford University Press. p. 1. ISBN 978-0-19-857050-9.
  6. Ziegler, Philip (2012). The Black Death. Faber and Faber. p. 397. ISBN 9780571287116.
  7. Muehlhauser, Luke (15 March 2017). "How big a deal was the Industrial Revolution?". lukemuelhauser.com. Retrieved 3 August 2020.
  8. Nunn, Nathan; Qian, Nancy (2010). "The Columbian Exchange: A History of Disease, Food, and Ideas". Journal of Economic Perspectives. 24 (2): 163–188. doi:10.1257/jep.24.2.163.
  9. Taubenberger, Jeffery; Morens, David (2006). "1918 Influenza: the Mother of All Pandemics". Emerging Infectious Diseases. 12 (1): 15–22. doi:10.3201/eid1201.050979. PMC 3291398. PMID 16494711.
  10. Bostrom, Nick; Cirkovic, Milan (2008). Global Catastrophic Risks. Oxford: Oxford University Press. p. 1. ISBN 978-0-19-857050-9.
  11. Barnosky, Anthony D.; Matzke, Nicholas; Tomiya, Susumu; Wogan, Guinevere O. U.; Swartz, Brian; Quental, Tiago B.; Marshall, Charles; McGuire, Jenny L.; Lindsey, Emily L.; Maguire, Kaitlin C.; Mersey, Ben; Ferrer, Elizabeth A. (3 March 2011). "Has the Earth's sixth mass extinction already arrived?". Nature. 471 (7336): 51–57. Bibcode:2011Natur.471...51B. doi:10.1038/nature09678. PMID 21368823.
  12. Orwell, George (1949). Nineteen Eighty-Four. A novel. London: Secker & Warburg.
  13. Caplan, Bryan (2008). "The totalitarian threat". In Bostrom, Nick; Cirkovic, Milan. Global Catastrophic Risks. Oxford University Press. pp. 504–519. ISBN 9780198570509.
  14. Sagan, Carl (Winter 1983). "Nuclear War and Climatic Catastrophe: Some Policy Implications". Foreign Affairs. Council on Foreign Relations. doi:10.2307/20041818. JSTOR 20041818. Retrieved 4 August 2020.
  15. Yudkowsky, Eliezer (2008). "Cognitive Biases Potentially Affecting Judgment of Global Risks" (PDF). Global Catastrophic Risks: 91–119. Bibcode:2008gcr..book...86Y.
  16. Desvousges, W.H., Johnson, F.R., Dunford, R.W., Boyle, K.J., Hudson, S.P., and Wilson, N. 1993, Measuring natural resource damages with contingent valuation: tests of validity and reliability. In Hausman, J.A. (ed), Contingent Valuation:A Critical Assessment, pp. 91–159 (Amsterdam: North Holland).
  17. Parfit, Derek (1984). Reasons and Persons. Oxford University Press. pp. 453–454.
  18. Narveson, Jan (1973). "Moral Problems of Population". The Monist. 57 (1): 62–86. doi:10.5840/monist197357134. PMID 11661014.
  19. Lewis, Gregory (23 May 2018). "The person-affecting value of existential risk reduction". www.gregoryjlewis.com. Retrieved 7 August 2020.
  20. Burke, Edmund (1999) [1790]. "Reflections on the Revolution in France" (PDF). In Canavan, Francis. Select Works of Edmund Burke Volume 2. Liberty Fund. p. 192.
  21. Leslie, John (1996). The End of the World: The Science and Ethics of Human Extinction. Routledge. p. 146.
  22. Rees, Martin (2004) [2003]. Our Final Century. Arrow Books. p. 9.
  23. Global Catastrophic Risks Survey, Technical Report, 2008, Future of Humanity Institute
  24. Grace, Katja; Salvatier, John; Dafoe, Allen; Zhang, Baobao; Evans, Owain (3 May 2018). "When Will AI Exceed Human Performance? Evidence from AI Experts". arXiv:1705.08807 [cs.AI].
  25. "Human extinction by 2100". Metaculus. https://www.metaculus.com/questions/578/human-extinction-by-2100
  26. Snyder-Beattie, Andrew; Ord, Toby; Bonsall, Michael (July 2019). "An upper bound for the background rate of human extinction". Scientific Reports. 9: 11054. Bibcode:2019NatSR...911054S. doi:10.1038/s41598-019-47540-7.
  27. Sagan, Carl (1994). Pale Blue Dot. Random House. pp. 305–6. ISBN 0-679-43841-6. "Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others are not so lucky or so prudent, perish."
  28. Parfit, Derek (2011). On What Matters Vol. 2. Oxford University Press. p. 616. ISBN 9780199681044. "We live during the hinge of history ... If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period."
  29. Pinker, Steven (2018). Enlightenment Now. Viking. ISBN 978-0-525-42757-5.
  30. MacAskill, William (3 September 2019). "Are we living at the most influential time in history?". Retrieved 14 August 2020.
  31. "Tech Luminaries Address Singularity". IEEE Spectrum: Technology, Engineering, and Science News (SPECIAL REPORT: THE SINGULARITY). 1 June 2008. Retrieved 8 April 2020.
  32. Shermer, Michael (1 March 2017). "Apocalypse AI". Scientific American. p. 77. Bibcode:2017SciAm.316c..77S. doi:10.1038/scientificamerican0317-77. Retrieved 27 November 2017.
  33. Etzioni, Oren (20 September 2016). "No, the Experts Don't Think Superintelligent AI is a Threat to Humanity". MIT Technology Review. Retrieved 7 August 2020.
  34. Darwin, Charles; Costa, James T. (2009). The Annotated Origin. Harvard University Press. ISBN 978-0674032811.
  35. Raup, David M. (1995). "The Role of Extinction in Evolution". In Fitch, W. M.; Ayala, F. J. Tempo And Mode In Evolution: Genetics And Paleontology 50 Years After Simpson.
  36. Moynihan, Thomas (2019). "The end of us". Aeon. Retrieved 14 August 2020.
  37. Russell, Bertrand (1945). "The Bomb and Civilization". Archived from the original on 7 August 2020.
  38. Erskine, Hazel Gaudet (1963). "The Polls: Atomic Weapons and Nuclear Energy". The Public Opinion Quarterly. 27 (2): 155–190. doi:10.1086/267159. JSTOR 2746913.

