Vulnerable world hypothesis


The vulnerable world hypothesis[1] or "black ball" hypothesis[2] is the idea that some disruptive technology (a "black ball") is likely to destroy the civilization that develops it unless extraordinary preventive measures are taken. The philosopher Nick Bostrom introduced the hypothesis in a 2019 paper in the journal Global Policy[3][1] and discussed it further in a 2022 essay in Aeon co-authored with Matthew van der Merwe.[4] The hypothesis is cited in discussions about the safety of advanced technologies.[5][6]

Background and definition

Bostrom illustrated the hypothesis with an urn analogy, likening technological invention to drawing balls from an urn, where the color of each ball represents its impact. White balls are beneficial and make up most of the balls drawn from the urn. Gray balls represent technologies with mixed or moderate effects. Black balls represent hypothetical technologies that by default destroy the civilization that invents them. According to Bostrom, humanity has so far avoided drawing a black ball largely through luck rather than carefulness or wisdom.[5]

Bostrom defined the vulnerable world hypothesis as the possibility that "If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition."[3][a] The "semi-anarchic default condition" refers here to a world with:[3][7]

  1. Limited capacity for preventive policing.
  2. Limited capacity for global governance.
  3. Actors with diverse motivations.[b]

Types of vulnerabilities


To illustrate these vulnerabilities, Bostrom proposed a classification system, gave examples of how technologies could have gone wrong, and offered policy recommendations such as differential technological development.[5][3] If a technology carrying such a vulnerability is developed, the measures thought necessary for survival (effective global governance or preventive policing, depending on the type of vulnerability) are controversial.[5][6][8] The classification includes:[3][1]

  • Type 0 ("surprising strangelets"): a technology carries a hidden risk and inadvertently devastates the civilization.

A hypothetical example is the possibility that nuclear bombs could have ignited the atmosphere. Before the Trinity nuclear test, a report commissioned by Robert Oppenheimer predicted that such ignition would not occur, but the report has been deemed shaky given the potential consequences: "One may conclude that the arguments of this paper make it unreasonable to expect that the N + N reaction could propagate. An unlimited propagation is even less likely. However, the complexity of the argument and the absence of satisfactory experimental foundation makes further work on the subject highly desirable."[5]

  • Type 1 ("easy nukes"): a technology gives small groups of people the ability to cause mass destruction.

The "easy nukes" thought experiment proposed by Nick Bostrom opens the question of what would have happened if nuclear chain reactions had been easier to produce, for example by "sending an electric current through a metal object placed between two sheets of glass."[5]

  • Type 2a ("safe first strike"): a technology has the potential to devastate civilization, and powerful actors are incentivized to use it, whether because striking first appears to bring an advantage or because of some tragedy-of-the-commons scenario.
  • Type 2b ("worse global warming"): a great many actors face incentives to take some slightly damaging action, such that the combined effect of those actions is civilizational devastation.

Mitigation

According to Bostrom, pausing technological progress may be neither possible nor desirable. An alternative is to prioritize technologies that are expected to have a positive impact and to delay those that may be catastrophic, a principle called differential technological development.[5]

The potential solutions vary depending on the type of vulnerability. Dealing with type-2 vulnerabilities may require highly effective global governance and international cooperation. For type-1 vulnerabilities, if the means of mass destruction ever become accessible to individuals, at least some small fraction of the population could be expected to use them.[5] In extreme cases, mass surveillance might be required to prevent the destruction of civilization, a controversial prospect that received significant media coverage.[9][10][11][12][13]

Technologies that have been proposed as potential vulnerabilities include advanced artificial intelligence, nanotechnology, and synthetic biology, the last of which may make it easy to create enhanced pandemics.[14][2][15][16]

Footnotes

  1. According to Nick Bostrom, it depends on whether society is in a "semi-anarchic default condition" (see § Background and definition).
  2. In particular, the motivation of at least some small fraction of the population to destroy civilization even at a personal cost. According to Bostrom: “Given the diversity of human character and circumstance, for any ever so imprudent, immoral, or self-defeating action, there is some residual fraction of humans who would choose to take that action.”[5]

References
