Risk of astronomical suffering
Scenarios of large amounts of future suffering
Risks of astronomical suffering, also called suffering risks or s-risks, are risks involving much more suffering than all that has occurred on Earth so far.[3][4]

According to some scholars, s-risks warrant serious consideration as they are not extremely unlikely and can arise from unforeseen scenarios. Although they may appear speculative, factors such as technological advancement, power dynamics, and historical precedents indicate that advanced technology could inadvertently result in substantial suffering.[5]
Sources of possible s-risks include advanced artificial intelligence,[6][7] space colonization[8] and the spread of wild animal suffering to other planets.[9]
S-risks can be intentional, driven by factors like tribalism, sadism or a desire for retribution; or incidental, for example arising as a byproduct of economic incentives.[10]
Sources of s-risk
Artificial intelligence
Artificial intelligence is central to s-risk discussions because it may eventually enable powerful actors to control vast technological systems. In a worst-case scenario, AI could be used to create systems of perpetual suffering, such as a totalitarian regime expanding across space.[11] Additionally, s-risks might arise incidentally, such as through AI-driven simulations of conscious beings experiencing suffering, or from economic activities that disregard the well-being of nonhuman or digital minds.[4] Steven Umbrello, an AI ethics researcher, has warned that biological computing may make system design more prone to s-risks.[6]
Space colonization
Space colonization could increase suffering by introducing wild animals to new environments, where they may struggle to survive and face hunger, disease, and predation on a large scale.[12] Phil Torres argues that space colonization poses significant "suffering risks": expansion into space would create diverse civilizations and post-human species with conflicting interests, and these differences, combined with advanced weaponry and the vast distances that slow communication between civilizations, could produce catastrophic and unresolvable conflicts. Strategies such as a "cosmic Leviathan" to impose order, or policies of deterrence, are constrained by the physics of space and the destructive power of future technologies. Torres therefore believes that space colonization should be delayed or avoided altogether.[13]
Magnus Vinding's "astronomical atrocity problem" questions whether vast amounts of happiness can justify extreme suffering from space colonization. He highlights moral concerns such as diminishing returns on positive goods, the potentially incomparable weight of severe suffering, and the priority of preventing misery. He argues that if colonization is inevitable, it should be led by agents deeply committed to minimizing harm.[14]
Malevolent actors and ideologies

S-risk scenarios may also arise intentionally. Such risks are heightened in contexts like warfare or terrorism, especially when advanced technology is involved, since conflict can amplify destructive tendencies such as sadism, tribalism, and retributivism. War often intensifies these dynamics, including the possibility of catastrophic threats being used to force concessions. Agential s-risks are further aggravated by malevolent traits in powerful individuals, such as narcissism or psychopathy, as exemplified by totalitarian dictators like Hitler and Stalin, whose actions in the 20th century inflicted widespread suffering.[10]
Genetic engineering
David Pearce has argued that genetic engineering is a potential s-risk. While technological mastery over the pleasure-pain axis could eventually be used to eradicate suffering, he argues, it could also be used to create "hyperpain" or "dolorium", states of suffering beyond the upper bounds of the human range.[15]
Mitigation strategies
To mitigate s-risks, efforts focus on researching and understanding the factors that exacerbate them, particularly in emerging technologies and social structures. Targeted strategies include promoting safe AI design, ensuring cooperation among AI developers, and modeling future civilizations to anticipate risks. Broad strategies may advocate for moral norms against large-scale suffering and stable political institutions.[16]