
The Precipice: Existential Risk and the Future of Humanity

Nonfiction | Book | Adult | Published in 2020


Index of Terms

Anthropogenic Risk

Anthropogenic risk refers to existential threats that result from human action, as opposed to natural events. The Precipice discusses how the rise of technology and industry has created new dangers that humanity must navigate, such as nuclear warfare, climate change, and biotechnology risks. These risks are a focal point of the book because, if not managed properly, they could lead to human extinction.

Toby Ord contrasts anthropogenic risks with natural risks such as supervolcanic eruptions, supernovae, and asteroid impacts. Though these events could trigger an extinction-level catastrophe, Ord argues that humanity’s own innovations pose a far greater threat to its survival.

Artificial Intelligence

Artificial Intelligence (AI) refers to machines programmed to perform tasks that typically require human intelligence, encompassing capabilities such as machine learning, problem-solving, and natural language comprehension. AI systems can analyze data, recognize patterns, and adapt to changing circumstances, enabling tasks such as image and speech recognition, decision-making, and translation. AI has applications across industries, from healthcare and finance to transportation and entertainment, and its ongoing development raises important ethical and societal considerations.

In The Precipice, Ord acknowledges that AI holds immense promise for enhancing efficiency, automating tasks, and solving complex problems across various domains, in many cases improving quality of life. However, the rapid evolution of AI raises concerns about ethical dilemmas, job displacement, and the potential for unintended consequences. Ord sees responsible AI development and robust ethical frameworks as critical interventions for harnessing the positive aspects of AI while mitigating its risks. Ultimately, The Precipice argues for a balance between innovation and ethical consideration, one that realizes the potential of artificial intelligence while averting its long-term dangers.

Dystopia

Dystopia refers to an imagined, often nightmarish society characterized by oppression, suffering, and a dehumanizing environment. Such a society is usually depicted as the result of a large-scale catastrophe, such as war, famine, or disease, that has changed life irrevocably for the worse.

Dystopian scenarios often revolve around the unchecked development and misuse of advanced technologies. In The Precipice, Ord argues that the irresponsible use of technology could permanently alter society. Potential existential threats include artificial intelligence surpassing human control, ecological devastation that results from unchecked technological development, and the unintended consequences of biotechnological experiments. The discussion of dystopian scenarios in The Precipice underscores the importance of ethical considerations and responsible innovation, which can mitigate risks and ensure that technological advancements contribute positively to collective well-being rather than leading to catastrophic outcomes.

Existential Risk

Existential risk is defined as a risk “that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom, Nick. “Existential Risk Prevention as Global Priority.” Global Policy, vol. 4, no. 1, 2013). The Precipice emphasizes the importance of understanding, mitigating, and preventing existential risks to ensure the survival and flourishing of humanity. The term is central to the book’s exploration of potential global hazards and strategies for dealing with them.

Longtermism

Longtermism is an ethical stance that emphasizes the importance of taking actions that positively influence the long-term future. The term was coined by Scottish philosopher William MacAskill and Australian philosopher Toby Ord in 2017 and was expanded upon in The Precipice and MacAskill’s book What We Owe the Future (2022). Advocates of longtermism argue that decisions and actions should be guided by their potential impact on the distant future, considering the consequences that may unfold over centuries or even millennia. This philosophy encourages a broad and forward-looking approach to problem-solving, addressing issues such as climate change, existential risks, and technological advancements with a focus on their long-term implications.

Proponents believe that by adopting a longtermist mindset, societies can make choices that contribute to the sustained flourishing of humanity and the preservation of a thriving planet. However, longtermism also demands careful consideration of uncertainties, ethical implications, and the practicality of implementing policies that align with such far-reaching goals. As a philosophical framework, longtermism stimulates discourse on the responsibilities of individuals, institutions, and societies in shaping a future that extends beyond immediate concerns, fostering a holistic and deliberative approach to decision-making.

Ord’s book posits that safeguarding the future of humanity is a moral imperative and that current generations have a responsibility to future ones. The concept of longtermism underpins the book’s philosophical perspective on existential risks and humanity’s priorities.

Precipice

The “Precipice” in the book’s title refers to the current period in human history, in which the potential for global catastrophe is particularly high because powerful technologies have developed without corresponding increases in wisdom or governance. Ord uses the term to describe the perilous nature of our times, urging a careful and deliberate approach to reducing existential risks. The Precipice is not merely a high point but an edge: a decision point at which humanity can either press ahead without considering the long-term dangers of its actions or step back from disaster, maintaining progress while reducing risk.

Technological Stagnation

Technological stagnation is a potential future scenario in which technological progress significantly slows or stops. In The Precipice, Ord explores this possibility in the context of existential risk, examining how a halt in technological development could undermine humanity’s ability to solve crucial problems and progress toward a safer future. Ord emphasizes that his goal is not to halt technological progress but to pursue it with collective wisdom and concern for humanity’s long-term future.

The Doomsday Clock

Ord’s metaphor of the Precipice echoes the Doomsday Clock, created in 1947 by the Bulletin of the Atomic Scientists to show how close humanity is to nuclear annihilation. The Clock’s original setting was seven minutes to midnight, with “midnight” representing human extinction and the setting emphasizing the extreme danger nuclear proliferation posed to humanity’s survival. Ord references the Clock’s 2018 setting of two minutes to midnight, a move driven in part by escalating nuclear tensions between the US and North Korea.

The Doomsday Clock is still maintained; as of January 24, 2023, it stands at 90 seconds to midnight, a setting that cites the severity of climate change, the persistent threat of nuclear war, pandemics such as COVID-19, and the malicious use of emerging technologies.

