A Long-Run Perspective on Strategic Cause Selection and Philanthropy
Is the long run actionable in the short run?
As just mentioned, we believe that maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity, and that this has strategic implications for people aiming to maximize good accomplished with their resources. We think these implications are significant when choosing between causes or program areas, and less significant when comparing opportunities within program areas.
There is a lot of detail behind this perspective and it is hard to summarize briefly. But here is an attempt to quickly explain our reasoning:
We think humanity has a reasonable probability of lasting a very long time, becoming very large, and/or eventually enjoying a very high quality of life. This could happen through radical (or even moderate) technological change, if industrial civilization persists as long as agriculture has persisted (though upper limits for life on Earth are around a billion years), or if future generations colonize other regions of space. Though we wouldn't bet on very specific details, we think some of these possibilities have a reasonable probability of occurring.
Because of this, we think that, from an impartial perspective, almost all of the potential good we can accomplish comes through influencing very long-run outcomes for humanity (a rough expected-value sketch follows this summary).
We believe long-run outcomes may be highly sensitive to how well humanity handles key challenges and opportunities, especially challenges from new technology, in the next hundred years or so.
We believe that (especially with substantial resources) we could have small but significant positive impacts on how effectively we face these challenges and opportunities, and thereby affect expected long-run outcomes for humanity.
We could face these challenges and opportunities more effectively by preparing for specific challenges and opportunities (such as nuclear security and climate change in the past and present, and advances in synthetic biology and artificial intelligence in the future), or by enhancing humanity's general capacities to deal with these challenges and opportunities when we face them (through higher rates of economic growth, improved political coordination, improved use of information and decision-making for individuals and groups, and increases in education and human capital).
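To make the scale intuition in the summary above concrete, here is a minimal back-of-the-envelope sketch. Every number in it (the population per century, the billion-year horizon, the size of the probability shift, the short-run benchmark) is a purely illustrative assumption rather than an estimate from this essay; the point is only that even a tiny shift in the probability of a vast future can dominate expected value.

```python
# Back-of-the-envelope expected-value comparison. Every number here is a
# purely illustrative assumption, not an estimate from the essay.

SHORT_RUN_LIVES = 1_000_000            # hypothetical lives improved by a short-run intervention

# Suppose civilization could persist on the order of a billion years (the
# rough upper limit for life on Earth mentioned above), with a hypothetical
# ten billion people alive per century.
FUTURE_CENTURIES = 10_000_000          # 1e9 years / 100 years per century
PEOPLE_PER_CENTURY = 10_000_000_000

potential_future_lives = FUTURE_CENTURIES * PEOPLE_PER_CENTURY   # 1e17

# Even a minuscule shift in the probability that this future is realized
# can dominate the short-run benefit in expectation.
delta_p = 1e-9                          # hypothetical one-in-a-billion probability shift
expected_long_run_lives = delta_p * potential_future_lives       # 1e8

print(f"Short-run lives improved:  {SHORT_RUN_LIVES:.1e}")
print(f"Expected long-run lives:   {expected_long_run_lives:.1e}")
# On these assumptions the probability shift is worth ~100x the short-run
# intervention, which is the sense in which long-run outcomes dominate.
```

On these stylized numbers a one-in-a-billion probability shift outweighs the short-run intervention by a factor of about one hundred; the conclusion is sensitive to the assumptions, but the asymmetry survives very large changes to them.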
We believe that this perspective diverges from the recommendations of a more short-run focus in a few ways.
First, when we consider attempts to prepare for global challenges and opportunities in general, we weigh such factors as economic output, log incomes, education, quality-adjusted life-years (QALYs), scientific progress, and governance quality differently than we would if we put less emphasis on long-run outcomes for humanity. In particular, a more short-term focus would lead to a much stronger emphasis on QALYs and log incomes, which we suspect could be purchased more cheaply through interventions targeting people in developing countries, e.g. through public health or more open migration (the sketch below illustrates why). Attending to long-run impacts creates a closer contest between such interventions and those which increase economic output or institutional quality (and thus the quality of our response to future challenges and opportunities). Our perspective would place an especially high premium on intermediate goals such as the quality of forecasting and the transmission of scientific knowledge to policy makers, which are disproportionately helpful for navigating global challenges and opportunities.
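The pull of log incomes toward the global poor can be seen in a short calculation. Assuming, as a simplification, that utility is the natural log of income, and using hypothetical annual incomes of $1,000 and $50,000, the same transfer produces a far larger utility gain at the lower income:

```python
import math

def log_income_gain(income: float, transfer: float) -> float:
    """Utility gain from a cash transfer, assuming utility = ln(income)."""
    return math.log(income + transfer) - math.log(income)

transfer = 1_000.0
poor_income = 1_000.0     # hypothetical annual income, low-income country
rich_income = 50_000.0    # hypothetical annual income, high-income country

print(f"Gain at $1,000/yr:  {log_income_gain(poor_income, transfer):.3f}")   # ~0.693
print(f"Gain at $50,000/yr: {log_income_gain(rich_income, transfer):.3f}")   # ~0.020
# The same transfer buys roughly 35x more log-income gain at the lower
# income, which is why a short-run focus points toward the global poor.
```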
Second, where we can identify specific major challenges or opportunities that bear on long-run outcomes for humanity, our perspective favors treating them with the utmost seriousness. We believe that reducing the risk of catastrophes with the potential to destroy humanity--which we call "global catastrophic risks" or sometimes "existential risks"--has an unusually clear and positive connection with long-run outcomes, and this is a reason we are unusually interested in problems in this area.
Third, the long-run perspective values resilience against permanent disruption or worsening of civilization over and above resilience to short-term catastrophe. From a long-run perspective, there is an enormous difference between a collapse of civilization followed by eventual recovery, versus a permanent collapse of civilization. This point has been made by philosophers like Derek Parfit (very memorably at the end of his book Reasons and Persons) and Peter Singer (in a short piece he wrote with Nick Beckstead and Matt Wage).
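Parfit's point can be put in rough numbers. The sketch below compares the two collapse scenarios under loudly hypothetical assumptions: a fixed population per century, a ten-century recovery in one case, and a ten-million-century remaining future forfeited in the other.

```python
# Recovery vs. permanence, in rough numbers. All figures are hypothetical
# assumptions chosen only to show the shape of the comparison.

PEOPLE_PER_CENTURY = 10_000_000_000   # hypothetical population per century
RECOVERY_CENTURIES = 10               # hypothetical time to rebuild after a collapse
FUTURE_CENTURIES = 10_000_000         # hypothetical remaining lifespan of civilization

loss_with_recovery = RECOVERY_CENTURIES * PEOPLE_PER_CENTURY   # bounded loss
loss_if_permanent = FUTURE_CENTURIES * PEOPLE_PER_CENTURY      # the entire future

print(f"Lives forgone, collapse with recovery: {loss_with_recovery:.1e}")
print(f"Lives forgone, permanent collapse:     {loss_if_permanent:.1e}")
# On these assumptions the permanent case is a million times worse, which
# is why resilience against permanent disruption gets special weight.
```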