Most of us care not just about this generation, but also about preserving the planet for future generations. Because the future is so vast, the number of people who could exist in it is probably many times greater than the number of people alive today. This suggests that it may be extremely important to ensure that life on Earth continues, and that people in the future have positive lives. Of course, this idea can seem counterintuitive: we don't often think about the lives of our great-grandchildren, let alone their great-grandchildren. But just as we shouldn't ignore the plight of the global poor because they live in a foreign country, we shouldn't ignore future generations just because they are less visible.
Unfortunately, there are many ways in which we might miss out on a very positive long-term future. Climate change and nuclear war are well-known threats to the long-term survival of our species. Many researchers believe that risks from emerging technologies, such as advanced artificial intelligence and engineered pathogens, may be even more worrying. Of course, it is hard to be sure exactly how these technologies will develop, or what impact they'll have. But they seem to have the potential to radically shape the course of progress over the centuries to come. Because of the scale of the future, work on this problem is plausibly even higher-impact than work on the previous two cause areas.
Yet existential risks stemming from new technologies have been surprisingly neglected: there are just tens of people working worldwide on risks from AI or pathogens. US households spend around 2% of their budgets on personal insurance, on average. If we were to spend a comparable share of global resources on addressing risks to civilization, there would be millions of people working on these problems, with a budget of trillions of dollars per year. Instead, we spend just a tiny fraction of that amount, even though such risks may become substantial in the decades to come. If we value protection against unlikely but terrible outcomes individually, as our insurance spending suggests we do, we should also value protection against terrible outcomes collectively. After all, a collective catastrophe, like human extinction, is terrible for everyone individually, too. For this reason, it seems prudent for our civilization to spend more time and money mitigating existential risks.