Longtermism: a call to protect future generations
https://80000hours.org/articles/future-generations/

When the 19th-century amateur scientist Eunice Newton Foote filled glass cylinders with different gases and exposed them to sunlight, she uncovered a curious fact. Carbon dioxide became hotter than regular air and took longer to cool down.1

Remarkably, Foote saw what this momentous discovery meant.

“An atmosphere of that gas would give our earth a high temperature,” she wrote in 1857.2

Though Foote could hardly have known it at the time, the potential for global warming due to carbon dioxide would have massive implications for the generations that came after her.

If we ran history over again from that moment, we might hope that this key discovery about carbon's role in the atmosphere would inform governments' and industries' choices in the coming century. They probably shouldn't have avoided carbon emissions altogether, but they could have prioritised developing alternatives to fossil fuels much sooner in the 20th century. That might have prevented much of the destructive climate change that people today are already beginning to live through, and which will affect future generations as well.

We believe it would've been much better if previous generations had acted on Foote's discovery, especially by the 1970s, when climate models were beginning to reliably show the future course of global warming.3

If this seems right, it’s because of a commonsense idea: to the extent that we are able to, we have strong reasons to consider the interests and promote the welfare of future generations.

That was true in the 1850s, it was true in the 1970s, and it’s true now.

But despite the intuitive appeal of this moral idea, its implications have been underexplored. For instance, if we care about generations 100 years in the future, it’s not clear why we should stop there.

And when we consider how many future generations there might be, and how much better the future could go if we make good decisions in the present, our descendants’ chances to flourish take on great importance. In particular, we think this idea suggests that improving the prospects for all future generations is among the most morally important things we can do.

This article will lay out the argument for this view, which goes by the name longtermism.

We’ll say where we think the argument is strongest and weakest, respond to common objections, and say a bit about what we think this all means for what we should do.


We’d like to give special thanks to Ben Todd, who wrote a previous version of this essay, and Fin Moorhouse, who gave insightful comments on an early draft.


[Image credit: J Zapell, Public Domain, CC0]

The case for longtermism

While most people recognise that future generations matter morally to some degree, there are two other key premises in the case for longtermism that we believe are true and underappreciated. In full, the premises are:

  1. We should care about how the lives of future individuals go.
  2. The number of future individuals whose lives matter could be vast.
  3. We have an opportunity to affect how the long-run future goes — whether there may be many flourishing individuals in the future, many suffering individuals in the future, or perhaps no one at all.4

In the rest of this article, we’ll explain and defend each of these premises. Because the stakes are so high, this argument suggests that improving the prospects for all future generations should be a top moral priority of our time. If we’re able to make an exceptionally big impact, positively influencing many lives with enduring consequences, it’s incumbent upon us to take this seriously.

This doesn’t mean it’s the only morally important thing — or that the interests of future generations matter to the total exclusion of the present generation. We disagree with both of those claims.

There’s also a good chance this argument is flawed in some way, so much of this article discusses objections to longtermism. While we don’t find them on the whole convincing, some of them do reduce our confidence in the argument in significant ways.


However, we think it’s clear that our society generally neglects the interests of future generations. Philosopher Toby Ord, an advisor to 80,000 Hours, has argued that at least by some measures, the world spends more money on ice cream each year than it does on reducing the risks to future generations.5

Since we believe the argument for longtermism is broadly compelling, we think we should do a lot more than we currently do to make sure the future goes well rather than badly.

It’s also crucial to recognise that longtermism by itself doesn’t say anything about how best to help the future in practice, and this is a nascent area of research. Longtermism is often confused with the idea that we should do more long-term planning. But we think the primary upshot is that it makes it more important to urgently address extinction risks in the present — such as catastrophic pandemics, an AI disaster, nuclear war, or extreme climate change. We discuss the possible implications in the final section.

But first, why do we think the three premises above are true?

1. We should care about how the lives of future individuals go

Should we actually care about people who don’t exist yet?

The discussion of climate change in the introduction is meant to draw out the common intuition that we do have reason to care about future generations. But sometimes, especially when considering the implications of longtermism, people doubt that future generations matter at all.

Derek Parfit, an influential moral philosopher, offered a simple thought experiment to illustrate why it’s plausible that future people matter:

Suppose that I leave some broken glass in the undergrowth of a wood. A hundred years later this glass wounds a child. My act harms this child. If I had safely buried the glass, this child would have walked through the wood unharmed.

Does it make a moral difference that the child whom I harm does not now exist?6

We agree it would be wrong to dispose of broken glass in a way that is likely to harm someone. It’s still wrong if the harm is unlikely to occur until 5 or 10 years have passed — or in another century, to someone who isn’t born yet. And if someone else happens to be walking along the same path, they too would have good reason to pick up the glass and protect any child who might get harmed at any point in the future.

But Parfit also saw that thinking about these issues raised surprisingly tricky philosophical questions, some of which have yet to be answered satisfactorily. One central issue is called the ‘non-identity problem’, which we’ll discuss in the objections section below. However, these issues can get complex and technical, and not everyone will be interested in reading through the details.

Despite these puzzles, there are many cases similar to Parfit’s example of the broken glass in the woods in which it’s clearly right to care about the lives of future people. For instance, parents-to-be rightly make plans based around the interests of their future children even prior to conception. Governments are correct to plan for the coming generations not yet born. And if it is reasonably within our power to prevent a totalitarian regime from arising 100 years from now,7 or to avoid using up resources our descendants may depend on, then we ought to do so.

While longtermism may seem to some like abstract, obscure philosophy, it in fact would be much more bizarre and contrary to common sense to believe we shouldn’t care about people who don’t yet exist.

2. The number of future individuals whose lives matter could be vast

Humans have been around for hundreds of thousands of years. It seems like we could persist in some form for at least a few hundred thousand more.

There is, though, a serious risk that we'll cause our own extinction — as we'll discuss more below. But absent that, humans have proven extremely inventive and resilient. We survive in a wide range of circumstances, due in part to our ability to use technology to adjust our bodies and our environments as needed.

How long can we reasonably expect the human species to survive?

That's harder to say. More than 99 percent of the species that have ever lived on Earth have gone extinct,8 often after surviving for just a few million years.9


But if you look around, it seems clear humans aren’t the average Earth species. It’s not ‘speciesist’ — unfairly discriminatory on the basis of species membership — to say that humans have achieved remarkable feats for an animal: conquering many diseases through invention, spreading across the globe and even into orbit, expanding our life expectancy, and splitting the atom.

It’s possible our own inventiveness could prove to be our downfall. But if we avoid that fate, our intelligence may let us navigate the challenges that typically bring species to their ends.

For example, we may be able to detect and deflect comets and asteroids, which have been implicated in past mass extinction events.

If we can forestall extinction indefinitely, we may be able to thrive on Earth for as long as it’s habitable — which could be another 500 million years, perhaps more.

Around 8 billion humans are alive today, and roughly 100 billion humans have ever lived. If we survive to the end of Earth's habitable period, all those who have existed so far will have been the first raindrops in a hurricane.

If we’re just asking about what seems possible for the future population of humanity, the numbers are breathtakingly large. Assuming for simplicity that there will be 8 billion people for each century of the next 500 million years,10 our total population would be on the order of forty quadrillion. We think this clearly demonstrates the importance of the long-run future.
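To make the arithmetic behind that figure explicit, here is a minimal sketch in Python. The 8-billion-per-century population and the 500-million-year horizon are the simplifying assumptions from the paragraph above, not predictions:

```python
# Back-of-the-envelope future population estimate, using the article's simplifying assumptions.
years_remaining = 500_000_000        # assumed remaining habitable lifetime of Earth
people_per_century = 8_000_000_000   # assume population stays at roughly today's level
centuries_remaining = years_remaining / 100

total_future_people = people_per_century * centuries_remaining
print(f"{total_future_people:.0e}")  # 4e+16 -- on the order of forty quadrillion people
```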

And even that might not be the end. While it remains speculative, space settlement may point the way toward outliving our time on planet Earth.11 And once we’re no longer planet-bound, the potential number of people worth caring about really starts getting big.

In What We Owe the Future, philosopher and 80,000 Hours co-founder Will MacAskill wrote:

…if humanity ultimately takes to the stars, the timescales become literally astronomical. The sun will keep burning for five billion years; the last conventional star formations will occur in over a trillion years; and, due to a small but steady stream of collisions between brown dwarfs, a few stars will still shine a million trillion years from now.

The real possibility that civilisation will last such a long time gives humanity an enormous life expectancy.

Some of this discussion may sound speculative and fantastical — which it is! But if you consider how fantastical our lives and world would seem to humans 100,000 years ago, you should expect that the far future could seem at least as alien to us now.

And it’s important not to get bogged down in the exact numbers. What matters is that there’s a reasonable possibility that the future is very long, and it could contain a much greater number of individuals.12 So how it goes could matter enormously.

There’s another factor that expands the scope of our moral concern for the future even further. Should we care about individuals who aren’t even human?

It seems true to us that the lives of non-human animals in the present day matter morally — which is why factory farming, in which billions of farmed animals suffer every day, is such a moral disaster.13 The suffering and wellbeing of future non-human animals matters no less.

And if the far-future descendants of humanity evolve into a different species, we should probably care about their wellbeing as well. We think we should even potentially care about possible digital beings in the future, as long as they meet the criteria for moral patienthood — such as, for example, being able to feel pleasure and pain.

We’re highly uncertain about what kinds of beings will inhabit the future, but we think humanity and its descendants have the potential to play a huge role. And we want to have a wide scope of moral concern to encompass all those for whom life can go well or badly.14

When we think about the possible scale of the future ahead of us, we feel humbled. But we also believe these possibilities present a gigantic opportunity to have a positive impact for those of us who have appeared so early in this story.

The immense stakes involved strongly suggest that, if there’s something we can do to have a significant and predictably positive impact on the future, we have good reason to try.

[Image credit: Arnold Paul, CC BY-SA 2.5 (cropped)]

3. We have an opportunity to affect how the long-run future goes

When Foote discovered the mechanism of climate change, she couldn’t have foreseen how the future demand for fossil fuels would trigger a consequential global rise in temperatures.

So even if we have good reason to care about how the future unfolds, and we acknowledge that the future could contain immense numbers of individuals whose lives matter morally, we might still wonder: can anyone actually do anything to improve the prospects of the coming generations?


Many things we do affect the future in some way. If you have a child or contribute to compounding economic growth, the effects of these actions ripple out over time, and to some extent, change the course of history. But these effects are very hard to assess. The question is whether we can predictably have a positive impact over the long term.

We think we can. For example, we believe that it’d be better for the future if we avoid extinction, manage our resources carefully, foster institutions that promote cooperation rather than violent conflict, and responsibly develop powerful technology.

We’re never going to be totally sure our decisions are for the best — but often we have to make decisions under uncertainty, whether we’re thinking about the long-term future or not. And we think there are reasons to be optimistic about our ability to make a positive difference.

The following subsections discuss four primary approaches to improving the long-run future:

  • Reducing extinction risk
  • Positive trajectory changes
  • Longtermist research
  • Capacity building

Reducing extinction risk

One plausible tactic for improving the prospects of future generations is to increase the chance that they get to exist at all.

Of course, if a nuclear war or an asteroid strike ended civilisation, most people would agree that it was an unparalleled calamity.

Longtermism suggests, though, that the stakes involved could be even higher than they first seem. Sudden human extinction wouldn’t just end the lives of the billions currently alive — it would cut off the entire potential of our species. As the previous section discussed, this would represent an enormous loss.

And it seems plausible that at least some people can meaningfully reduce the risks of extinction. We can, for example, create safeguards to reduce the risk of accidental launches of nuclear weapons, which might trigger a cataclysmic escalatory cycle that brings on nuclear winter. And NASA has been testing technology to potentially deflect large near-Earth objects on dangerous trajectories.15 Our efforts to detect asteroids that could pose an extinction threat have arguably already proven extremely cost-effective.



So if it’s true that reducing the risk of extinction is possible, then people today can plausibly have a far-reaching impact on the long-run future. At 80,000 Hours, our current understanding is that the biggest risks of extinction we face come from advanced artificial intelligence, nuclear war, and engineered pandemics.16

And there are real things we can do to reduce these risks, such as:

  • Developing broad-spectrum vaccines that protect against a wide range of pandemic pathogens
  • Enacting policies that restrict dangerous practices in biomedical research
  • Inventing more effective personal protective equipment
  • Increasing our knowledge of the internal workings of AI systems, to better understand when and if they could pose a threat
  • Designing technical methods to ensure that AI systems behave how we want them to
  • Increasing oversight of private development of AI technology
  • Facilitating cooperation between powerful nations to reduce threats from nuclear war, AI, and pandemics.

We will never know with certainty how effective any given approach has been in reducing the risk of extinction, since you can’t run a randomised controlled trial with the end of the world. But the expected value of these interventions can still be quite high, even with significant uncertainty.17
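As a toy illustration of that expected-value reasoning, here is a minimal sketch. Every number below is hypothetical, chosen only to show the structure of the calculation, not to estimate any real intervention:

```python
# Hypothetical expected-value calculation for an extinction-risk intervention.
cost_usd = 10_000_000          # made-up cost of a risk-reduction project
risk_reduction = 1e-6          # made-up reduction in the probability of extinction this century
people_alive_today = 8_000_000_000

# Even counting only the present generation, the expected number of lives saved is substantial.
expected_lives_saved = risk_reduction * people_alive_today   # 8,000 lives in expectation
cost_per_expected_life = cost_usd / expected_lives_saved     # $1,250 per expected life saved
print(expected_lives_saved, cost_per_expected_life)

# Counting future generations as well (see the forty-quadrillion figure earlier) multiplies
# the expected stakes enormously, even though the probabilities involved stay small.
```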

One response to the importance of reducing extinction risk is to note that it's only positive if the future is more likely to be good than bad on balance. That brings us to the next way to help improve the prospects of future generations.

Positive trajectory changes

Preventing humanity’s extinction is perhaps the clearest way to have a long-term impact, but other possibilities may be available. If we’re able to take actions that influence whether our future is full of value or is comparatively bad, we would have the opportunity to make an extremely big difference from a longtermist perspective. We can call these trajectory changes.18

Climate change, for example, could potentially cause a devastating trajectory shift. Even if we believe it probably won’t lead to humanity’s extinction, extreme climate change could radically reshape civilisation for the worse, possibly curtailing our viable opportunities to thrive over the long term.

Other potential trajectories could be even worse. For example, humanity might get stuck with a value system that undermines general wellbeing and may lead to vast amounts of unnecessary suffering.

How could this happen? One way this kind of value ‘lock-in’ could occur is if a totalitarian regime establishes itself as a world government and uses advanced technology to sustain its rule indefinitely.19 If such a thing is possible, it could snuff out opposition and re-orient society away from what we have most reason to value.

We might also end up stagnating morally such that, for instance, the horrors of poverty or mass factory farming are never mitigated and are indeed replicated on even larger scales.

It's hard to say exactly what could be done now to reduce the risks of these terrible outcomes, and we're generally less confident in efforts to influence trajectory changes than in efforts to prevent extinction. But if such work is feasible, it would be extremely important.

Trying to strengthen liberal democracy and promote positive values, such as by advocating on behalf of farm animals, could be valuable to this end. But many questions remain open about what kinds of interventions would be most likely to have an enduring impact on these issues over the long run.

Grappling with these issues and ensuring we have the wisdom to handle them appropriately will take a lot of work, and starting this work now could be extremely valuable.

[Image credit: Cobija, CC BY-SA 3.0, via Wikimedia Commons]

Longtermist research

This brings us to the third approach to longtermist work: further research.

Asking these types of questions in a systematic way is a relatively recent phenomenon. So we’re confident that we’re pretty seriously wrong about at least some parts of our understanding of these issues. There are probably several suggestions in this article that are completely wrong — the trouble is figuring out which.

So we believe much more research into whether the arguments for longtermism are sound, as well as potential avenues for having an impact on future generations, is called for. This is one reason why we include ‘global priorities research’ among the most pressing problems for people to work on.

Capacity building

The fourth category of longtermist approaches is capacity building — that is, investing in resources that may be valuable to put toward longtermist interventions down the line.

In practice, this can take a range of forms. At 80,000 Hours, we’ve played a part in building the effective altruism community, which is generally aimed at finding and understanding the world’s most pressing problems and how to solve them. Longtermism is in part an offshoot of effective altruism, and having this kind of community may be an important resource for addressing the kinds of challenges longtermism raises.

There are also more straightforward ways to build resources, such as investing funds now so they can grow over time, potentially to be spent at a more pivotal time when they’re most needed.

You can also invest in capacity building by supporting institutions, such as government agencies or international bodies, that have the mission of stewarding efforts to improve the prospects of the long-term future.

Summing up the arguments

To sum up: there’s a lot on the line.

The number and size of future generations could be vast. We have reason to care about them all.


But the course of the future is uncertain. Humanity’s choices now can shape how events unfold. Our choices today could lead to a prosperous future for our descendants, or the end of intelligent life on Earth — or perhaps the rise of an enduring, oppressive regime.

We feel we can’t just turn away from these possibilities. Because so few of humanity’s resources have been devoted to making the future go well, those of us who have the means should figure out whether and how we can improve the chances of the best outcomes and decrease the chances of the worst.

We can’t — and don’t want to — set our descendants down a predetermined path that we choose for them now; we want to do what we can to ensure they have the chance to make a better world for themselves.

Those who come after us will have to live with the choices we make now. If they look back, we hope they’ll think we did right by them.

[Image: a protostar embedded within a cloud of material feeding its growth. Credit: NASA, ESA, CSA, STScI]

Objections to longtermism

In what follows, we’ll discuss a series of common objections that people make to the argument for longtermism.

Some of them point to important philosophical considerations that are complex but that nonetheless seem to have solid responses. Others raise important reasons to doubt longtermism that we take seriously and that we think are worth investigating further. And some others are misunderstandings or misrepresentations of longtermism that we think should be corrected. (Note: though long, this list doesn’t cover all objections!)

Making moral decisions always involves tradeoffs. We have limited resources, so spending on one issue means we have less to spend on another. And there are many deserving causes we could devote our efforts to. If we focus on helping future generations, we will necessarily not prioritise as highly many of the urgent needs in the present.

But we don’t think this is as troubling an objection to longtermism as it may initially sound, for at least three reasons:

1. Most importantly, many longtermist priorities, especially reducing extinction risk, are also incredibly important for people alive today. For example, we believe preventing an AI-related catastrophe and preventing a cataclysmic pandemic are two of the top priorities, in large part because of their implications for future generations. But these risks could materialise in the coming decades, so if our efforts succeed, most people alive today would benefit. Some argue that preventing global catastrophes could actually be the single most effective way to save the lives of people in the present.

2. If we all took moral impartiality more seriously, there would be a lot more resources going to help the worst-off today — not just the far future. Impartiality is the idea that we should care about the interests of individuals equally, regardless of their nationality, gender, race, or other characteristics that are morally irrelevant. This impartiality is part of what motivates longtermism — we think the interests of future individuals are often unjustifiably undervalued.

We think if impartiality were taken more seriously in general, we’d live in a much better world that would commit many more resources than it currently does toward alleviating all kinds of suffering, including for the present generation. For example, we’d love to see more resources go toward fighting diseases, improving mental health, reducing poverty, and protecting the interests of animals.

3. Advocating for any moral priority means time and resources are not going to another cause that may also be quite worthy of attention. Advocates for farmed animals’ or prisoners’ rights are in effect deprioritising the interests of alternative potential beneficiaries, such as the global poor. So this is not just an objection to longtermism — it’s an objection to any kind of prioritisation.

Ultimately, this objection hinges on the question of whether future generations are really worth caring about — which is what the rest of this article is about.

Some people, especially those trained in economics, claim that we shouldn’t treat individual lives in the future equally to lives today. Instead, they argue, we should systematically discount the value of future lives and generations by a fixed percentage.

(We’re not talking here about discounting the future due to uncertainty, which we cover below.)

When economists compare benefits in the future to benefits in the present, they typically reduce the value of the future benefits by a fixed percentage per year, known as the discount rate. A typical rate might be 1% per year, which means that benefits in 100 years are worth only about a third as much as benefits today, and benefits in 1,000 years are worth almost nothing.

This may seem like an appealing way to preserve the basic intuition we began with — that we have strong reasons to care about the wellbeing of future generations — while avoiding the more counterintuitive longtermist claims that arise from considering the potentially astronomical amounts of value that our universe might one day hold. On this view, we would care about future generations, but not as much as the present generation, and mostly only the generations that will come soon after us.

We agree there are good reasons to discount economic benefits. One reason is that if you receive money now, you can invest it, and earn a return each year. This means it’s better to receive money now rather than later. People in the future might also be wealthier, which means that money is less valuable to them.

However, these reasons don’t seem to apply to welfare — people having good lives. You can’t directly ‘invest’ welfare today and get more welfare later, like you can with money. The same seems true for other intrinsic values, such as justice. And longtermism is about reasons to care about the interests of future generations, rather than wealth.

As far as we know, most philosophers who have worked on the issue don’t think we should discount the intrinsic value of future lives — even while they strongly disagree about other questions in population ethics. It’s a simple principle that is easy to accept: one person’s happiness is worth just the same amount no matter when it occurs.

Indeed, if you suppose we can discount lives in the far future, we can easily end up with conclusions that sound absurd. For instance, a 3% discount rate would imply that the suffering of one person today is morally equal to the suffering of roughly seven trillion people in 1,000 years. This seems like a truly horrific conclusion to accept.

And any discount rate will mean that, if we found some reliable way to save 1 million lives from intense suffering in either 1,000 years or 10,000 years, it would be astronomically more important to choose the sooner option. This, too, seems very hard to accept.20
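For concreteness, here is a minimal sketch of the discounting arithmetic behind the figures above, assuming simple annual compounding (other conventions shift the numbers slightly):

```python
# How a fixed annual discount rate shrinks the weight assigned to future benefits.
def discount_factor(rate: float, years: float) -> float:
    """Present value of one unit of benefit received `years` from now, discounted at `rate` per year."""
    return (1 + rate) ** -years

print(discount_factor(0.01, 100))       # ~0.37: a benefit in 100 years counts for roughly a third
print(discount_factor(0.01, 1000))      # ~5e-05: a benefit in 1,000 years counts for almost nothing
print(1 / discount_factor(0.03, 1000))  # ~6.9e12: at 3%, one person now outweighs trillions in 1,000 years
```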

If we reject the discounting of the value of future lives, then the many potential generations that could come after us are still worthy of moral concern. And this doesn’t stand in tension with the economic practice of discounting monetary benefits.

If you'd like to see a more technical discussion of these issues, see Discounting for Climate Change by Hilary Greaves. There is a more accessible discussion at 1h00m50s in our podcast with Toby Ord and in Chapter 4 of Stubborn Attachments by Tyler Cowen.

There are some practical, rather than intrinsic, reasons to discount the value of the future. In particular, our uncertainty about how the future will unfold makes it much harder to influence than the present, and even the effects of relatively near-term actions can be exceedingly difficult to forecast.

And because of the possibility of extinction, we can’t even be confident that the future lives we think are so potentially valuable will come into existence. As we’ve argued, that gives us reason to reduce extinction risks when it’s feasible — but it also gives us reason to be less confident these lives will exist and thus to weight them somewhat less in our deliberations.

In the same way, a doctor performing triage may choose to prioritise caring for a patient who has a good chance of surviving their injuries over one whose chance of survival is much less clear regardless of the medical care they receive.

This uncertainty — along with the extreme level of difficulty in trying to predict the long-term impacts of our actions — certainly makes it much harder to help future generations, all else equal. And in effect, this point lowers the value of working to benefit future generations.

So even if we can affect how things unfold for future generations, we’re generally going to be very far from certain that we are actually making things better. And arguably, the further away in time the outcomes of our actions are, the less sure we can be that they will come about. Trying to improve the future will never be straightforward.

Still, even given the difficulty and uncertainty, we think the potential value at stake for the future means that many uncertain projects are still well worth the effort.

You might disagree with this conclusion if you believe that human extinction is so likely and practically unavoidable in the future that the chance that our descendants will still be around rapidly declines as we look a few centuries down the line. We don’t think it’s that likely — though we are worried about it.

Journalist Kelsey Piper critiqued MacAskill’s argument for longtermist interventions focused on positive trajectory changes (as opposed to extinction risks) in Asterisk, writing:

What share of people who tried to affect the long-term future succeeded, and what share failed? How many others successfully founded institutions that outlived them — but which developed values that had little to do with their own?

Most well-intentioned, well-conceived plans falter on contact with reality. Every simple problem splinters, on closer examination, into dozens of sub-problems with their own complexities. It has taken exhaustive trial and error and volumes of empirical research to establish even the most basic things about what works and what doesn’t to improve peoples’ lives.

Piper does still endorse working on extinction reduction, which she thinks is a more tractable course of action. Her doubts about the possibility of reliably anticipating our impact on the trajectory of the future, outside of extinction scenarios, are worth taking very seriously.

You might have a worry about longtermism that goes deeper than just uncertainty. We act under conditions of uncertainty all the time, and we find ways to manage it.

There is a deeper problem known as cluelessness. While uncertainty is about having incomplete knowledge, cluelessness refers to the state of having essentially no basis of knowledge at all.

Some people believe we’re essentially clueless about the long-term effects of our actions. This is because virtually every action we take may have extremely far-reaching unpredictable consequences. In time travel stories, this is sometimes referred to as the “butterfly effect” — because something as small as a butterfly flapping its wings might influence air currents just enough to cause a monsoon on the other side of the world (at least for illustrative purposes).

If you think your decision of whether to go to the grocery store on Thursday or Friday might determine whether the next Gandhi or Stalin is born, you might conclude that actively trying to make the future go well is a hopeless task.

Like some other important issues discussed here, cluelessness remains an active area of philosophical debate, so we don't think there's necessarily a decisive answer to these worries. But there is a plausible argument, advanced by philosopher and advisor to 80,000 Hours Hilary Greaves, that longtermism is, in fact, the best response to the issue of cluelessness.

This is because cluelessness hangs over the impact of all of our actions. Work trying to improve the lives of current generations, such as direct cash transfers, may predictably benefit a family in the foreseeable future. But the long-term consequences of the transfer are a complete mystery.

Successful longtermist interventions, though, may not have this quality — particularly interventions to prevent human extinction. If we, say, divert an asteroid that would otherwise have caused the extinction of humanity, we are not clueless about the long-term consequences. Humanity will at least have the chance to continue existing into the far future, which it wouldn’t have otherwise had.

There’s still uncertainty, of course, in preventing extinction. The long-term consequences of such an action aren’t fully knowable. But we’re not clueless about them either.

If it’s correct that the problem of cluelessness bites harder for some near-term interventions than longtermist ones, and perhaps least of all for preventing extinction, then this apparent objection doesn’t actually count against longtermism.

For an alternative perspective, though, check out The 80,000 Hours Podcast interview with Alexander Berger.

Because of the nature of human reproduction, the identity of who gets to be born is highly contingent. Any individual is the result of the combination of one sperm and one egg, and a different combination of sperm and egg would’ve created a different person. Delaying the act of conception at all — for example, by getting stuck at a red light on your way home — can easily result in a different sperm fertilising the egg, which means another person with a different combination of genes will be born.

This means — somewhat surprisingly — that pretty much all our actions have the potential to impact the future by changing which individuals get born in the future.

If you care about affecting the future in a positive way, this creates a perplexing problem. Many actions undertaken to improve the future, such as trying to reduce the harmful effects of climate change or developing a new technology to improve people’s lives, may deliver the vast majority of their benefits to people who wouldn’t have existed had the course of action never been taken.

So while it seems obviously good to improve the world in this way, it may be impossible to ever point to specific people in the future and say they were made better off by these actions. You can make the future better overall, but you may not make it better for anyone in particular.

Of course, the reverse is also true: you may take some action that makes the future much worse, but all the people who experience the consequences of your action would never have existed had you chosen a different course.

This is known as the ‘non-identity problem.’ Even when you can make the far future better with a particular course of action, you will almost certainly never make any particular individuals in the far future better off than they otherwise would be.

Should this problem cause us to abandon longtermism? We don’t think so.

While the issue is perplexing, accepting it as a refutation of longtermism would prove too much. It would, for example, undermine much of the very plausible case that policymakers should in the past have taken significant steps to limit the effects of climate change (since those policy changes can be expected to, in the long run, lead to different people being born).

Or consider a hypothetical case of a society that is deciding what to do with its nuclear waste. Suppose there are two ways of storing it: one way is cheap, but it means that in 200 years' time, the waste will overheat and expose 10,000,000 people to sickening radiation that dramatically shortens their lives. The other storage method guarantees it will never hurt anyone, but it is significantly more expensive, and it means currently living people will have to pay marginally higher taxes.

Assuming this tax policy alters behaviour just enough to start changing the identities of the children being born, it's entirely plausible that, in 200 years' time, no one would exist who would've existed if the cheap, dangerous policy had been implemented. This means that none of the 10,000,000 people who have their lives cut short can say they would have been better off had their ancestors chosen the safer storage method.21

Still, it seems intuitively and philosophically unacceptable to believe that a society wouldn’t have very strong reasons to adopt the safe policy over the cheap, dangerous policy. If you agree with this conclusion, then you agree that the non-identity problem does not mean we should abandon longtermism. (You may still object to longtermism on other grounds!)

Nevertheless, this puzzle raises pressing philosophical questions that continue to generate debate, and we think better understanding these issues is an important project.

We said that we thought it would be very bad if humanity went extinct, in part because future individuals who might otherwise have been able to live full and flourishing lives would never get the chance.

But this raises some issues related to the ‘non-identity problem.’ Should we actually care whether future generations come into existence, rather than not?

Some people argue that perhaps we don't actually have moral reasons to do things that affect whether individuals exist — in which case ensuring that future generations get to exist, or increasing the chance that humanity's future is long and expansive, would be morally neutral in itself.

This issue is very tricky from a philosophical perspective; indeed, a minor subfield of moral philosophy called population ethics sets out to answer this and related questions.

So we can’t expect to fully address the question here. But we can give a sense of why we think working to ensure humanity survives and that the future is filled with flourishing lives is a high moral priority.

Consider first a scenario in which you, while travelling the galaxy in a spaceship, come across a planet filled with an intelligent species leading happy, moral, fulfilled lives. They haven’t achieved spaceflight, and may never do so, but they appear likely to have a long future ahead of them on their planet.

Would it not seem like a major tragedy if, say, an asteroid were on course to destroy their civilisation? Of course, any plausible moral view would advise saving the species for their own sakes. But it also seems like an unalloyed good that, if you divert the asteroid, this species will be able to continue flourishing in their corner of the universe for many future generations.

If we have that view about that hypothetical alien world, we should probably have the same view of our own planet. Humans, of course, aren't always that happy, moral, and fulfilled. But the vast majority of us want to keep living, and it seems at least possible that our descendants could have lives many times more flourishing than ours. They might even ensure that all other sentient beings have joyous lives well worth living. This seems to give us strong reasons to make this potential a reality.

For a different kind of argument along these lines, you can read Joe Carlsmith’s “Against neutrality about creating happy lives.”

Some people advocate a ‘person-affecting’ view of ethics. This view is sometimes summed up with the quip: “ethics is about helping make people happy, not making happy people.”

In practice, this means we only have moral obligations to help those who are already alive22 — not to enable more people to exist with good lives. For people who hold such views, it may be permissible to create a happy person, but doing so is morally neutral.

This view has some plausibility, and we don’t think it can be totally ignored. However, philosophers have uncovered a number of problems with it.

Suppose you have the choice to bring into existence one person with an amazing life, or another person whose life is barely worth living, but still more good than bad. Clearly, it seems better to bring about the amazing life.

But if creating a happy life is neither good nor bad, then we have to conclude that both options are neither good nor bad. This implies the options are morally equivalent, and you have no reason to choose one over the other, which seems bizarre.

And if we accepted a person-affecting view, it might be hard to make sense of many of our common moral beliefs around issues like climate change. For example, it would imply that policymakers in the 20th century might have had little reason to mitigate the impact of CO2 emissions on the atmosphere if the negative effects would only affect people who would be born several decades in the future. (This issue is discussed more above.)

This is a complex debate, and rejecting the person-affecting view also has counterintuitive implications. In particular, Parfit showed that if you agree it's good to create people whose lives are more good than bad, there is a strong argument that a world filled with a huge number of people whose lives are just barely worth living could be better than a much smaller world of people with wonderful lives. He called this the "repugnant conclusion".

Both sides make important points in this debate. You can see a summary of the arguments in this public lecture by Hilary Greaves (based on this paper). It’s also discussed in our podcast with Toby Ord.

We’re uncertain about what the right position is, but we’re inclined to reject person-affecting views. Since many people hold something like the person-affecting view, though, we think it deserves some weight, and that means we should act as if we have somewhat greater obligations to help someone who’s already alive compared to someone who doesn’t exist yet. (This is an application of moral uncertainty).

One note, however: even people who otherwise embrace a person-affecting view often think it is morally bad to do something that brings someone into existence who has a life full of suffering and who wishes they'd never been born. If that's right, you should still think that we have strong moral reasons to care about the far future, because there's the possibility it could be horrendously bad, as well as very good, for a large number of individuals. On any plausible view, there's a forceful case to be made for working to avert astronomical amounts of suffering. So even someone who believes strongly in a person-affecting view of ethics might have reason to embrace a form of longtermism that prioritises averting large-scale suffering in the future.

Trying to weigh this up, we think society should have far greater concern for the future than it does now, and that, as with climate change, it often makes sense to prioritise making things go well for future individuals. In the case of climate change, for example, society should likely have long ago taken on the non-trivial costs of developing highly reliable clean energy and navigating away from a carbon-intensive economy.

Because of moral uncertainty, though, we care more about the present generation than we would if we naively weighed up the numbers.

Isn't it arrogant to think we know how the future will unfold?

Yes, it would be arrogant. But longtermism doesn't require us to know the future.

Instead, the practical implication of longtermism is that we take steps that are likely to be good over a wide range of possible futures. We think it's likely better for the future if, as we said above, we avoid extinction, manage our resources carefully, foster institutions that promote cooperation rather than violent conflict, and responsibly develop powerful technology. None of these strategies requires knowing what the future will look like.

We talk more about the importance of all this uncertainty in the sections above.

This isn’t exactly an objection, but one response to longtermism asserts not that the view is badly off track but that it’s superfluous.

This may seem plausible if longtermism primarily inspires us to prioritise reducing extinction risks. As discussed above, doing so could benefit existing people — so why even bother talking about the benefits to future generations?

One reply is: we agree that you don’t need to embrace longtermism to support these causes! And we’re happy if people do good work whether or not they agree with us on the philosophy.

But we still think the argument for longtermism is true, and we think it’s worth talking about.

Firstly, when we actually try to compare the importance of work in certain cause areas — such as global health or mitigating the risk of extinction from nuclear war — whether and how much we weigh the interests of future generations may play a decisive role in our conclusions about prioritisation.

Moreover, some longtermist priorities, such as ensuring that we avoid the lock-in of bad values or developing a promising framework for space governance, may be entirely ignored if we don’t consider the interests of future generations.

Finally, if it’s right that future generations deserve much more moral concern than they currently get, it just seems good for people to know that. Maybe issues will come up in the future that aren’t extinction threats but which could still predictably affect the long-run future – we’d want people to take those issues seriously.

Is longtermism just total utilitarianism?

In short, no. Total utilitarianism is the view that we are obligated to maximise the total amount of positive experiences over negative experiences, typically weighting them by intensity and duration.

This is one specific moral view, and many of its proponents and sympathisers advocate for longtermism. But you can easily reject utilitarianism of any kind and still embrace longtermism.

For example, you might believe in ‘side constraints’ — moral rules about what kinds of actions are impermissible, regardless of the consequences. So you might believe that you have strong reasons to promote the wellbeing of individuals in the far future, so long as doing so doesn’t require violating anyone’s moral rights. This would be one kind of non-utilitarian longtermist view.

You might also be a pluralist about value, in contrast to utilitarians who think a singular notion of wellbeing is the sole true value. A non-utilitarian might intrinsically value, for instance, art, beauty, achievement, good character, knowledge, and personal relationships, quite separately from their impact on wellbeing.

(See our definition of social impact for how we incorporate these moral values into our worldview.)

So you might be a longtermist precisely because you believe the future is likely to contain vast amounts of all the many things you value, so it’s really important that we protect this potential.

You could also think we have an obligation to improve the world for future generations because we owe it to humanity to “pass the torch”, rather than squander everything people have done to build up civilisation. This would be another way of understanding moral longtermism that doesn’t rely on total utilitarianism.23

Finally, you can reject the “total” part of utilitarianism and still believe longtermism. That is, you might believe it’s important to make sure the future goes well in a generally utilitarian sense without thinking that means we’ll need to keep increasing the population size in order to maximise total wellbeing. You can read more about different kinds of views in population ethics here.

As we discussed above, people who don’t think it’s morally good to bring a flourishing population into existence usually think it’s still important to prevent future suffering — in which case you might support a longtermism focused on guarding against the worst outcomes for future generations.

Does longtermism mean the ends justify the means?

No.

We believe, for instance, that you shouldn’t have a harmful career just because you think you can do more good than bad with the money you’ll earn. There are practical, epistemic, and moral reasons that justify this stance.

And as a general matter, we think it’s highly unlikely to be the case that working in a harmful career will be the path that has the best consequences overall.

Some critics of longtermism say the view can be used to justify all kinds of egregious acts in the name of a glorious future. We do not believe this, in part because there are plenty of plausible intrinsic reasons to object to egregious acts on their own, even if you think they’ll have good consequences. As we explained in our article on the definition of ‘social impact’:

We don’t think social impact is all that matters. Rather, we think people should aim to have a greater social impact within the constraints of not sacrificing other important values – in particular, while building good character, respecting rights and attending to other important personal values. We don’t endorse doing something that seems very wrong from a commonsense perspective in order to have a greater social impact.

Perhaps even more importantly, it’s bizarrely pessimistic to believe that the best way to make the future go well is to do horrible things now. This is very likely false, and there’s little reason anyone should be tempted by this view.

Some of the claims in this article may sound like science fiction. We’re aware this can be off-putting to some readers, but we think it’s important to be upfront about our thinking.

And the fact that a claim sounds like science fiction is not, on its own, a good reason to dismiss it. Many speculative claims about the future have sounded like science fiction until technological developments made them a reality.

From Eunice Newton Foote’s perspective in the 19th century, the idea that the global climate would actually be transformed based on a principle she discovered in a glass cylinder may have sounded like science fiction. But climate change is now our reality.

Similarly, the idea of the “atomic bomb” had literally been science fiction before Leo Szilard conceived of the possibility of the nuclear chain reaction in 1933. Szilard first read about such weapons in H.G. Wells' The World Set Free. As W. Warren Wagar explained in The Virginia Quarterly Review:

Unlike most scientists then doing research into radioactivity, Szilard perceived at once that a nuclear chain reaction could produce weapons as well as engines. After further research, he took his ideas for a chain reaction to the British War Office and later the Admiralty, assigning his patent to the Admiralty to keep the news from reaching the notice of the scientific community at large. “Knowing what this [a chain reaction] would mean,” he wrote, “—and I knew it because I had read H.G. Wells—I did not want this patent to become public.”

This doesn’t mean we should accept any idea without criticism. And indeed, you can reject many of the more ‘sci-fi’ claims of some people who are concerned with future generations — such as the possibility of space settlement or the risks from artificial intelligence — and still find longtermism compelling.

One worry about longtermism some people have is that it seems to rely on having a very small chance of achieving a very good outcome.

Some people think this sounds suspiciously like Pascal’s wager, a highly contentious argument for believing in God — or a variant of this idea, “Pascal’s mugging.” The concern is that this type of argument may be used to imply an apparent obligation to do absurd or objectionable things. It’s based on a thought experiment, as we described in a different article:

A random mugger stops you on the street and says, "Give me your wallet or I'll cast a spell of torture on you and everyone who has ever lived." You can't be 100% certain he won't follow through — after all, nothing's 100% for sure. And torturing everyone who's ever lived is so bad that surely even avoiding a tiny, tiny probability of that is worth the $40 in your wallet? But intuitively, it seems like you shouldn't give your wallet to someone just because they threaten you with something completely implausible.

This deceptively simple problem raises tricky issues in expected value theory, and it’s not clear how they should be resolved — but it’s typically assumed that we should reject arguments that rely on this type of reasoning.

The argument for longtermism given above may look like a form of this argument because it relies in part on the premise that the number of individuals in the future could be so large. Since it’s a relatively novel, unconventional argument, it may sound suspiciously like the mugger’s (presumably hollow) threat in the thought experiment.

But there are some key differences. To start, the risks to the long-term future may be far from negligible. Toby Ord estimated the chance of an existential catastrophe that effectively curtails the potential of future generations in the next century at 1 in 6.

Now, it may be true that any individual's chance of meaningfully reducing these kinds of threats is much, much smaller. But we accept small chances of doing good all the time — that's why you might wear a seatbelt in a car, even though on any given drive your chances of being in a serious accident are minuscule. Many people buy life insurance to make sure their family members will have financial support in the unlikely event that they die young.
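
To make the analogy concrete, here is a minimal sketch in Python of how this kind of comparison works. The probabilities and the wellbeing scale are invented purely for illustration, not estimates of any real risk:

```python
# Toy expected value comparison with made-up numbers (illustration only).
# Each option is a list of (probability, value) outcomes on an arbitrary scale.

def expected_value(outcomes):
    """Return the sum of probability * value over all possible outcomes."""
    return sum(p * v for p, v in outcomes)

# Hypothetical option A: skip the seatbelt (tiny chance of a very bad outcome).
no_seatbelt = [(0.0001, -1_000_000), (0.9999, 0)]

# Hypothetical option B: wear the seatbelt (a small certain cost, with the very
# bad outcome made five times less likely).
seatbelt = [(0.00002, -1_000_000), (0.99998, -1)]

print(expected_value(no_seatbelt))  # -100.0
print(expected_value(seatbelt))     # roughly -21.0
```

On these made-up numbers, the small certain cost of buckling up easily wins, even though a serious accident is very unlikely on any given drive. The point is only the structure of the comparison; the probabilities that matter for existential risk are, of course, far more uncertain.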

And while an individual is unlikely to be solely responsible for driving down the risk of human extinction by any significant amount (in the same way no one individual could stop climate change), it does seem plausible that a large group of people working diligently and carefully could do so. And if a large group can achieve this laudable end, then taking part in that collective action isn't comparable to Pascal's mugging.

But if we did conclude that the chance of reducing the risks humanity faces is truly negligible, then we would want to look much more seriously into other priorities, especially since there are so many other pressing problems. As long as it's true, though, that there are genuine opportunities to have a significant impact on improving the prospects for the future, longtermism does not rely on suspect or extreme expected value reasoning.

This is a lot to think about. So what are our bottom lines on how we think we’re most likely to be wrong about longtermism?

Here are a few possibilities we think are worth taking seriously, even though they don’t totally undermine the case from our perspective:

  • Morality may require a strong preference for the present: There might be strong moral reasons to give preference to presently existing individuals over future generations. This might be because something like a person-affecting view is true (described above), or perhaps because we should systematically discount the value of future beings.
    • We don’t think the arguments for such a strong preference are very compelling, but given the high levels of uncertainty in our moral beliefs, we can’t confidently rule it out.
  • Reliably affecting the future may be infeasible. It's possible that further research will ultimately conclude that the opportunities for impacting the far future are essentially non-existent or extremely limited. It's hard to believe we could ever entirely close the question — researchers who come to this conclusion in the future could themselves be mistaken — but such a conclusion might dramatically reduce our confidence that pursuing a longtermist agenda is worthwhile, and thus leave the project as a fairly marginal endeavour.

  • Reducing extinction risk may be intractable beyond a certain point. It’s possible that there’s a base level of extinction risk that humans will have to accept at some point and that we can’t reduce any further. And if, for instance, there were an irreducible risk of an extinction catastrophe at 10 percent every century, then the future, in expectation, would be much less significant than we think. This would dramatically reduce the pull of longtermism.

  • A crucial consideration could change our assessment in ways we can’t predict. This falls into the general category of ‘unknown unknowns,’ which are always important to be on the watch for.

You could also read the following essays criticising longtermism that we have found interesting:

If I don’t agree with 80,000 Hours about longtermism, can I still benefit from your advice?

Yes!

We want to be candid about what we believe and what our priorities are, but we don’t think everyone needs to agree with us.

And we have lots of advice and tools that are broadly useful for people thinking about their careers, regardless of what they think about longtermism.

There are also many places where longtermist projects converge with other approaches to thinking about having a positive impact with your career. For example, working to prevent pandemics seems robustly good whether you prioritise near- or long-term benefits.

Though we focus as an organisation on issues that may affect all future generations, we would generally be really happy to also see more people working for the benefit of the global poor and farmed animals, two tractable causes that we think are unduly neglected in the near term. We also discuss these issues on our podcast and list jobs for them on our job board.

What are the best ways to help future generations right now?

While answering this question satisfactorily would require a sweeping research agenda in itself, we do have some general thoughts about what longtermism means for our practical decision making. And we’d be excited to see more attention paid to this question.

Some people may be motivated by these arguments to find opportunities to donate to longtermist projects or cause areas. We believe Open Philanthropy — which is a major funder of 80,000 Hours — does important work in this area.

But our primary aim is to help people have impactful careers. Informed by longtermism, we have created a list of what we believe are the most pressing problems to work on in the world. These problems are important, neglected, and tractable.

As of this writing, the top eight problem areas are:

  1. Risks from artificial intelligence
  2. Catastrophic pandemics
  3. Building effective altruism
  4. Global priorities research
  5. Nuclear war
  6. Improving decision making (especially in important institutions)
  7. Climate change
  8. Great power conflict

We've already given a few examples of concrete ways to tackle these issues above.

The above list is provisional, and it is likely to change as we learn more. We also list many other pressing problems that we believe are highly important from a longtermist point of view, as well as a few that would be high priorities if we rejected longtermism.

We have a related list of high-impact careers that we believe are appealing options for people who want to work to address these and related problems and to help the long-term future go well.

But we don’t have all the answers. Research in this area could reveal crucial considerations that might overturn longtermism or cast it in a very different light. There are likely pressing cause areas we haven’t thought of yet.

We hope more people will challenge our ideas and help us think more clearly about them. As we have argued, the stakes are incredibly high. So it’s paramount that, as much as is feasible, we get this right.

Want to focus your career on the long-run future?

If you want to work on helping the future go well, for example by controlling nuclear weapons or shaping the development of artificial intelligence or biotechnology, you can speak to our team one-on-one.

We've helped hundreds of people choose an area to focus on, make connections, and then find jobs and funding in these areas. If you're already working in one of these areas, we can help you increase your impact within it.

Speak to us

Learn more

Read next

This article is part of our advanced series. See the full series, or keep reading:

The post Longtermism: a call to protect future generations appeared first on 80,000 Hours.

Why you should think about virtues — even if you’re a consequentialist https://80000hours.org/2023/03/why-you-should-think-about-virtues-even-if-youre-a-consequentialist/ Fri, 17 Mar 2023 23:50:20 +0000 https://80000hours.org/?p=81102 The post Why you should think about virtues — even if you’re a consequentialist appeared first on 80,000 Hours.

The idea this week: virtues are helpful shortcuts for making moral decisions — but think about consequences to decide what counts as a virtue.

Your career is really ethically important, but it’s not a single, discrete choice. To build a high-impact career you need to make thousands of smaller choices over many years — to take on this particular project, to apply for that internship, to give this person a positive reference, and so on.

How do you make all those little decisions?

This blog post was first released to our newsletter subscribers.

Join over 350,000 newsletter subscribers who get content like this in their inboxes every two weeks — and we’ll also mail you a free book!

If you want to have an impact, you hope to make the decisions that help you have a bigger impact rather than a smaller one. But you can’t go around explicitly estimating the consequences of all the different possible actions you could take — not only would that take too long, you’d probably get it wrong most of the time.

This is where the idea of virtues — lived moral traits like courage, honesty, and kindness — can really come in handy. Instead of calculating out the consequences of all your different possible actions, try asking yourself, “What’s the honest thing to do? What’s the kind thing to do?”

A few places I find ‘virtues thinking’ motivating and useful:

  • When I am facing a difficult work situation, I sometimes ask myself, “What virtue is this an opportunity to practise?” For example, maybe now is a great opportunity to practise being honest — even when it’s difficult or embarrassing.
  • Sometimes I get socially anxious. When that happens I often find it helpful to ask what virtues I can bring to a conversation — how would a kind, gracious, and curious person act?

(Some people think that acting in line with virtues is just what it is to live a moral life, but even if you don’t think that, they are still useful mental shortcuts.)

Honesty and kindness are very commonly thought of as virtues. But it’s also worth asking whether there are other, non-standard virtues you might want to cultivate to help you accomplish what you think is most important.

In my case, I think 'scope sensitivity' — caring much more about something that affects many more individuals or affects them to a much greater degree — is a virtue. Everyone has to prioritise in their work, and I think scope sensitivity helps people prioritise in ways that make the world better.

For example, becoming a doctor lets you help a lot of people. But if you’re also well-suited to a career that could help prevent the next catastrophic pandemic, you might be able to help even more people than you would in other areas of medicine. If you’re scope sensitive, you may notice this opportunity, and working to prevent or mitigate pandemics might become very appealing. I think that’s an admirable quality.

The main questions virtues bring up are:

  1. How do you decide which character traits are virtues?
  2. What do you do when two virtues conflict? For example, should you tell someone the brutal truth (honesty) or spare their feelings (kindness) by obscuring it?

I think to answer the first question it makes sense to look at the long-term consequences of acting in a certain way. If you act honestly as a general rule throughout your life, is that likely to make things go better or worse? I think there’s a good case that it will go much better; people will trust you, because you’ll be trustworthy — and this can be a big help in many areas of life.

You can also look at role models — what kinds of lived moral traits do the people who seem to have had a big positive impact have? One answer is courage: for example, Benjamin Lay stood up over and over for the idea that slavery was wrong, even when he faced severe social sanctions for it. To do a lot of good you might have to face down tough situations with courage.

And you can put plausible-sounding virtues to the test by asking what would happen if you applied them consistently over time. For example, you might think that agreeableness is a virtue because it makes people feel comfortable and supported. But if you’re practically always agreeable, people might not trust you to tell them what you really think, and you might go along with others’ plans even when they are bad. So it seems like a mixed bag.

Of course you can also go overboard with honesty; most people probably agree it’s a bad idea to tell the truth if a murderer asks for your friend’s location. But what makes honesty a virtue in my book is that it’s much better to go through life erring on the side of being too honest than not honest enough — so it’s a good general guide for behaviour. I think that’s less true of agreeableness.

Deciding what to do when virtues are in conflict is tough. But here are a few strategies:

  • Is there something about the situation that makes one of the virtues more important? If the context makes trust especially important, you might have extra reason to lean into honesty in this case even at the expense of other virtues.

  • Is there a commonly accepted, hard-and-fast ethical rule that says what you should do? Even if you don't philosophically believe in absolute rules, they usually point to very important considerations.

  • Look at the consequences: in cases where heuristics fail us, it can be worth it to think explicitly about the consequences of each action — especially if the consequences are dramatically better or worse depending on what you choose.

The web team at 80,000 Hours has a few ‘virtues for 2023’ to help us navigate unexpected situations:

  • Openness: be curious, transparent, and open to changing our minds.
  • Patience: go for what's best in the long run, and don't get unduly distracted by what's salient now.
  • Boldness: don’t be afraid to prioritise what we think is most important and push for it.

Learn more:

The post Why you should think about virtues — even if you’re a consequentialist appeared first on 80,000 Hours.

What is social impact? A definition https://80000hours.org/articles/what-is-social-impact-definition/ https://80000hours.org/articles/what-is-social-impact-definition/#comments Fri, 01 Oct 2021 21:29:44 +0000 http://80000-hours-wp.local/?page_id=20511 The post What is social impact? A definition appeared first on 80,000 Hours.

Lots of people say they want to “make a difference,” “do good,” “have a social impact,” or “make the world a better place” — but they rarely say what they mean by those terms.

By getting clearer about your definition, you can better target your efforts. So how should you define social impact?

Over two thousand years of philosophy have gone into that question. We’re going to try to sum up that thinking; introduce a practical, rough-and-ready definition of social impact; and explain why we think it’s a good definition to focus on.

This is a bit ambitious for one article, so to the philosophers in the audience, please forgive the enormous simplifications.

A simple definition of social impact

If you just want a quick answer, here’s the simple version of our definition (a more philosophically precise one — and an argument for it — follows below):

Your social impact is given by the number of people whose lives you improve and how much you improve them, over the long term.

This shows that you can increase your impact in two ways: by helping more people over time, or by helping the same number of people to a greater extent (pictured below).

[Figure: two ways to have impact]
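
To see how this simple definition can be used to roughly compare options, here is a minimal sketch in Python. The options and numbers are hypothetical, chosen only to illustrate the arithmetic:

```python
# Toy comparison using the simple definition:
# impact ~= number of people helped * how much each person is helped,
# measured on some common (arbitrary) scale of improvement.

def impact(people_helped, average_improvement):
    return people_helped * average_improvement

# Hypothetical option 1: helps fewer people, but each by a lot.
option_1 = impact(people_helped=100, average_improvement=10)

# Hypothetical option 2: helps many people, each by a little.
option_2 = impact(people_helped=10_000, average_improvement=0.5)

print(option_1)  # 1000
print(option_2)  # 5000.0
```

On these invented numbers, the second option does more good overall, even though it helps each individual person less. Both levers, reaching more people and helping each person more, raise the total.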

We say "over the long term" because you can help more people either by helping a greater number now, or by taking actions with better long-term effects.

This definition is enough to help you figure out what to aim at in many situations — e.g. by roughly comparing the number of people affected by different issues. But sometimes you need a more precise definition.

A more rigorous definition of social impact

Here’s our working definition of “social impact”:

“Social impact” or “making a difference” is (tentatively) about promoting total expected wellbeing — considered impartially, over the long term.

We don’t think social impact is all that matters. Rather, we think people should aim to have a greater social impact within the constraints of not sacrificing other important values – in particular, while building good character, respecting rights and attending to other important personal values. We don’t endorse doing something that seems very wrong from a commonsense perspective in order to have a greater social impact.

In fact, we even think that paying attention to these other values is probably the best way to have the most social impact anyway, even if impact is all you want to aim for.

In the rest of this article, we’ll expand on:

  1. Why we think social impact is primarily about promoting what’s of value — i.e. making the world better
  2. Why we think making the world a better place is in large part about promoting total expected wellbeing
  3. How social impact fits with other values
  4. How we can assess what makes a difference in the face of uncertainty

We believe that taking this definition seriously has some potentially radical implications about where people who want to do good should focus, which we also explore.

Two final notes before we go into more detail. First, our definition is tentative — there’s a good chance we’re wrong and it might change. Second, its purpose is practical — it aims to cover the most important aspects of doing good to help people make better real-life decisions, rather than capture everything that’s morally relevant.

In a nutshell

The definition:

“Social impact” or “making a difference” is (tentatively) about promoting total expected wellbeing — considered impartially, over the long term.

Why “promoting”? When people say they want to “make a difference,” we think they’re primarily talking about making the world better — i.e. ‘promoting’ good things and preventing bad ones — rather than merely not doing unethical actions (e.g. stealing) or being virtuous in some other way.

Why “wellbeing”? We understand wellbeing as an inclusive notion, meaning anything that makes people better off. We take this to encompass at least promoting happiness, health, and the ability for people to live the life they want. We chose this as the focus because most people agree these things matter, but there are often large differences in how much different actions improve these outcomes.

Why do we say “expected” wellbeing? We can never know with certainty the effects that our actions will have on wellbeing. The best we can do is try to weigh the benefits of different actions by their probability — i.e. compare based on ‘expected value.’ Note that while the action with the highest expected value is best in principle, that doesn’t imply that the best way to find the best action is to make explicit quantitative estimates. It’s often better in practice to use rules of thumb, our intuition, or other methods, since these maximise expected value better than explicit expected value calculations. (Read more on expected value.)

Why “considered impartially”? We mean that we strive to treat equal effects on different beings’ welfare as equally morally important, no matter who they are — including people who live far away or in the future. In addition, we think that the interests of many nonhuman animals, and even potentially sentient future digital beings, should be given significant weight, although we’re unsure of the exact amount. Thus, we don’t think social impact is limited to promoting the welfare of any particular group we happen to be partial to (such as people who are alive today, or human beings as a species).

Why do we say “over the long term”? We think that if you take an impartial perspective, then the welfare of those who live in the future matters. Because there could be many more future generations than those alive today, our effects on them could be of great moral importance. We thus try to always consider not just the direct and short-term effects of actions, but also any indirect effects that might occur in the far future.

Social impact is about making the world better

What does it mean to act ethically? Moral philosophers have debated this question for millennia, and have arrived at three main kinds of answers:

  1. Being virtuous — e.g. being honest, kind, and just
  2. Acting rightly — e.g. respecting the rights of others and not doing wrong
  3. Making the world better — e.g. helping others

These correspond to virtue ethics, deontology, and consequentialism, respectively.

Whatever you think about which perspective is most fundamental, we think all three have something to offer in practice (we’ll expand on this in the rest of the article). So we could say that a simple, general recipe for a moral life would be to:

  1. Cultivate good character
  2. Respect constraints, such as the rights of others
  3. And then do as much good as you can

In short, character, constraints and consequences.

When our readers talk about wanting to "make a difference," we think most are interested in the third of these perspectives — changing the world for the better.

We agree that focusing more on doing good makes sense for most people. First, most of us don't just want to avoid doing wrong or live honest lives, but to actually leave the world better than we found it. And more importantly, there's so much we can do to help.

For instance, we’ve shown that by donating 10% of their income to highly effective charities, most college graduates can save the lives of over 40 people over their lifetimes with a relatively minor sacrifice.

From an ethical perspective, whether you save 40 lives or not will probably be one of the most significant questions you’ll ever face.

In our essay on your most important decision, we argued that some career paths open to you will do hundreds of times more to make the world a better place than others. So it seems really important to figure out what those paths are.

Even philosophers who emphasise moral rules and virtue agree that if you can make others better off, that’s a good thing to do. And they agree it’s even better to make more people better off than fewer. (More broadly, we think deontologists and utilitarians agree a lot more than people think.)

John Rawls was one of the most influential (non-consequentialist) philosophers of the 20th century, and he said:

All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy.

Since there seem to be big opportunities to make people better off, and some seem to be better than others, we should focus on finding those.

This might sound obvious, but most discussion of ethical living is very focused on reducing harm rather than doing more good.

For instance, when it comes to fighting climate change, there’s a lot of focus on our personal carbon emissions, rather than figuring out what we can do to best fight climate change.

Asking the second question suggests radically different actions. The best things we can do to fight climate change probably involve working on, advocating for, and donating to exceptional research and advocacy opportunities, rather than worrying about plastic bags, recycling, or turning out the lights.

Why is there so much focus on our personal emissions? One explanation is that our ethical views originate from before the 20th century, and sometimes from thousands of years ago. If you were a medieval peasant, your main ethical priority was to help your family survive without cheating or harming your neighbours. You didn't have the knowledge, power, or time to help hundreds of people or affect the long-term future.

The Industrial Revolution gave us wealth and technology not even available to kings and queens in previous centuries. Now, many ordinary citizens of rich countries have enormous power to do good, and this means the potential consequences of our actions are usually what’s most ethically significant about them.

But this isn't to say that we can ignore harms. In general, we think it's vital to avoid doing anything that seems very wrong from a commonsense perspective, even if it seems like it might lead to a greater social impact, and to cultivate good character.

In general, we see our advice as about striving to have a greater impact, within the constraints of your other important values.

So, we think ‘social impact’ or ‘making a difference’ should be about making the world better. But what does that mean?

What does it mean to “make the world better”?

We imagine building a world in which the most beings can have the best possible lives in the long term — lives that are free from suffering and injustice, and full of happiness, adventure, connection, and meaning.

There are two key components to this vision — impartiality and a focus on wellbeing — which we’ll now unpack.

Impartiality: everyone matters equally

When it comes to ‘making a difference,’ we think we should strive to be more impartial — i.e. to give equal weight to everyone’s interests.

This means striving to avoid privileging the interests of anyone based on arbitrary factors such as their race, gender, or nationality, as well as where or even when they live. We also think that the interests of many nonhuman animals should be given significant weight, although we’re unsure of the exact amount. Importantly, we’re also concerned about potentially sentient future digital beings, which could exist in very large numbers and whose welfare could be in part determined by how we design them.

The idea of impartiality is common in many ethical traditions, and is closely related to the “Golden Rule” of treating others as you’d like to be treated, no matter who they are.

Acting impartially is an ideal, and it’s not all that matters. As individuals, we all have other personal goals, such as caring for our friends and family, carrying out our personal projects, and having our own lives go well. Even considering only moral goals, it’s plausible we have other values or ethical commitments beyond impartially helping others.

We’re not saying you should abandon these other goals and strive to treat everyone equally in all circumstances.

Rather, the claim is that insofar as your goal is to ‘make a difference’ or ‘have a social impact,’ we don’t see good reason to privilege any one group over another — and that you should therefore have some concern for the interests of strangers, nonhumans, and other neglected groups.

(And even if you think that the ultimate ideal is to have equal concern for all beings, as a matter of psychology, you probably have other, competing goals, and it’s not helpful to pretend you don’t.)

In his essay Famine, Affluence, and Morality, Peter Singer imagines that you're walking along and come across a child drowning in a pond. Everyone agrees that you should run in and save the child, even if it would ruin your new suit and shoes.

This illustrates a principle that many people can get behind: if you can help a stranger a great deal with little cost to yourself, that’s a good thing to do. This shows that most people give some weight to the interests of others.

If it also turns out that you have a lot of power to help others (as we argued above), then it would imply that social impact should be one of the main focuses of your life.

Impartiality also implies that you should think carefully about who you can help the most. It’s common to say that “charity begins at home,” but if everyone’s interests matter equally, and you can help more people who are living far away (e.g. because they’re without cheap basic necessities you can provide), then you should help the more distant people.

We're convinced that a degree of impartiality is reasonable, and that many people should try thinking harder about impartiality than they are used to. But there remain huge questions about how impartial to be.

The trend over history seems to have been towards greater impartiality and a wider and wider circle of concern, but we're unsure where that should stop. For instance, relative to the interests of people alive today, how exactly should we weigh the interests of nonhuman animals, people who don't exist yet, and potential digital beings? This is called the question of moral patienthood.

Here’s an example of the stakes of this question: we don’t see much reason to discount the interests of future generations simply because they’re distant from us in time. But because there could be so many people in the future, the main focus of efforts to do good should be to leave the best possible world for those future generations. This idea has been called ‘longtermism,’ and is explored in a separate article. We think longtermism is an important perspective, which is part of why we say “over the long term” in our definition of social impact.

This section was about who to help; the next section is about what helps.

Wellbeing: what does it mean to help others?

When aiming to help others, our tentative hypothesis is that we should aim to increase their wellbeing as much as possible — i.e. enable more individuals to live flourishing lives that are healthy, happy, fulfilled; are in line with their wishes; and are free from avoidable suffering.

Although people disagree over whether wellbeing is the only thing that matters morally, almost everyone agrees that things like health and happiness matter a lot — and so we think it should be a central focus in efforts to make a difference.

Putting impartiality and a focus on wellbeing together means that, roughly, how much positive difference an action makes depends on how it increases the wellbeing of those affected, and how many are helped — no matter when or where they live.

What wellbeing consists of more precisely is a controversial question, which you can read about in this introduction by Fin Moorhouse. In brief, there are three main views:

  • The hedonic view: wellbeing consists in your degree of positive vs negative mental states such as happiness, meaning, discovery, excitement, connection, and equanimity.
  • The preference satisfaction view: wellbeing consists in your desires being fulfilled.
  • Objective list theories: wellbeing consists in achieving certain goods, like friendship, knowledge, and health.

In philosophical thought experiments, these different views have very different implications. For instance, if you support the hedonic view, you’d need to accept that being secretly placed into a virtual reality machine that generates amazing experiences is better for you than staying in the real world. If instead you support (or just have some degree of belief in) the preference satisfaction view, you don’t have to accept this implication, because your desires can include not being deceived and achieving things in the real world.

In practical situations, however, we rarely find that different views of wellbeing drive different decisions, such as about which global problems to focus on. The three notions correlate closely enough that differences in views are usually driven by other factors (such as the question of where to draw the boundaries of the expanding circle discussed in the previous section).

What else might matter besides wellbeing? There are many candidates, which is why we say promoting wellbeing is only a “tentative” hypothesis.

Preserving the environment enables the planet to support more beings with greater wellbeing in the long term, and so is also good from the perspective of promoting wellbeing. However, some believe that we should preserve the environment even if it doesn’t make life better for any sentient beings, showing they place intrinsic value on preserving the environment.

Others think we should place intrinsic value on autonomy, fairness, knowledge, and many other values.

Fortunately, promoting these other values often goes hand in hand with promoting wellbeing, and there are often common goals that people with many values can share, such as avoiding existential risks. So again, we believe that the weight people put on these different values has less effect on what to do than often supposed, although they can lead to differences in emphasis.

We’re not going to be able to settle the question of defining everything that’s of moral value in this article, but we think that promoting wellbeing is a good starting point — it captures much of what matters and is a goal that almost everyone can get behind.

How good is it to create a happy person?

We’ve mostly spoken above as if we’re dealing with potential effects on a fixed population, but some decisions could result in more people existing in the long term (e.g. avoiding a nuclear war), while others mainly benefit people who already exist (e.g. treating people who have parasitic worms, which rarely kill people but cause a lot of suffering). So we need to compare the value of increasing the number of people with positive wellbeing with benefiting those who already exist.

This question is studied by the field of ‘population ethics’ and is an especially new and unsettled area of philosophy.

We won’t try to summarise this huge topic here, but our take is that the most plausible view is that we should maximise total wellbeing — i.e. the number of people (again including all beings whose lives matter morally) who exist in all of history, weighted by their level of wellbeing. This is why we say “total” wellbeing in the definition.

That said, there are some powerful responses to this position, which we briefly sketch out in the article on longtermism. For this reason, we’re not certain of this ‘totalist’ view, and so put some weight on other perspectives.

Expected value: acting under uncertainty

How do you know what will increase wellbeing the most?

In short, you don’t.

You have to weigh up the likelihoods of the different outcomes, and act even though you're uncertain. We believe the theoretical ideal here is to take the action with the greatest expected value compared to the counterfactual. This means weighting how much each possible outcome would increase wellbeing by how likely it is, and then summing the results.
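
As a minimal sketch of what that calculation looks like, here is a short Python example. All the probabilities and wellbeing values are invented for illustration; in real decisions we almost never have numbers this precise:

```python
# Toy expected value comparison between two hypothetical actions.
# "Value" is the change in total wellbeing relative to doing nothing
# (the counterfactual), on an arbitrary scale.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs whose probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# Hypothetical action A: very likely to produce a modest benefit.
action_a = [(0.95, 10), (0.05, 0)]

# Hypothetical action B: usually achieves nothing, but occasionally a large benefit.
action_b = [(0.90, 0), (0.10, 200)]

print(expected_value(action_a))  # 9.5
print(expected_value(action_b))  # 20.0
```

On these made-up numbers, action B is better in expectation even though it usually fails, which is the kind of comparison that the rules of thumb discussed next try to approximate.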

In practice, we try to approximate this with rules of thumb, like the importance, neglectedness, tractability framework.

Going into expected value theory would take us too far afield, so if you want to learn more, check out our separate articles:

Why do we emphasise respecting other values?

This is to remind us of how much could be left out of our definition, and how radical our uncertainty is.

Many moral views that were widely held in the past are regarded as flawed or even abhorrent today. This suggests we should expect our own moral views to be flawed in ways that are difficult for us to recognise.

There is still significant moral disagreement within society, among contemporary moral philosophers, and, indeed, within our own team.

And past projects aiming to pursue an abstract ethical ideal to the exclusion of all else have often ended badly.

We believe it’s important to pursue social impact within the constraints of trying very hard to:

  • Not do anything that seems very wrong from a commonsense perspective
    • E.g. to not violate important legal and ethical rights
  • Cultivate good character, such as honesty, humility and kindness
  • Respect other people’s important values and to be willing to compromise with them
  • Respect your other important personal values, such as your family, personal projects and wellbeing

First, following these principles most likely increases your social impact in the long term, once you take account of the indirect benefits of following them and the limitations of your knowledge.

Second, many believe following these principles is inherently valuable. We're morally uncertain, so we try to consider a range of perspectives and do what makes sense on balance.

You can read more about why we think following principles like the above makes sense no matter your ethical views in these additional articles:

Considering these principles, if we had to sum up our ethical code into a single sentence, it might be something like: cultivate a good character, respect the rights of others, and promote wellbeing for the wider world.

And we think this is a position that people with consequentialist, deontological, and virtue-based ethics should all be able to get behind — it’s just that they support it for different reasons.

Is this just utilitarianism?

No. Utilitarianism claims that you’re morally obligated to take the action that does the most to increase wellbeing, as understood according to the hedonic view.

Our definition shares an emphasis on wellbeing and impartiality, but we depart from utilitarianism in that:

  • We don’t make strong claims about what’s morally obligated. Mainly, we believe that helping more people is better than helping fewer. If we were to make a claim about what we ought to do, it would be that we should help others when we can benefit them a lot with little cost to ourselves. This is much weaker than utilitarianism, which says you ought to sacrifice an arbitrary amount so long as the benefits to others are greater.
  • Our view is compatible with also putting weight on other notions of wellbeing, other moral values (e.g. autonomy), and other moral principles. In particular, we don’t endorse harming others for the greater good.
  • We’re very uncertain about the correct moral theory and try to put weight on multiple views.

Read more about how effective altruism is different from utilitarianism.

Overall, many members of our team don’t identify as being straightforward utilitarians or consequentialists.

Our main position isn’t that people should be more utilitarian, but that they should pay more attention to consequences than they do — and especially to the large differences in the scale of the consequences of different actions.

If one career path might save hundreds of lives, and another won’t, we should all be able to agree that matters.

In short, we think ethics should be more sensitive to scope.

Conclusion

We’re not sure what it means to make a difference, but we think our definition is a reasonable starting point that many people should be able to get behind:

“Social impact” or “making a difference” is (tentatively) about promoting total expected wellbeing — considered impartially, over the long term.

We’ve also gestured at how social impact might fit with your personal priorities and what else matters ethically, as well as many of our uncertainties about the definition — which can have a big effect on where to focus.

We think one of the biggest questions is whether to accept longtermism, so we’ve dedicated a whole separate article to that question.

From there, you can start to explore which global problems are most pressing based on whatever definition of social impact you think is correct.

You’ll most likely find that the question of which global problems to focus on is more driven by empirical or methodological uncertainties than moral ones. But if you find a moral question is crucial, you can come back and explore the further reading below.

In short, if you have the extraordinary privilege to be a college graduate in a rich country and to have options for how to spend your career, it’s plausible that social impact, as defined in this way, should be one of your main priorities.

Learn more

Read next

This article is part of our advanced series. See the full series, or keep reading:

The post What is social impact? A definition appeared first on 80,000 Hours.

Why being open to changing our minds is especially important right now https://80000hours.org/2022/11/why-being-open-to-changing-our-minds-is-especially-important-right-now/ Fri, 25 Nov 2022 21:21:08 +0000 https://80000hours.org/?p=80027 The post Why being open to changing our minds is especially important right now appeared first on 80,000 Hours.

If something surprises you, your view of the world should change in some way.

We’ve argued that you should approach your career like a scientist doing experiments: be willing to test out many different paths and gather evidence about where you can have the most impact.

More generally, this approach of open truth-seeking — being constantly, curiously on the lookout for new evidence and arguments, and always being ready to change our minds — is a virtue we think is absolutely crucial to doing good.

This blog post was first released to our newsletter subscribers.

Join over 350,000 newsletter subscribers who get content like this in their inboxes every two weeks — and we’ll also mail you a free book!

One of our first-ever podcast episodes was an interview with Julia Galef, author of The Scout Mindset (before she wrote the book!).

Julia argues — in our view, correctly — that it’s easy to end up viewing the world like a soldier, when really you should be more like a scout.

Soldiers have set views and beliefs, and defend those beliefs. When we are acting like soldiers, we display motivated reasoning: for example, confirmation bias, where we seek out information that supports our existing beliefs and misinterpret information that is evidence against our position so that it seems like it’s not.

Scouts, on the other hand, need to form correct beliefs. So they have to change their minds as they view more of the landscape.

Acting like a scout isn’t always easy:

  • There’s lots of psychological evidence suggesting that we all have cognitive biases that cloud our thinking.
  • It can sometimes be really painful to admit you were wrong or to come to think something unpleasant, even if that’s what the evidence suggests.
  • Even if you know you should change your beliefs, it’s difficult to know how much they should change in response to new evidence — the subject of our interview with Spencer Greenberg.
  • Having good judgement is actually just a difficult skill that needs to be practised and developed over time.

But if we want to form correct beliefs about the world and how it works, we have to try.
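
For the narrower question of how much a belief should shift when new evidence arrives, Bayes' rule gives the textbook answer: weigh how likely the evidence is if the belief is true against how likely it is overall. Here is a minimal sketch in Python, with invented numbers purely for illustration:

```python
# Toy Bayesian update: how confident should you be after seeing some evidence?

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(belief is true | evidence)."""
    p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    return prior * p_evidence_if_true / p_evidence

# Suppose you start out 30% confident in some claim (a made-up number), and you
# then see evidence that is twice as likely if the claim is true as if it is false.
print(posterior(prior=0.30, p_evidence_if_true=0.8, p_evidence_if_false=0.4))
# Roughly 0.46: the evidence should move you, but nowhere near to certainty.
```

Real-world updating is rarely this tidy, but the structure is the useful part: the size of the update depends on how strongly the evidence discriminates between the possibilities.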

And if we want to do good, forming correct beliefs about how our actions will impact others seems pretty crucial.

Why are we talking about this now?

Up until a few weeks ago, we’d held up Sam Bankman-Fried as a positive example of someone pursuing a high-impact career, and had written about how we encouraged him to use a strategy of earning to give.

Sam had pledged to donate 99% of his earnings to charity — and a year ago his net worth was estimated to be more than $20 billion. We were excited about what he might achieve with his philanthropy.

On November 11, Sam’s company, FTX, declared bankruptcy, and its collapse is likely to cause a tremendous amount of harm.

Sam appears to have made decisions which were, to say the least, seriously harmful.

If newspaper reports are accurate, customer deposits that were meant to be safely held by FTX were being used to make risky investments — investments which left FTX owing billions of dollars more than it had.

These reported actions are appalling.

We failed to see this coming.

So this week’s thoughts on the scout mindset are as much a reminder for us at 80,000 Hours as anyone else.

In the coming weeks and months, we want to thoughtfully examine what we believed and why — and in particular, where we were wrong — so we can, where needed, change our views.

There are so many questions for us to consider, in order to shape our future actions — here are a few:

There aren’t yet clear answers to these questions.

As we learn more about what happened and the wider effects of the collapse of FTX, we’re going to do our best to act like scouts, not soldiers: to defend beliefs only if they’re worthy of defence, and to be prepared and ready to change our minds.

We’ve released a statement regarding the collapse of FTX, and hope to write more on the topic soon.

We’re optimistic that the work of identifying, prioritising, and pursuing solutions to some of the world’s most pressing problems will continue. We hope you’ll be with us in that project.

Learn more

The post Why being open to changing our minds is especially important right now appeared first on 80,000 Hours.
