Career capital (Topic archive) - 80,000 Hours
https://80000hours.org/topic/career-advice-strategy/career-capital/

Benjamin Todd on the history of 80,000 Hours
https://80000hours.org/after-hours-podcast/episodes/benjamin-todd-history-80k/ (1 December 2023)

Ezra Klein on existential risk from AI and what DC could do about it
https://80000hours.org/podcast/episodes/ezra-klein-ai-and-dc/ (24 July 2023)

Should you work at a leading AI lab?
https://80000hours.org/career-reviews/working-at-an-ai-lab/ (20 June 2023)

In a nutshell: Working at a leading AI lab is an important career option to consider, but the impact of any given role is complex to assess. It comes with great potential for career growth, and many roles could be (or lead to) highly impactful ways of reducing the chances of an AI-related catastrophe — one of the world’s most pressing problems. However, there’s a risk of doing substantial harm in some cases. There are also roles you should probably avoid.

Pros

  • Many roles have a high potential for impact by reducing risks from AI
  • Among the best and most robust ways to gain AI-specific career capital
  • Possibility of shaping the lab’s approach to governance, security, and standards

Cons

  • Can be extremely competitive to enter
  • Risk of contributing to the development of harmful AI systems
  • Stress and frustration, especially because of a need to carefully and frequently assess whether your role is harmful

Key facts on fit

Excellent understanding of the risks posed by future AI systems, and for some roles, comfort with a lot of quick and morally ambiguous decision making. You’ll also need to be a good fit for the specific role you’re applying for, whether you’re in research, comms, policy, or something else (see our related career reviews).

Recommendation: it's complicated

We think there are people in our audience for whom this is their highest impact option — but some of these roles might also be very harmful for some people. This means it's important to take real care figuring out whether you're in a harmful role, and, if not, whether the role is a good fit for you.

Review status

Based on a medium-depth investigation

This review is informed by two surveys of people with expertise about this path — one on whether you should be open to roles that advance AI capabilities (written up here), and a second follow-up survey. We also performed an in-depth investigation into at least one of our key uncertainties concerning this path. Some of our views will be thoroughly researched, though it's likely there are still some gaps in our understanding, as many of these considerations remain highly debated.

Why might it be high-impact to work for a leading AI lab?

We think AI is likely to have transformative effects over the coming decades. We also think that reducing the chances of an AI-related catastrophe is one of the world’s most pressing problems.

So it’s natural to wonder — if you’re thinking about your career — whether it would be worth working in the labs that are doing the most to build, and shape, these future AI systems.

Working at a top AI lab, like Google DeepMind, OpenAI, or Anthropic, might be an excellent way to build career capital to work on reducing AI risk in the future. Their work is extremely relevant to solving this problem, which suggests you’ll likely gain directly useful skills, connections, and credentials (more on this later).

In fact, we suggest working at AI labs in many of our career reviews; it can be a great step in technical AI safety and AI governance and coordination careers. We’ve also looked at working in AI labs in our career reviews on information security, software engineering, data collection for AI alignment, and non-technical roles in AI labs.

What’s more, the importance of these organisations to the development of AI suggests that they could be huge forces for either good or bad (more below). If the former, they might be high-impact places to work. And if the latter, there’s still a chance that by working in a leading lab you may be able to reduce the risks.

All that said, we think it’s crucial to take an enormous amount of care before working at an organisation that might be a huge force for harm. Overall, it’s complicated to assess whether it’s good to work at a leading AI lab — and it’ll vary from person to person, and role to role. But we think this is an important option to consider for many people who want to use their careers to reduce the chances of an existential catastrophe (or other harmful outcomes) resulting from the development of AI.

What relevant considerations are there?

Labs could be a huge force for good — or harm

We think that a leading — but careful — AI project could be a huge force for good, and crucial to preventing an AI-related catastrophe, for example by taking many of the steps described in what AI companies can do today to reduce risks.1

But a leading and uncareful — or just unlucky — AI project could be a huge danger to the world. It could, for example, generate hype and acceleration (which we’d guess is harmful), make it more likely (through hype, open-sourcing or other actions) that incautious players enter the field, normalise disregard for governance, standards and security, and ultimately it could even produce the very systems that cause a catastrophe.

So, in order to be a successful force for good, a leading AI lab would need to balance continuing its development of powerful AI (and possibly even retaining a leadership position) against appropriately prioritising work that reduces the overall risk.

This tightrope seems difficult to walk, with constant tradeoffs to make between success and caution. And it seems hard to assess from the outside which labs are doing this well. The top labs — as of 2023, OpenAI, Google DeepMind, and Anthropic — seem reasonably inclined towards safety, and it’s plausible that any or all of these could be successfully walking the tightrope, but we’re not really sure.

We don’t feel confident enough to give concrete recommendations on which of these labs people should or should not work for. We can only really recommend that you put work into forming your own views about whether a company is a force for good. But the fact that labs could be such a huge force for good is part of why we think it’s likely there are many roles at leading AI labs that are among the world’s most impactful positions.

It’s often excellent career capital

Top AI labs are high-performing, rapidly growing organisations. In general, one of the best ways to gain career capital is to go and work with any high-performing team — you can just learn a huge amount about getting stuff done. They also have excellent reputations more widely (AI is one of the world’s most sought-after fields right now, and the top labs are top for a reason). So you get the credential of saying you’ve worked in a leading lab, and you’ll also gain lots of dynamic, impressive connections. So even if we didn’t think the development of AI was a particularly pressing problem, they’d already seem good for career capital.

But you’ll also learn a huge amount about AI in particular, make connections within the field, and, in some roles, gain technical skills that could be much harder to learn elsewhere.

We think that, if you’re early in your career, this is probably the biggest effect of working for a leading AI lab, and the career capital is (generally) a more important consideration than the direct impact of the work. You’re probably not going to be having much impact at all, whether for good or for bad, when you’re just getting started.

However, your character is also shaped by the jobs you take, and it matters a lot for your long-run impact, so it’s one of the components of career capital. Some experts we’ve spoken to warn against working at leading AI labs because you should always assume that you are psychologically affected by the environment you work in. That is, there’s a risk you change your mind without ever encountering an argument that you’d currently endorse (for example, you could end up thinking that it’s much less important to ensure that AI systems are safe, purely because that’s the view of people around you). Our impression is that leading labs are increasingly concerned about the risks, which makes this consideration less important — but we still think it should be taken into account in any decision you make. There are ways of mitigating this risk, which we’ll discuss later.

Of course, it’s important to compare working at an AI lab with other ways you might gain career capital. For example, to get into technical AI safety research, you may want to go do a PhD instead. Generally, the best option for career capital will depend on a number of factors, including the path you’re aiming for longer term and your personal fit for the options in front of you.

You might advance AI capabilities, which could be (really) harmful

We’d guess that, all else equal, we’d prefer that progress on AI capabilities was slower.

This is because it seems plausible that we could develop transformative AI fairly soon (potentially in the next few decades). This suggests that we could also build potentially dangerous AI systems fairly soon — and the sooner this occurs the less time society has to successfully mitigate the risks. As a broad rule of thumb, less time to mitigate risks seems likely to mean that the risks are higher overall.

But that’s not necessarily the case. There are reasons to think that advancing at least some kinds of AI capabilities could be beneficial. Here are a few:

  • The distinction between ‘capabilities’ research and ‘safety’ research is extremely fuzzy, and we have a somewhat poor track record of predicting which areas of research will be beneficial for safety work in the future. This suggests that work that advances some (and perhaps many) kinds of capabilities faster may be useful for reducing risks.
  • Moving faster could reduce the risk that AI projects that are less cautious than the existing ones can enter the field.
  • Lots of work that makes models more useful — and so could be classified as capabilities (for example, work to align existing large language models) — probably does so without increasing the risk of danger. This kind of work might allow us to use these models to reduce the risk overall, for example, through the kinds of defensive deployment discussed earlier.
  • It’s possible that the later we develop transformative AI, the faster (and therefore more dangerously) everything will play out, because other currently-constraining factors (like the amount of compute available in the world) could continue to grow independently of technical progress. Slowing down advances now could increase the rate of development in the future, when we’re much closer to being able to build transformative AI systems. This would give the world less time to conduct safety research with models that are very similar to ones we should be concerned about but which aren’t themselves dangerous. (When this is caused by a growth in the amount of compute, it’s often referred to as a hardware overhang.)

Overall, we think not all capabilities research is created equal — and that many roles advancing AI capabilities (especially more junior ones) will not be harmful, and could be beneficial. That said, our best guess is that the broad rule of thumb (that faster progress means less time to mitigate the risks) is more important than these other considerations — and as a result, broadly advancing AI capabilities should be regarded as probably harmful overall.

This raises an important question. In our article on whether it’s ever OK to take a harmful job to do more good, we ask whether it might be morally impermissible to do a job that causes serious harm, even if you think it’s a good idea on net.

It’s really unclear to us how jobs that advance AI capabilities fall into the framework proposed in that article.

This is made even more complicated by our view that a leading AI project could be crucial to preventing an AI-related catastrophe — and failing to prevent a catastrophe seems, in many value systems, similarly bad to causing one.

Ultimately, answering the question of moral permissibility is going to depend on ethical considerations about which we’re just hugely uncertain. Our guess is that it’s good for us to sometimes recommend that people work in roles that could harmfully advance AI capabilities — but we could easily change our minds on this.

For another article, we asked the 22 people we thought would be most informed about working in roles that advance AI capabilities — and who we knew had a range of views — to write a summary of their takes on the question: if you want to help prevent an AI-related catastrophe, should you be open to roles that also advance AI capabilities, or steer clear of them? There’s a range of views among the 11 responses we received, which we’ve published here.

You may be able to help labs reduce risks

As far as we can tell, there are many roles at leading AI labs whose primary effect could be to reduce risks.

Most obviously, these include research and engineering roles focused on AI safety. Labs also often don’t have enough staff in relevant teams to develop and implement good internal policies (like on evaluating and red-teaming their models and wider activity), or to figure out what they should be lobbying governments for (we’d guess that many of the top labs would lobby for things that reduce existential risks). We’re also particularly excited about people working in information security at labs to reduce risks of theft and misuse.

Beyond the direct impact of your role, you may be able to help guide internal culture in a more risk-sensitive direction. You probably won’t be able to influence many specific decisions, unless you’re very senior (or have the potential to become very senior), but if you’re a good employee you can just generally become part of the ‘conscience’ of an organisation. Just like anyone working at a powerful institution, you can also — if you see something really harmful occurring — consider organising internal complaints, whistleblowing, or even resigning. Finally, you could help foster good, cooperative working relationships with other labs as well as the public.

To do this well, you’d need the sorts of social skills that let you climb the organisational ladder and bring people round to your point of view. We’d also guess that you should spend almost all of your work time focused on doing your job well; criticism is usually far more powerful coming from a high performer.

There’s a risk that doing this badly could accidentally cause harm, for example, by making people think that arguments for caution are unconvincing.

How can you mitigate the downsides of this option?

There are a few things you can do to mitigate the downsides of taking a role in a leading AI lab:

  • Don’t work in certain positions unless you feel awesome about the lab being a force for good. This includes some technical work, like work that improves the efficiency of training very large models, whether via architectural improvements, optimiser improvements, improved reduced-precision training, or improved hardware. We’d also guess that roles in marketing, commercialisation, and fundraising tend to contribute to hype and acceleration, and so are somewhat likely to be harmful.
  • Think carefully, and take action if you need to. Take the time to think carefully about the work you’re doing, and how it’ll be disclosed outside the lab. For example, will publishing your research lead to harmful hype and acceleration? Who should have access to any models that you build? Be an employee who pays attention to the actions of the company you’re working for, and speaks up when you’re unhappy or uncomfortable.
  • Consult others. Don’t be a unilateralist. It’s worth discussing any role in advance with others. We can give you 1-1 advice, for free. If you know anyone working in the area who’s concerned about the risks, discuss your options with them. You may be able to meet people through our community, and our advisors can also help you make connections with people who can give you more nuanced and personalised advice.
  • Continue to engage with the broader safety community. To reduce the chance that your opinions or values will drift just because of the people you’re socialising with, try to find a way to spend time with people who more closely share your values. For example, if you’re a researcher or engineer, you may be able to spend some of your working time with a safety-focused research group.
  • Be ready to switch. Avoid being in a financial or psychological situation where it’s just going to be really hard for you to switch jobs into something more exclusively focused on doing good. Instead, constantly ask yourself whether you’d be able to make that switch, and whether you’re making decisions that could make it harder to do so in the future.

How to predict your fit in advance

In general, we think you’ll be a better fit for working at an AI lab if you have an excellent understanding of risks from AI. If the positive impact of your role comes from being able to persuade others to make better decisions, you’ll also need very good social skills. You’ll probably have a better time if you’re pragmatic and comfortable with making decisions that can, at times, be difficult, time-pressured, and morally ambiguous.

While a career in a leading AI lab can be rewarding and high impact for some, it’s not suitable for everyone. People who should probably not work at an AI lab include:

  • People who can’t follow tight security practices: AI labs often deal with sensitive information that needs to be handled responsibly.
  • People who aren’t able to keep their options open — that is, they aren’t (for a number of possible reasons) financially or psychologically prepared to leave if it starts to seem like the right idea. (In general, whatever your career path, we think it’s worth trying to build at least 6-12 months of financial runway.)
  • People who are more sensitive than average to incentives and social pressure: they’re more likely to end up doing things they wouldn’t currently endorse.

Beyond these general considerations, predicting your fit will depend on the exact career path you’re following, and for that you can check out our other related career reviews.

How to enter

Some labs have internships (e.g. at Google DeepMind) or residency programmes (e.g. at OpenAI) — but the path to entering a leading AI lab can depend substantially on the specific role you’re interested in. So we’d suggest you look at our other career reviews for more detail, as well as plenty of practical advice.

Recommended organisations

We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs. Some people we spoke to have strong opinions about which of these is best, but they disagree with each other substantially.

Big tech companies like Apple, Microsoft, Meta, Amazon, and NVIDIA — which have the resources to potentially become rising stars in AI — are also worth considering, as there’s a need for more people in these companies who care about AI safety and ethics. Relatedly, plenty of startups can be good places to gain career capital, especially if they’re not advancing dangerous capabilities. However, the absence of teams focused on existential safety means that we’d guess these are worse choices for most of our readers.

Want one-on-one advice on pursuing this path?

If you think this path might be a great option for you, but you need help deciding or thinking about what to do next, our team might be able to help.

We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.

APPLY TO SPEAK WITH OUR TEAM

Find a job in this path

If you think you might be a good fit for this path and you’re ready to start looking at job opportunities that are currently accepting applications, see our list of opportunities for this path:

    View all opportunities

    Learn more about working at AI labs

    Learn more about making career decisions where there’s a risk of harm:

    Relevant career reviews (for more specific and practical advice):

    Read next:  Learn about other high-impact careers

    Want to consider more paths? See our list of the highest-impact career paths according to our research.

    Plus, join our newsletter and we’ll mail you a free book

    Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity.

    Know what you’re optimising for
    https://80000hours.org/2022/06/know-what-youre-optimising-for/ (15 June 2022)

    There is (sometimes) such a thing as a free lunch

    You live in a world where most people, most of the time, think of things as categorical, rather than continuous. People either agree with you or they don’t. Food is healthy or unhealthy. Your career is ‘good for the world,’ or it’s neutral, or maybe even it’s bad — but it’s only the category that matters, not the size of the benefit or harm. Ideas are wrong, or they are right. Predictions end up confirmed or falsified.

    In my view, one of the central ideas of effective altruism is the realisation that ‘doing good’ is not such a binary. That as well as it mattering that we help others at all, it matters how much we help. That helping more is better than helping less, and helping a lot more is a lot better.

    For me, this is also a useful framing for thinking rationally. Here, rather than ‘goodness,’ the continuous quantity is truth. The central realisation is that ideas are not simply true or false; they are all flawed attempts to model reality, and just how flawed is up for grabs. If we’re wrong, our response should not be to give up, but to try to be less wrong.

    When you realise something is continuous that most people are treating as binary, this is a good indication that you’re in a situation where it’s unusually easy to achieve something you care about. Because if most people don’t see huge differences between options that you do, you can concentrate on the very best options and face little competition from others.

    Sometimes the converse is also true: people may treat something as continuous, and work hard at it, despite the returns to working harder actually being very small.

    An example that sticks in my mind from my time teaching maths is about how neatly work is presented. Lots of people care about neat work or good presentation, and sometimes there’s a very good reason for this. If work is messy enough that it’s difficult to read, or that the student is making mistakes caused by misreading their own writing, this is important to fix!

    The problem is, the returns on neatness suddenly drop off a cliff when the work is clear enough to be easily readable, and yet some students will put huge amounts of effort into making their work look not just clear, but unnecessarily neat.

    Worse still, some teachers will praise this additional effort, implying it’s a good thing that someone takes three times as long as they need to on every piece of work just to make it look nice. But it’s usually1 not — that extra time could be used for learning, or just hanging out with friends!

    I remember speaking to some students who were struggling with their workload, only to discover that they were doing each piece of work twice: once to get the maths, and another to copy everything out beautifully to hand in. It broke my heart.

    Even when it’s fairly normal to try really hard at something, it’s worth checking that more effort is reliably leading to more of what you care about. That is to say, there are some things you should half-ass with everything you’ve got.

    Thinking about these ideas as I tried to help my students — and now as I try to help the people I advise — I’ve noticed two ideas that frequently appear in the advice I give.

    1. Try optimising for something.
    2. Know what you’re optimising for.

    In the rest of this article, I describe how I think about applying these two ideas, and the sort of mistakes that I hope they can prevent. I include lots of examples, and most of these are linked to career decisions inspired by real conversations I’ve had, though none were written with a specific person in mind, and all of the names are made up.

    I also try to include some more abstract mathematical intuition, made (hopefully) clearer with the addition of some pretty graphs.

    At the end of the article, I try to think of ways in which the advice might not apply or be misleading, though you may well generate others as you read, and trying to do so seems like a useful exercise.

    Idea #1: Consider optimising for something

    You are allowed to try really hard to achieve a thing you care about, even when it’s a thing not that many people try hard to achieve — in some ways, especially in those cases. You don’t have to stop at ‘enough,’ or even at ‘lots’ — you can keep going. You can add More Dakka.

    The thought of trying really hard at something feels very natural to some people, including many who I expect might find useful ideas in the rest of the article. But to many others, it feels gross, or unnatural, or in some way ‘not allowed’ — ‘tryhard’ is a term some people even use to insult others! It’s for this last reason that I framed this idea in terms of permission — I don’t think you need it, but if you found the idea off-putting, now you have permission to do it anyways.

    Idea #2: Know what you’re optimising for

    This idea is about being deliberate in what you’re trying hard to achieve. It’s about trying to ensure that the subject of the majority of your effort is in fact the most important thing. In some sense, like optimising at all, it’s about permission: knowing that you are allowed to realise that one thing is much more important for you to get than all of the others, and trying to get it (even if it’s not the typical thing people want).

    Know what you’re optimising for is also, I suspect, often about picking only one thing at a time, even if multiple things are important. Even in cases where picking one thing doesn’t seem best, asking the question “Which one thing should I optimise for?” seems like it might produce useful insights.

    People often optimise for the wrong thing

    I first saw people repeatedly optimising for the wrong thing when I was teaching. Students care about many things, from status among their peers to getting good enough grades for university. Many of these things are directly rewarded by people that students interact with: parents will praise good grades; other students will let you know what they think of you; and some teachers will be fairly transparent about who they think the smart kids are (even if they try to hide it).

    Importantly, though several of these things are correlated with learning, none of them are perfect indicators of actually learning. Even though most people agree to some extent that one of the major purposes of school is learning, learning has a really weak reward signal, and it’s easy to drift through school without really trying to learn.

    There’s a difference between doing things that are somewhat correlated with things you want (or even doing things that you expect to lead to things you want), and trying really unusually hard to actually get what you want. Sometimes working out what you actually want can be really hard — for many, working out what one ultimately values can be a lifetime’s work. However, I’ve been frequently surprised, during my time as an advisor, by how often it’s been sufficient to just ask:

    It looks like you’re trying to achieve X here. Is X really the thing you want?

    The mistake of optimising for not quite the thing you want can be particularly easy to miss if the thing is useful in general, but in this instance is not useful for you. For one thing, it’s hard to internally notice without specifically looking for it. But you’re also less likely to have others point out this mistake, because things that are useful in general seem more ‘normal’ to have as a goal. For instance, appearing high status seems pretty useful, and it’s a goal that many people have to some extent, so who’s going to stop and ask you whether you really endorse playing as many status games as you are?

    Perhaps a more relevant example is that I often see (usually young) effective altruists optimising for impact per unit of time, rather than for the total impact they expect to have over their career. They ask themselves what the most impactful thing they can do right now is, and then do that. This often works well, and there are many worse heuristics to use. Unfortunately, it’s not always the case that trying to do the very best thing right now puts you in the best position to do the most good overall.

    People seem to accept this when it comes to going to university. Choosing to do an undergraduate degree is to some extent like choosing to take a negative salary job — which usually doesn’t produce any useful output to others — purely to learn a lot and set yourself up well to achieve things later. For many people, this is a great idea! But then something strange happens when people graduate. For an altruist, taking a role in a for-profit company where you’ll gain a whole bunch of useful skills can look very unattractive, as you won’t be having any direct impact. Taking a salary hit for an opportunity to learn a ton also doesn’t look good (that is, unless the opportunity is called ‘grad school,’ in which case it looks fine again). Neither of these strategies is necessarily best, but they are at least worth considering! The lost impact or salary at the outset might be made up for many times over if you’re able to access more impactful opportunities later.

    The law of equal and opposite advice applies in many places, and this is one of them. Just as you might make the mistake of under-investing in yourself, you can also stay in the ‘building up to have a big impact later’ phase for too long. Someone I advised not too long ago referred to themself as “an option value addict,” which I thought was a great way to frame this idea. While the idea of option value — that it can be useful to preserve your ability to choose something later — is a really valuable one, it’s only valuable to keep options that you actually have some chance of choosing. The smaller the chance that you ever take a particular option, the less valuable it is to preserve it — so thinking about how likely you personally are to use it ends up being important.

    For example, it might be worthwhile for some people to keep an extremely low profile on all forms of social media in case a spicy social media presence prevents them from later working for an intelligence agency or running for office. But if you have absolutely no intention of ever working in government, this reason doesn’t apply to you! (There are, of course, other reasons one might want to limit social media exposure.)

    Trying to optimise for too many things can lead to optimising for nothing in particular

    As well as optimising for the wrong things, I often speak to people who are shooting for too many things at once. This typically plays out in one of two ways:

    • People try to optimise for so many things that they don’t end up making progress on any.
    • People just don’t optimise at all — because when so many things seem important, where do you even start?

    In both cases, this often ends up with people trying to find an option that looks at least kind-of good according to multiple different criteria. Doing well on many uncorrelated criteria is pretty hard.2 This often leads to only one option being considered… and that option not looking great.
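
To see why, here's a minimal sketch (my own illustration, not from the original article) that assumes every criterion is scored independently and uniformly at random, and counts how many options land in the top 25% on all of them at once:

```python
import numpy as np

rng = np.random.default_rng(0)
n_options = 100_000  # a hypothetical pool of options to choose between

for n_criteria in (1, 2, 3, 5):
    # Independent uniform scores stand in for "uncorrelated criteria".
    scores = rng.random((n_options, n_criteria))
    # An option looks "kind-of good" only if it sits in the top 25% on every criterion.
    share = (scores > 0.75).all(axis=1).mean()
    print(f"{n_criteria} criteria: {share:.2%} of options clear every bar")

# Roughly 25%, 6%, 1.6%, 0.1%: each extra independent criterion shrinks the pool fast.
```

Real criteria are usually somewhat correlated, which softens the effect, but the basic point stands: shortlists built from 'decent on everything' tend to be tiny.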

    What might this look like?

    The examples below have been inspired by conversations I’ve had. Each involves a hypothetical person describing an option which seems pretty good. It might even be the best option they have. But all of these pretty good options follow the pattern of ‘this thing looks good for many different reasons’ — and ‘looks good for many reasons’ misses the importance of scale: that doing much, much better in one way is often better than doing a little better in several ways at the same time. The people in the examples would benefit from considering what their decision would look like if they picked one source of value, and tried to get as much of that as possible.

    Alex

    If I join this cleantech startup, I will be contributing to the fight against climate change. It’s also a startup, so there’s some chance it will go really well — so this is also an earning-to-give strategy, and I might learn some things by being there.

    • If I’m hoping to pursue a ‘hits-based’ earning-to-give strategy as a startup founder or early-stage employee, almost all the expected value is going to come from the outcomes where the project really takes off. If I look around the startup space for other options, how likely does it seem that this is the one that will take off? Can I find a much better opportunity if I drop the requirement that it has to be in cleantech?
    • When I really reflect on which causes seem important, I realise that I’m quite likely to make my donations to reducing global catastrophic biological risks, rather than climate charities. There’s a lot of need for founders in the biosecurity space, and my skills and earnings won’t be that useful in the next few years, so maybe the learning value from being part of an early-stage startup is the most important consideration here. Does the cleantech startup look best on that basis, or is there somewhere else I might be able to learn much more, even if the primary motive of the founders is profit rather than climate change?

    Luisa

    If I do this data science in healthcare internship, I’ll learn some useful machine learning skills, and I might be able to directly contribute to reducing harm from heart disease.

    • Developing my machine learning skills seems like the most important thing for me to focus on, given what I want to work on after graduating. It’s not clear that this internship is going to be particularly helpful — I’m probably just going to end up cleaning data. I don’t learn well without structure though. Could I find someone to supervise or mentor me through a machine learning project?
    • I’m pretty sure I’ll learn loads during summer; I’ve done really well at teaching myself programming so far and would probably learn even more if I didn’t do the internship. But I don’t want to have to move back into my parents’ house in the middle of nowhere where I’ll be miserable, and the pay from the internship will mean I can afford to stay in a city, see my friends, and keep motivated. If the main thing I’m getting from the internship is money, can I apply for a grant? Or can I find something shorter which will still pay me enough, or something where I’ll be writing more code even if it’s not in healthcare?

    Benjamin

    This role isn’t directly relevant to the cause I think is most important, but it’s still helping somewhat, and it’s fairly well paid so I can also contribute with my donations.

    • If I just took the highest-salary job I could, how much more would I be able to donate? Would that do more good than my direct work in my current role? I think my donations are directly saving a lot of lives, so I should at least run the numbers.
    • I’m giving away a decent fraction of my salary anyway, so I’m happy to live on less than this job is giving me. Did I restrict my options too much by looking for such a high salary? I should look at whether there are any jobs I could take where I’d be able to do much more good directly than the total of my current work and donations are doing now.

    When facing a situation with multiple potential sources of value, you might be able to get outsized gains by just pushing really hard on one of them. In particular, it’s possible to get gains so big that they more than outweigh losses elsewhere.

    It’s not always the case that you can completely trade off different good things against each other — many people, for example, want to have at least some interest in their work. But it is sometimes the case, and it’s worth noticing when you’re in one of those situations. In particular, if the different good things you’re achieving are all roughly described as ‘positive effects on the world,’ you can estimate the size of the effects and see how they trade off against each other. What matters is that you’re doing good, not how you’re doing it. Of course, be careful not to take that last part too far.

    The ‘alternative framings’ in the examples above all replace optimising for nothing in particular with just optimising for one thing. The other things either got dropped entirely, or were only satisficed,3 rather than optimised for. This isn’t an accident. Picking one thing forces you to be deliberate about which thing you shoot for, and it makes it seem possible to actually optimise. I think those benefits alone are enough to at least consider just picking one thing.

    But I actually suspect that something even stronger is true: often just having a single goal is best.

    The intuition here is that when you value things differently to the population average, your best options are likely to be skewed towards the things you care relatively more about. Markets are fairly efficient for average preferences, but when your preferences are different to the average, you might find big inefficiencies. For example, if you’re househunting and you absolutely love cooking but never work from home, it’s worth looking for places that have unusually big kitchens compared to the size of the other rooms. Most people are willing to pay more for bigger rooms, or a home office — if you don’t need those things, don’t pay for them!

    Let’s sketch some graphs to try to see what’s going on here. Consider the case where you care about two things — let’s say salary and interestingness. (Often you’ll care about more than two things, but 2D plots are easier to sketch, and I suspect that the effect I sketch below is even stronger in higher dimensions.) You might expect the job market to look something like Figure 1:

    Figure 1. Initial distribution of jobs

    Let’s assume that the average person cares equally about salary and interestingness, and rates them by just adding up the two scores. When this is the case, we should expect that higher-salaried jobs that are more interesting will be harder to get.

    In Figure 2, I’ve colour coded jobs that are easier to get as black/purple and jobs that are harder to get as orange/yellow. But what if I care much more about my job being interesting than it paying well? In this case, the best jobs for me won’t be quite the same as the hardest for me to get. I’ve shown this preference in Figure 3 by colour coding a different plot from bright yellow (perfect for me) to dark purple (terrible for me). I assumed that I still cared about salary, but that interest was three times as important — so to rank the jobs, I multiplied the interest score by three before adding salary.

    Figure 2. Jobs colour coded by competitiveness
    Figure 3. Jobs colour coded by personal preference

    I want to look for jobs that are easier for me to get (darker on Figure 2), and that I’ll actually want (lighter on Figure 3). The easiest jobs for me to get are in the bottom left, which doesn’t help much, as I don’t want these. The jobs I want most are in the top right, which also doesn’t help much as these are hardest to get. If my theory is correct, I should get the best tradeoffs between these two things by focusing hard on the thing I care more about than average (interest), while not worrying as much about the thing I care less about than average (salary). This would tell me to look first in the bottom right of the graph.

    It’s a little hard to tell from just these two figures exactly how well the theory is doing, so let’s make things a bit easier to see in Figure 4 below. First, I removed the top 10% most popular jobs among the general public, to represent some jobs being competitive enough to not even be worth trying. I then also removed the bottom 50% of the jobs according to my preferences, to represent wanting to look for something better than average. Both of these cutoffs are arbitrary, but the conclusion doesn’t change when you pick different ones.

    Figure 4. Jobs I’ll be able to get that I also want

    As expected, the best-looking options I’ll actually be able to get look like very interesting, low-salary jobs.
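
If you want to play with the toy model behind Figures 1 to 4, here is a rough sketch of how I read it. The equal-weight competitiveness score, the three-times weight on interest, and the 10%/50% cutoffs come from the text above; the standard normal score distributions (and everything else) are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_jobs = 5_000

# Each job gets an independent salary score and interest score (distribution assumed).
salary = rng.normal(size=n_jobs)
interest = rng.normal(size=n_jobs)

# Figure 2: the average applicant weights both equally, so competitiveness ~ salary + interest.
competitiveness = salary + interest

# Figure 3: my own ranking weights interest three times as heavily as salary.
my_score = salary + 3 * interest

# Figure 4: drop the 10% most competitive jobs, then drop the bottom 50% by my own ranking.
attainable = competitiveness < np.quantile(competitiveness, 0.90)
wanted = my_score > np.quantile(my_score, 0.50)
shortlist = attainable & wanted

print("Shortlisted jobs:", int(shortlist.sum()))
print("Mean interest of shortlist:", round(float(interest[shortlist].mean()), 2))
print("Mean salary of shortlist:  ", round(float(salary[shortlist].mean()), 2))
# The shortlist skews strongly towards interest rather than salary: the bottom right of Figure 4.
```

Changing the weights or the cutoffs moves the sweet spot around, but as long as your weights differ from the average person's, the attainable options you most want cluster at the extreme of whatever you care about more.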

    In practice, all of the tradeoffs above will be much less clean. Preferences about different options probably shouldn’t be linear, for example, certainly not in the case of salary. Despite all this, the conclusion remains that if your preferences are in some way different from the average, some of the best places to look to exploit the differences are the extremes.

    When do I expect this not to apply?

    Multiplicative factors

    In the sorts of situations I describe above, the total value tends to come from adding up the value of each separate consideration: my job being interesting makes me a bit happier, and so does being paid more; donating money to effective charities saves lives, and so does working for one of those charities. In these cases, less of one thing pretty directly gets traded for more of another. Even in these cases, it can still be worth getting to some minimum level,3 if you get most of the gains from getting to that level and/or it’s easy.

    Sometimes though, success looks more like a bunch of factors multiplied together than a bunch of things added together. When this is the case, it becomes really important that none of those factors end up getting set too low, which can be catastrophic.
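
Here is a tiny numerical illustration of the difference, with made-up scores between 0 and 1 for three factors (say impact, career capital, and personal wellbeing):

```python
balanced = [0.6, 0.6, 0.6]   # everything kept at an okay level
lopsided = [0.9, 0.9, 0.1]   # two factors pushed hard, one allowed to crater

def additive(factors):
    return sum(factors)

def multiplicative(factors):
    total = 1.0
    for f in factors:
        total *= f
    return total

print(round(additive(balanced), 3), round(additive(lopsided), 3))              # 1.8 vs 1.9
print(round(multiplicative(balanced), 3), round(multiplicative(lopsided), 3))  # 0.216 vs 0.081
```

Under addition, the lopsided strategy comes out slightly ahead; under multiplication, the crashed factor drags everything down, which is why it matters so much not to let any single multiplier, like your health, fall too low.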

    In my view, the most important example of something that can be a multiplier on everything else you’re doing is personal health and wellbeing, especially when it is in danger of dropping below a certain level. Burnout is already a big risk when you’re optimising for doing as much as possible to help, especially among people who really care about others. In fact, one of my biggest concerns in writing this piece is that it might make this risk higher.

    In some sense, we can frame this problem as a mistake of optimising for the wrong thing: impact right now instead of impact over the long run. But on this topic, the thing I care most about is not what it says about optimisation — I care most that you take care of yourself as your number one priority. These resources provide useful perspectives on this risk, as well as some ideas for how to reduce it:

    Very good might be good enough

    You’ll often find that as you keep trying to push the envelope further, it gets harder and harder to make progress. At some point then, even after you’ve seen substantial gains from deciding to optimise at all, you may reach a point where effort on the most important thing is going to pay off less than effort on something else.

    This might happen because there are fewer and fewer people who you can learn from. It could be that you are in fact now making far fewer mistakes in your efforts, and the fewer mistakes you make, the harder it is to catch and eliminate them. Maybe it’s just that you’re starting to enter the domain of people who are really trying, and competition is heating up. Whatever the reason, there’s a chance that this is the time to pick a second thing, and push on that too. In particular, when it comes to personal skill development, not only can it be easier to get extremely good at two things than truly world-class at one, but your resulting skill set might also look quite special.

    Next steps

    People who know what they are optimising for might ask themselves things like:

    • Is what I’m trying to achieve in this situation the right thing?
    • Am I trying to achieve multiple things at once? Is that the best strategy?
    • Does the thing I’m trying to achieve actually lead to something I want?
    • What would it look like if I focused on the most important thing and dropped the others?

    It might be worth picking some aspect of your life and asking yourself those questions now. Did one work particularly well, or can you think of an alternative question that works better for you?

    After reading this article, you may well think that this kind of mindset isn’t well-suited to the way you think. If that’s the case, that’s fine! Hopefully you now at least have a different perspective you can look at some decisions with. Even if it seems unlikely you’ll use it often, it might shed some light on decisions made by people like me.

    Michelle and Habiba on what they’d tell their younger selves, and the impact of the 1-1 team
    https://80000hours.org/after-hours-podcast/episodes/michelle-habiba-advice-for-younger-selves/ (9 March 2022)

    Michelle Hutchinson & Habiba Islam on balancing competing priorities and other themes from our 1-on-1 careers advising
    https://80000hours.org/podcast/episodes/michelle-hutchinson-habiba-islam-themes-from-careers-advising/ (9 March 2022)

    23 career choice heuristics
    https://80000hours.org/2022/03/23-career-choice-heuristics/ (7 March 2022)

    Note: This is a cross-post from the Effective Altruism Forum, written by Jack Ryan and Olivia Jimenez — not by authors at 80,000 Hours. We decided to post it here because we liked it and thought our audience might enjoy it!

    We decided to make a list of all of the career choice heuristics we could think of — see below. Many of these are stated as if completely true, even though we think they aren’t. We invite you to add any additional heuristics you have in the comments of the original post.

    • Scale, number helped — do something that impacts many people positively
    • Scale, degree helped — do something that impacts people to a great positive degree
    • Neglectedness — do something that few others are doing or that won’t be done counterfactually
    • Tractability — do something that makes significant progress on a problem
    • Moments of progress — notice where progress happens in your life and find a career path that integrates those
    • Strong team — if you haven’t worked well alone, join an excellent team
    • Likable people — join a team of people that you like
    • Mental well-being — do something that is optimized for being good for your mental health
    • Team smarter than you — join a team where most people are smarter than you
    • Be a thought or org leader — roughly, there are two types of leaders – thought leaders and org leaders; figure out which type you are more likely to be and optimize for succeeding at that type
    • Learn from leaders — learn from the leaders who you most want to be like
    • Maximize learning/skills, unless — in your early career, focus almost entirely on learning and building skills unless there’s an exceptional impact opportunity that won’t be possible later
    • Rare learning1 — do something where you learn rare knowledge like technical skills or management
    • Maximize late-career impact — do something that maximizes the impact you will have when you are at your career peak (e.g. because the calendar year will be higher leverage, or because it is better to grow and learn before focusing on direct impact)
    • Maximize immediate impact2 — do something that maximizes the impact you will have in the next few years (since things will get less neglected and/or because the calendar year will be lower leverage later)
    • Shower thoughts — do something that you will think about in the shower or while you are falling asleep
    • Comparative advantage — do something that leverages your comparative advantage or personal fit
    • Comparative disadvantage — avoid whatever utilizes your comparative disadvantages
    • Career capital — do something that gives you power/influence and/or career capital in important parts of broader society
    • Be honest and have integrity — be honest and have integrity so that the rest of the EA community responds to your actions optimally (including e.g. by giving you a job or status)
    • Be a founder — do something that involves starting a company
    • Scalability — do something that can scalably use money and/or labor
    • Optimize one thing at a time3 — goals likely vary in (expected) value greatly, so you should probably only be optimizing one major thing at a time, or taking on one major project at a time; if you aren’t sure what to optimize, optimize for figuring that out

    Holden Karnofsky on building aptitudes and kicking ass
    https://80000hours.org/podcast/episodes/holden-karnofsky-building-aptitudes-kicking-ass/ (26 August 2021)

    Chris Olah on working at top AI labs without an undergrad degree
    https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/ (11 August 2021)

    Cal Newport on an industrial revolution for office work
    https://80000hours.org/podcast/episodes/cal-newport-industrial-revolution-for-office-work/ (28 July 2021)