Thoughts on Magnus Vinding’s “Priorities for reducing suffering: Reasons not to prioritize the Abolitionist Project”
Preamble
Magnus Vinding recently wrote a blog post on his reasons for not prioritising the abolitionist project (AP). On reading it I made a number of notes and comments, and I considered publishing just those. However, I didn’t feel they could convey my view on the matter, which is somewhat different from (my reading of) Magnus’ view. Hence this post.
(In case you’re in doubt: your critique or other feedback would be much appreciated.)
Background assumptions
Like David Pearce and Magnus Vinding, I’m of the view that reducing suffering should be our chief ethical priority, with severe suffering being of the foremost importance.
Where we may differ is in our perspectives on “s-risks”: what counts as one, and what increase in suffering makes a world a “worse” place.
On the possibility of “digital sentience”, I share Pearce’s and Vinding’s scepticism, with the additional point that digital simulations are interpretation-dependent.
Caveats
I haven’t spent nearly as much time as Magnus (and David) on reflecting on and researching priorities for effective suffering reduction.
Partly for this reason, my reading of their views can be accurate only to a degree, so please give them the benefit of the doubt (or ideally, read their work first).
Acknowledgements
- Magnus and David (and other suffering reducers!) for taking suffering seriously
- Jone for discussing Magnus’ post with me and for criticizing my ideas
- Ruth for remote co-working and other support
- Participants and organisers of the suffering abolitionist working meetup for creating a space of accountability and inspiration
In short
For various reasons, my current view is that it’s not enough that we “only” avoid futures that are (even) worse than what the current world has allowed, for the recurrent horrific suffering in this world is already absolutely unacceptable. It must thus be abolished for good.
The good news is that there’s potentially a strategy to secure both, which would mean a world without any severe suffering (and, optimistically, without any involuntary suffering at all). The strategy is to work towards a robust future: for example, a future where sentience is invincible to severe suffering and where “natural” selection is superseded by compassionate design.
Given the constraints of (genetically unenriched) human nature and the scale of wild-animal suffering (and the deadline imposed by humans spreading life beyond the solar system or by civilisational collapse), I don’t see how biotechnology-enabled solutions can be avoided. Hence the abolitionist project; or, less controversially, a version thereof that is specifically about abolishing severe, on most accounts needless, suffering and that prioritises the currently most neglected (or abused) sentient beings.
Regardless, humans’ low concern for (distant) suffering remains a common bottleneck for most approaches to effective suffering reduction. This suggests the common initial aim of promoting concern for suffering and better policies, although (successful) abolitionist proof-of-concept projects can already be used to raise awareness about the problem as well.
Longer version
“Critical” difference
In his recent blog post and in the chapter on AP in Suffering-Focused Ethics: Defense and Implications, Magnus Vinding considers the pros and cons of prioritising AP for effective suffering reduction. He concludes that AP probably is not “among the best ways … to address worst-case outcomes”.
I comment on some of his considerations below. But first it’s important to establish whether and how our critical criteria differ. Here’s Magnus’ “critical question” from the post:
What reasons do we have to think that prioritizing and promoting the Abolitionist Project is the single best way, or even among the best ways, to address worst-case outcomes?
Magnus argues that preventing worst-case futures “is, plausibly, the main priority for reducing suffering in expectation”. For example, he writes that “it seems reasonable to give disproportionate focus to the prevention of worst-case outcomes, as they contain more suffering (in expectation). [emphasis mine]”
Almost by definition, worst-case outcomes should be avoided at any lesser cost. Yet I’m hesitant to embrace this in practice.
My concern is that by focusing exclusively on worst-case scenarios we may severely neglect or even ignore the present and proximate futures. This is a problem because, arguably, the level of suffering that the current world has allowed is already absolutely unacceptable.
One may say this doesn’t matter: other things being equal, it doesn’t make sense to prioritise ongoing suffering over preventing a worse future.
Again, I don’t disagree if the tradeoff is indeed as described. But I don’t think it necessarily holds in practice. For if the existing hellscape is solvable, then ending it could help establish a robust future that is infertile ground for any horrific suffering. Avoiding “worst-case outcomes” in particular, on the other hand, wouldn’t necessarily abolish the ongoing suffering.
It can also be argued that the most reliable way to help the future is to work on existing problems. After all, we can and do know a great deal more about the current world than about the world even ten years from now. There are too many variables, with their interdependencies and dynamics, to get right, and the uncertainty only grows further into the future. We can only speculate about what technologies, power structures, economies, and social dynamics will exist in just one hundred years.[1]
Anyway, one may also object that abolishing suffering isn’t marginal thinking: as Magnus notes in the section with the smallpox analogy, “it’s important to … distinguish what humanity at large should ideally do versus what the, say, 1,000 most dedicated suffering reducers should do with most of their resources, on the margin, in our imperfect world”.
He also writes:
Futures in which the Abolitionist Project is completed, and in which our advocacy for the Abolitionist Project helps bring on its completion, say, a century sooner, are almost by definition not the kinds of future scenarios that contain the most suffering.[2]
On that latter point I may note that focusing on AP isn’t necessarily only about bringing it about sooner: it can also be about reducing the chances of scenarios where AP fails to come about (as opposed to Magnus’ “futures in which the Abolitionist Project is completed”), i.e. scenarios where s-risks aren’t foreclosed. If AP can indeed create a robust future without suffering, then working on the failure modes of AP may be a potential priority for “marginal” suffering reduction.
In this regard, “just” accelerating the completion of AP might in fact make the difference between abolishing and preventing suffering on the one hand and failing to do so at all on the other: for example, there may be a real deadline imposed by civilisational collapse or space colonisation.[3]
In any case, as far as current prioritisation is concerned, both of us seem to agree that finding the best ways of increasing humanity’s concern for the suffering of “others” (of all sentient beings) may be the top priority for the most dedicated suffering reducers at this stage (subject to their personal and professional profiles).[4]
What the best marginal action is beyond, or after, maximal improvement on the common sociopolitical bottleneck is unclear. It partly depends on the outcomes of that initial outreach focus, for example. Regardless, those who agree that the severity of suffering on this planet is already strictly unacceptable may find that even on the margin it’s most effective to focus on abolishing suffering (with a priority for the most neglected beings).
We can expect, for example, that the wild-animal part of AP will remain a marginal cause, since mainstream use of biotechnologies will continue to target predominantly humans and perhaps companion animals. Also, as I mentioned above, if the realisation of suffering abolition is essential for preventing worse-than-present futures, then it may be justified for some of us to focus on neglected scenarios where abolition doesn’t happen.
AP variant
Assuming a robust future without intense suffering is our aim, is AP among the best ways to get there?[5]
I can only say that I don’t know of a better way, and that the approaches humanity has tried haven’t succeeded and arguably cannot. (And no, I don’t think rendering Earth completely uninhabitable can be achieved in practice.)
However, until AP is adopted and carried out, my best guess[6] is a more specific version of AP (denoted “AP^” henceforth), one that is restricted in scope to severe human suffering, the suffering of exploited and wild animals, and possibly compassion bioenhancement.
Already at the initial stage of targeting the sociopolitical bottleneck, advocating for AP^ rather than AP would have the benefit of raising awareness about the worst suffering, including the suffering of non-human animals. AP^ would also be a safer option with respect to the concern, mentioned below, that personal suffering is a requirement for greater empathy.
With its explicit focus on severe, on most accounts pointless, suffering, AP^ may be less controversial than abolishing all forms of suffering. Some people would find such a focus too “negative”, but they could then join other AP-aligned initiatives[7].
Finally, even if AP were part of general civilisational development, or if it were an official program of a big international collaboration, there are reasons to expect that it would be highly human-focused, or would drift in that direction even if it started as “pure” AP. In this regard, simply by existing, AP^ could serve as a reminder for other AP-aligned projects to stay calibrated (though, of course, AP^ itself could drift towards less optimal pursuits if we aren’t careful).
Other comments
Empathy
I share Magnus’ concern that abolishing some forms of suffering in humans may result in reduced empathy or compassion. For without ever having experienced suffering, how would one get the “raw nastiness” of it?[8]
At the same time, personal wellbeing allows for a greater capacity for empathy (for “others’” pain, for example). And it should also be possible to (permanently) amplify empathy itself once the technology is mature enough. As David writes,
… mastery of the biology of emotion means that we’ll be able, for instance, to enlarge our capacity for empathy, functionally amplifying mirror neurons and engineering a sustained increase in oxytocin-release to promote trust and sociability.[9]
Further, David makes an analogous point about intelligence:
… greater intelligence brings a greater cognitive capacity for empathy - and potentially an extended circle of compassion.
Finally, biotechnology can be used to identify the genetic basis of malevolent traits like sadism and psychopathy, and thus inform criteria for preimplantation genetic screening.
That said, by default humanity will probably continue to lean disproportionately towards more “selfish” pursuits like personal comfort and intelligence (by the narrow, IQ-inspired definition[10]). Hence, again, AP^, which prioritises intense suffering and non-humans.
Abolition vs minimisation
As I mentioned in the ““Critical” difference” section, a way to minimise suffering, in the abstract, is to prevent it by achieving a robust future where suffering is impossible by default. Among the most plausible ways of actualising such a future, in my current view, is making (at least severe) suffering biologically impossible via some version of AP, with the ongoing and projected suffering as the best starting target.
So if my thinking is on the whole correct, the “abolition vs minimisation” dichotomy collapses from one end: one way to minimise suffering in expectation is to abolish it for good.
Next, Magnus proposes that the goal of abolishing suffering may be subject to the proportion dominance bias: “our tendency to intuitively care more about helping 10 out of 10 individuals rather than helping 10 out of 100, even though the impact is in fact the same”.
If by “helping” we mean “alleviating one’s suffering”, the impact isn’t the same, for the result is different: in the “10 out of 10” case no one suffers intensely anymore (whereas in the “10 out of 100” case suffering persists). (If Magnus intended a different interpretation of the example, then my point may not apply.)
I would also note that this crucial difference can be missed if one simply “aggregates” suffering across disconnected individuals.[11] (This highlights the importance of being clear about one’s comparison function in suffering minimisation. For example, minimising the “total suffering” across disconnected minds is quite different from minimising the severity of the worst suffering being experienced in the world. Unspecified “minimising suffering” does a lot of work in one’s argument.)
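To make this concrete, here is a minimal sketch (in Python, with severity numbers invented purely for illustration) of how two comparison functions can rank the same pair of interventions differently: by “total suffering”, helping 10 of 10 and helping 10 of 100 look identical, while by the severity of the worst remaining suffering they clearly don’t.

```python
# Toy illustration: two ways of scoring a world's suffering can rank
# the same intervention differently. All numbers are invented.

def total_suffering(world):
    """Suffering summed ("aggregated") across disconnected individuals."""
    return sum(world)

def worst_suffering(world):
    """Severity of the worst suffering experienced in the world."""
    return max(world, default=0)

# "10 out of 10": helping everyone leaves no one suffering intensely.
before_small = [9] * 10            # 10 individuals, each at severity 9
after_small = [0] * 10             # all 10 helped

# "10 out of 100": the same 10 individuals helped, 90 still suffering.
before_large = [9] * 100
after_large = [0] * 10 + [9] * 90

# By total suffering, both interventions reduce the same amount (90):
assert total_suffering(before_small) - total_suffering(after_small) == 90
assert total_suffering(before_large) - total_suffering(after_large) == 90

# By worst suffering, the outcomes differ: abolition vs. persistence.
print(worst_suffering(after_small))   # 0 -> no intense suffering remains
print(worst_suffering(after_large))   # 9 -> intense suffering persists
```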
Moving on, Magnus writes,
Minimizing suffering in expectation would entail abolishing suffering if that were indeed the way to minimize suffering in expectation, but the point is that it might not be. For instance, it could be that the way to reduce the most suffering in expectation is to instead focus on reducing the probability and mitigating the expected badness of worst-case outcomes.
Exclusively focusing on “reducing the probability and mitigating the expected badness of worst-case outcomes” seems to me vulnerable to Pascal’s mugging. For however much importance we assign to suffering abolition, we can always come up with a hellish scenario so vast that its expected suffering warrants, at face value, the utmost priority. Admittedly I’m not a professional researcher, and what I’m saying has probably been addressed ad nauseam: perhaps one should use a probability threshold to exclude sufficiently unlikely scenarios, or perhaps expected value calculations simply aren’t the right tool when huge uncertainties are involved.[12]
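As a toy numeric illustration of that liability (all probabilities and magnitudes below are invented for the example), here is how a naive expected-suffering calculation gets dominated by an arbitrarily unlikely scenario, and how an assumed probability threshold would exclude it:

```python
# Toy illustration of Pascal's mugging in an expected-suffering framework.
# All probabilities and magnitudes are invented for the example.

scenarios = [
    # (description, probability, suffering magnitude if realised)
    ("ongoing severe suffering, near-certain", 0.99, 1e9),
    ("speculative hellish outcome", 1e-15, 1e30),
]

def expected_suffering(probability, magnitude):
    return probability * magnitude

for name, p, magnitude in scenarios:
    print(f"{name}: {expected_suffering(p, magnitude):.3g}")
# The speculative scenario dominates at face value (1e15 vs ~9.9e8),
# however implausible it is.

# One crude response: a probability threshold that excludes sufficiently
# unlikely scenarios (where to set the cutoff is itself a hard question).
THRESHOLD = 1e-10
considered = [s for s in scenarios if s[1] >= THRESHOLD]
print([name for name, _, _ in considered])  # only the near-certain one remains
```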
In the last two paragraphs of the section IV, Magnus raises a concern that complete abolition of suffering can make the broader goal of reducing suffering more vulnerable to objections - “e.g. the objection that completely abolishing suffering seems risky in a number of ways”. He further notes,
… it would be quite a coincidence if the actions that maximize the probability of the complete abolition of suffering were also exactly those actions that minimize extreme suffering in expectation;
As I write above, I have my reservations about the practical significance of the dichotomy between minimising and abolishing suffering, and about unspecified “minimising suffering in expectation”. Above I also acknowledge the potential controversy of abolishing all suffering (although I haven’t thought recently about potential spillover effects of such controversy on the broader mission). Hence, again, AP^, a more restricted version of AP.
Middle ground
In the corresponding section of his post Magnus gives several pros of AP - namely that it may be more motivating to work on and more inspiring for some people and that it shows the feasibility of abolishing suffering. Given his four main counter-considerations, he concludes that it makes most sense to “include an abolitionist perspective in our “communication portfolio”, as opposed to making it our main focus”.
Until we have more evidence, this may be a reasonable middle ground for the community of suffering reducers. When it comes to particular individuals and groups, though, their comparative advantages may justify a more specific focus (see e.g. the Invincible Wellbeing group). (And we do need trailblazers anyway, to gather the evidence that will inform future prioritisation.)
Conclusion
In summary, I agree in principle with many of the points Magnus raises in his post. Still, we don’t seem to converge on our ideal (medium- to long-term) priorities: abolishing intense suffering vs. minimising the chances of “worst-case outcomes” (although, again, I’m sceptical that the dichotomy holds in practice). And this is despite our shared view that preventing severe suffering is of the utmost importance and urgency.
Considering Magnus’ concerns and assuming that ...
- the worst forms of suffering that have happened and will happen if we don’t intervene are already absolutely unacceptable
- a robust future without suffering would also preclude “worse than the present” scenarios (while focusing on them directly wouldn’t necessarily prevent existing and projected severe suffering)
- compassion alone isn’t enough: to address the giant scope of wild-animal suffering and the depths of suffering more generally (and to potentially increase compassion as well), we need biotechnological solutions
- even the medium-term future is already too uncertain and untestable to try to influence at the cost of ignoring known causes of suffering
- if abolition is likely to happen even without significant contributions from dedicated suffering reducers, there’s still a risk of various failure modes
- abolition might have a deadline that is close enough that it won’t happen without acceleration
- abolitionist proof-of-concept projects, in addition to advancing the technical side of AP, could spread a message about the problem of suffering and its solvability
… I suggest AP^, a version of AP restricted to severe human suffering, neglected non-humans, and at least preliminary research into compassion bioenhancement. AP^ could also be seen as a first step of AP, even if it happened in parallel with a more mainstream, human-focused version of AP.
Anyhow, despite disagreements on medium-term (and beyond) prioritisation, effective suffering reduction is arguably constrained most by low awareness about (and thus low concern for) the suffering of “others”, especially wild-animal suffering. Working on this bottleneck should probably remain our common core priority.
Footnotes
To say that we shouldn’t expect to correctly predict or guess the distant future isn’t to imply that we cannot prevent or otherwise mitigate future catastrophes and harmful trends. We arguably can, although unmitigated crises and failures of risk management in general suggest that there’s huge room for improvement. Also, I wouldn’t have proposed building a robust present as a way to preclude horrific futures if I didn’t think we can be prepared for future risks despite the unresolvable uncertainty.
Also, I’m not saying that Magnus’ argument necessarily relies on our being able to predict the distant future. I just want to flag a concern that we might be over-theorising about distant futures at the cost of not solving current problems. ↩︎
Later in his post, he similarly writes,
Talking about futures in which advanced civilization phases out the biology of suffering is already to direct our attention toward relatively good outcomes.
As I reply below in the post, I think that the comparison should rather be with futures where AP doesn’t get enough traction or fails in some other way. ↩︎
Judging by his Quora post about s-risks, David is sceptical about them. One reason is that he is not “especially worried about transhumans radiating out across the Milky Way to spread a capacity for suffering to lifeless solar systems” because “the challenges of colonising alien solar systems and creating pain-ridden ecosystems light-years away are orders of magnitude more formidable than building lunar settlements, or even terraforming Mars”. Still, “if transhumans haven’t phased out the biology of suffering on Earth, then maybe these colonisation risks will be real”.
David doesn’t mention the risk of human-launched directed panspermia there, but I would be curious to hear his thoughts on it, as from my superficial understanding it’s much less technically (and financially) challenging than human spacefaring. ↩︎
That doesn’t mean that more direct interventions like proof-of-concept projects don’t have their place at this stage: promoting successful pilot projects, for example, can also raise awareness about the suffering problem and convey the idea that it’s solvable. ↩︎
While this post was being finalised, David published a post on Quora about his views on the longtermism of the effective altruism movement. There he too makes a case for abolishing suffering as a “longtermist” (and “near-termist”) cause, on various grounds, not only negative consequentialist ones. I recommend reading it. ↩︎
for a priority for dedicated suffering reducers ↩︎
In section “Reasons in favor of prioritizing the Abolitionist Project”, Magnus makes an analogous point about AP’s additionally attracting those who are much more motivated by visions of superhappiness. ↩︎
Interestingly, the woman from Scotland with the FAAH-OUT mutation, which makes her immune to depression, anxiety, and pain, is a vegan. Alas, we cannot draw any conclusions, as this is the only reported case of the mutation as far as I know. ↩︎
I’m not suggesting that we should maximise empathy, at least not without improving critical judgement, for example (see e.g. Paul Bloom’s Against Empathy). Not least because in today’s world “too much” empathy would incapacitate one by the force of “others’” suffering. (David: "If one could even glimpse a fraction of the suffering of this world, one would go completely psychotic.") So, at least in today’s world, we should probably optimise more for compassion, i.e. a concern for “others’” suffering, and a desire to help, without their raw feels. ↩︎
Cf. David’s concept of “full-spectrum superintelligence”. ↩︎
I elaborate more on the concern about utilitarian aggregationism in my critique of classical utilitarianism. ↩︎
Or, apparently, one may simply bite the bullet. As Magnus notes:
... the negative potential of trillions of stars combined with an expected value framework, along with marginal thinking, will often suggest rather unintuitive conclusions. ↩︎