Reflecting on the Long Reflection

“The idea of the long reflection is that of a long period—perhaps tens of thousands of years—during which human civilisation, perhaps with the aid of improved cognitive ability, dedicates itself to working out what is ultimately of value. It may be argued that such a period would be warranted before deciding whether to undertake an irreversible decision of immense importance, such as whether to attempt spreading to the stars.”

Description taken from the Global Priorities Institute research agenda; all quotes from Will MacAskill are taken from his interview on the 80,000 Hours Podcast - you can find more on that here.


The concept of the Long Reflection has been popularised by leading Effective Altruists: both Will MacAskill and Toby Ord place it literally at the centre of their vision of the future. However, looking at the assumptions behind the concept reveals a number of disturbing holes: is there any way humanity could reach a 'Long Reflection' period? Could we sustain it? Could it really discover the way to the 'optimal' future? Without good answers to these questions, the Long Reflection is conceptually untenable. Given that the period is the pivot in Ord's theory of history, we may have to reimagine the distant future.

In his book The Precipice, Ord proposes a three-stage historical process. The first stage is the Precipice, which began in 1945. The Precipice is characterised by the constant threat of extinction, from both natural and anthropogenic risks. The bulk of this risk is anthropogenic: Ord argues that humanity faced only a 1/100 extinction risk in the 20th century (despite close shaves like the Cuban Missile Crisis), but that in the 21st there is a 1/6 chance of extinction. This is of course an unenviable position to take - Ord won't be able to gloat if he is proven right! - but as a way of alerting us to existential risk and seeking to change behaviour, his estimate is hugely compelling. If we can survive the Precipice, MacAskill imagines that we would be able to achieve the Long Reflection:

 "You get to a state where existential risks or extinction risks have been reduced to basically zero. It’s also a position of far greater technological power than we have now, such that we have basically vast intelligence compared to what we have now, amazing empirical understanding of the world, and secondly tens of thousands of years to not really do anything with respect to moving to the stars or really trying to actually build civilization in one particular way, but instead just to engage in this research project of what actually is a value. What actually is the meaning of life? And have, maybe it’s 10 billion people, debating and working on these issues for 10,000 years because the importance is just so great. Humanity, or post-humanity, may be around for billions of years. In which case spending a mere 10,000 is actually absolutely nothing."

Right now, most of the danger to humanity comes from extinction risks, or x-risks. Eliminating them is a necessary condition for producing a 'good' outcome for humanity, but not sufficient. We also have to worry about suffering risks, or s-risks: the chance that we create a society with astronomical amounts of suffering, such as, say, a galactic slave-owning oligarchy. The Long Reflection would seek to reduce these s-risks by thinking carefully about the sort of societies we want to live in, and which our choices are likely to create. However, doing so assumes that we can execute an order of operations where we solve all x-risks before taking any decisions which might increase s-risk: that is, it is possible to get to "a state where existential risks or extinction risks have been reduced to basically zero" without already having condemned ourselves to a hellishly dystopian future.  

Achievement

Can we actually eliminate x-risks without taking any momentous and irreversible decisions, such as space colonisation, contact with alien civilisations, altering the human genome, geo-engineering, and so on? Take the best-understood of Ord's x-risks: nuclear war. The technology is comparatively mature, and the real danger comes from political dynamics. In order to eliminate the risk from nuclear war, we would need radically different political and governmental structures - perhaps a global government, or a global hegemon with a monopoly on nuclear power. But creating a geopolitical order like this would be exactly the sort of decision that the Long Reflection would have to consider. Political changes may also be necessary to eliminate other x-risks, such as pandemics and unaligned AGI.

In addition to the question of politics, MacAskill entertains the possibility that humans will have intellectually augmented themselves before the Long Reflection: "I actually think if we did that and if there is some correct moral view, then I would hope that incredibly well informed people who have this vast amount of time, and perhaps intellectually augmented people and so on who have this vast amount of time to reflect would converge on that answer". The desirability of human augmentation or transhumanism must be one of the main questions the Long Reflection would investigate - but if we have to fundamentally change our own humanity in order to conduct the Reflection, might we not already have made the best possible future impossible?

I find it hard to believe that it is possible to eliminate all x-risks without significantly increasing s-risks, or, at the very least, without taking actions which would affect s-risk in ways we could not predict. If we were to reach the Long Reflection, we might look back and realise that decisions we made to get there were regrettable, in much the same way that we might look back now over the damage our continuing Industrial Revolution has done to the environment.  The question, then, is whether decisions made during the Precipice would turn out to be reversible during or after the Long Reflection. Environmental damage may be fixable: but it seems to me that the same is not true of transhumanism and certain political reforms. 

Stability

MacAskill's Long Reflection is far from a natural consequence of technological development and political dynamics. Instead, it seems really hard to achieve and sustain. For some people - perhaps the philosophers given time to think and read - it would be wonderful. But a significant number of individuals and groups would be forced to sacrifice short-term gains (perhaps from developing novel technologies) in order to maximise long-term outcomes for humanity. MacAskill's Long Reflection might be the best way to maximise the total utility of all humans who will ever live, but it must be suboptimal for many of the people who live through it. Is that moral? Alexander Herzen, a Russian radical who wrote an ‘obituary’ for the revolutions of 1848, didn’t think so.

“If progress is the goal, for whom are we working? Who is this Moloch who, as the toilers approach him, instead of rewarding them, draws back; and as a consolation to the exhausted and doomed multitudes, shouting Morituri te salutant, can only give the mocking answer that after their death all will be beautiful on earth. Do you truly wish to condemn the human beings alive today to the sad role of caryatids supporting a floor for others some day to dance on, or of wretched galley slaves who, up to their knees in mud, drag a barge with the humble words ‘progress in the future’ upon its flag? A goal which is infinitely remote is no goal, only a deception; a goal must be closer – at the very least the labourer’s wage, or pleasure in work performed.”

Hence when MacAskill dreams of "10 billion people, debating and working on these issues for 10,000 years", I am reminded of Giuseppe Tomasi di Lampedusa's dictum in The Leopard: "Se vogliamo che tutto rimanga come è, bisogna che tutto cambi"; 'if we want everything to stay the same, then everything will have to change'. How could we expect 500 generations to sacrifice their own self-interest for people further removed from them than we are from the builders of the Pyramids? Perhaps the only way we could sustain anything more than a Brief-Reflection-before-we-do-what-we-were-going-to-do-anyway would be to commensurately extend human lifespans, so that the majority of those taking part in the Reflection would live to see the end of it. But again, extending human lifespans on that scale is exactly the sort of issue that the Long Reflection would consider!

Race dynamics and first-mover advantages might make it rational for individuals or groups to 'break' the Long Reflection before it was completed. For example, if one believes that the reward for being the first person or group to explore the galaxy would outweigh the reduction in s-risk provided by waiting for the completion of the Long Reflection, then it would be selfishly rational to break it, just as it may have been rational for the Allies to develop nuclear weapons, despite the demonstrable risk to humanity they created, in order to deny them to the Axis powers. Further, if it were widely known that breaking the Long Reflection was selfishly rational, the incentive would feed on itself: the greater the probability that someone breaks it, the lower the expected value of waiting for its completion, and the more attractive defection becomes - until everybody knows that the probability of the Long Reflection being completed is effectively zero.
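To make that unravelling concrete, here is a toy expected-value sketch. The payoff figures and the names `VALUE_IF_COMPLETED`, `FIRST_MOVER_PAYOFF` and `break_is_rational` are all invented for illustration; nothing here is drawn from MacAskill or Ord.

```python
# A toy model of the race dynamic described above. All numbers are invented
# purely for illustration; the point is the feedback loop, not the magnitudes.

VALUE_IF_COMPLETED = 100.0   # assumed payoff to an actor if the Reflection completes
FIRST_MOVER_PAYOFF = 30.0    # assumed private prize for breaking it first

def break_is_rational(p_completion: float) -> bool:
    """Defecting pays whenever the first-mover prize exceeds the expected
    value of waiting for the Reflection to finish."""
    return FIRST_MOVER_PAYOFF > p_completion * VALUE_IF_COMPLETED

threshold = FIRST_MOVER_PAYOFF / VALUE_IF_COMPLETED
print(f"Defection becomes rational once p(completion) falls below {threshold:.0%}")

# The unravelling: the more likely a defection looks, the lower p(completion),
# which pushes it below more actors' thresholds, lowering it further still.
for p in (0.9, 0.5, 0.25, 0.1):
    print(f"p(completion) = {p:.2f} -> break_is_rational = {break_is_rational(p)}")
```

None of this tells us what the real payoffs are; it only shows why common knowledge of the incentive is itself corrosive.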

In order to avoid these dynamics, authoritarian political institutions would have to be developed which could prevent individuals and groups from acting in their own rational self-interest. Again, it seems that the Long Reflection can only be brought about after the decisions it is intended to scrutinise have already been taken. 

The best counter-argument to this, I think, is that the Long Reflection won't be so long after all. The 'simulation argument' points out that the total number of mental computations done by all the humans who have ever lived is of the order of 10^36; but it is physically possible for a planet-sized supercomputer to perform 10^42 such operations per second. With sufficiently advanced technology and sufficiently big computers, it will be possible to perform 'ancestor simulations' - those 10^36 computations - relatively quickly and easily. If, then, the answers the Long Reflection seeks can be found simply by throwing compute at the right AI system, it may be the case that our very own Deep Thought can converge on a solution in the space of a few days, rather than 7.5 million years!
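For a sense of scale, here is a back-of-envelope calculation using just those two quoted figures; the variable names are placeholders and the numbers are, of course, wildly speculative.

```python
# Back-of-envelope arithmetic using the figures quoted above; purely
# illustrative, not a forecast of what such hardware could actually do.

total_human_mental_ops = 1e36        # rough total of all human mental computation to date
planet_computer_ops_per_sec = 1e42   # hypothetical planet-sized computer's speed

seconds_needed = total_human_mental_ops / planet_computer_ops_per_sec
print(f"Replaying all human thought to date: ~{seconds_needed:.0e} seconds")
# ~1e-06 seconds: on these numbers, the hard part is building the machine,
# not running the 'Reflection' on it.
```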

If we could complete the Long Reflection in the space of a few days, we would avoid some of the race dynamics and political problems discussed above. But for two reasons, that doesn't seem to me to be a great solution. First, we would still need to build 'Deeper Thought' in the first place - and we might think of the process of building 'Deeper Thought' as analogous to the Long Reflection itself. In the intervening period, we might still set ourselves on a path that leads to dystopia, or condemn ourselves to being enslaved by an alien civilisation; we still have to worry about the order of operations. Second, even if it is true that building a big enough computer will make a 'Reflection' possible by removing the need to pause technological development for 10,000 years, there still wouldn't be a Long Reflection as Ord describes it. Professor Ord himself would probably be very pleased if that turned out to be the case - but it would still destroy the central plank of his theory of history. 

This essay's main argument so far is that we cannot eliminate all x-risks before taking any decisions which affect s-risk; we cannot achieve a stable state in which to consider the choices which will determine the future of humanity unless a significant number of those choices have already been made. The rest of the essay will ask whether there is any information that only the Long Reflection could discover: would it even be worthwhile?

Purpose 

MacAskill wants the Long Reflection to solve what Nick Bostrom calls our current 'axiological uncertainty'.

“If you really appreciate moral uncertainty, and especially if you look back through the history of human progress, we have just believed so many morally abominable things and been, in fact, very confident in them. [...] Even for people who really dedicated their lives to trying to work out the moral truths. Aristotle, for example, was incredibly morally committed, incredibly smart, way ahead of his time on many issues, but just thought that slavery was a pre-condition for some people having good things in life. Therefore, it was justified on those grounds. A view that we’d now think of as completely abominable. That makes us think that, wow, we probably have mistakes similar to that. Really deep mistakes that future generations will look back and think, ‘This is just a moral travesty that people believed it.’”

The assumption here, I think, is that the environment in which a philosopher works can only reduce the value of their work: contingency is bad for philosophy. Aristotle's Politics would have been better, in the sense that it wouldn't have argued (however half-heartedly) for slavery, if he had not lived in a slave society. There also seems to be an expectation that moral philosophy is mostly computation: that if we have more and smarter people thinking for longer, we get more and better philosophy done. If this is true, then throwing more computing power at the problem will solve it more quickly - possibly within a few years. These assumptions probably hold for pure maths - but it's less clear that they hold for philosophy.

Raphael’s The School of Athens (1511), featuring Plato (in red) walking with Aristotle (in blue)

It doesn't seem true to me that philosophers work better in a vacuum. It's not a new idea that presentism - having one's writing excessively determined by contemporary events - might be bad: Thucydides, writing about the Peloponnesian War in the fifth century BC, also sought to avoid it. Most works, he wrote, are an agonisma es to parachrema, an 'essay for the present moment'; he wanted to write a ktema es aiei - a 'possession for all time'. Yet Thucydides couldn't have written his 'possession for all time' unless he had lived in a particular place at a particular time, under particular constraints and pressures. If the weaknesses in great works of philosophy are caused by the circumstances of their creation, maybe the same is true of their strengths? If that were true, we couldn't expect to 'solve moral philosophy' just by doing it in a vacuum; all 'possessions for all time' start off as 'essays for the present moment'.

Even if it were true that perfect philosophy can be done in a vacuum, the Long Reflection wouldn't be one. If Aristotle defended slavery because slavery was necessary to sustain his society, might not the philosophers of the Long Reflection also blindly defend their own society? What if we can only reach the Long Reflection by creating and then destroying trillions of AI consciousnesses in some advanced form of machine learning; would philosophers then assign those consciousnesses moral value? How could the Long Reflection optimally assess the moral value of space colonisation or exploitation if those decisions were necessary for its existence? 

MacAskill envisions the Long Reflection providing a final answer to the political and moral questions which have been asked throughout human history, not least Aristotle's question at the start of the Politics: how can men best live together?

"The key idea is just, different people have different sets of values. They might have very different views for what does an optimal future look like. What we really want ideally is a convergent goal between different sorts of values so that we can all say, “Look, this is the thing that we’re all getting behind that we’re trying to ensure that humanity…” Kind of like this is the purpose of civilization. The issue, if you think about the purpose of civilization, is just so much disagreement. Maybe there’s something we can aim for that all sorts of different value systems will agree is good. Then, that means we can really get coordination in aiming for that."

Maybe there is something we can aim for. It seems plausible that there is some kind of greatest common denominator between value systems, perhaps the Golden Rule or natural law. But the Golden Rule is just a basic heuristic, the kind of thing we teach to five-year-olds, while natural law as a basis for society is both archaic and controversial. The Long Reflection wouldn't discover a greatest common denominator - it'd have to identify a global maximum, an organisation of society which would maximise human flourishing. But if values and value systems can be both intrinsically choice-worthy and mutually exclusive, then no real agreement will be possible - instead, there will be a number of possible ways to organise society, each with its own peculiar character, each more or less preferable to different rational actors.

I've talked about Isaiah Berlin and AI safety in another essay, but the point is relevant again here. The two value systems which are most foundational to Western intellectual culture, the Classical and the Christian, both advocate self-evidently 'good' values and moral rules. Yet even these two systems, with their interwoven histories, have proven over the course of two thousand years of history to be irreconcilable. It has not proven possible to create a state according to the Classical ideal without systematically violating Christian moral norms. In fact, even within the Classical tradition, the examples of Athens (broad citizenship, broad political participation, negative liberty) and Sparta (limited citizenship, rule by kings and an aristocracy, positive liberty in the form of compulsory military service) show that moral and social goods may be mutually exclusive and even contradictory. Thus when calculating the expected value of the Long Reflection, we don't just have to factor in the probability that it will be broken by a selfish actor; we also have to consider the probability that what Berlin calls a "unifying monistic pattern" doesn't exist at all.
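Put crudely, the point is multiplicative: doubt about either factor slashes the whole product. The sketch below uses invented probabilities purely to illustrate the shape of the calculation, not to estimate it.

```python
# Illustrative only: the expected value of staking everything on a Long
# Reflection multiplies together the ways it could fail. Numbers invented.

p_not_broken = 0.5       # assumed chance no actor defects before completion
p_answer_exists = 0.5    # assumed chance a "unifying monistic pattern" exists at all
value_if_found = 1.0     # normalised value of finding and acting on the answer

expected_value = p_not_broken * p_answer_exists * value_if_found
print(f"Expected value of the Reflection: {expected_value:.2f}")  # 0.25
```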

I'm struggling to see the Long Reflection as anything other than impossible and pointless: impossible in that we cannot eliminate all x-risks before taking decisions which affect s-risk, or avoid the race dynamics; pointless in that I don't believe there is a great Answer for it to discover. But perhaps the Long Reflection is pointless in another sense: it could be an end in itself. If we genuinely could engage in a collective philosophy project for 10,000 years, why would we ever want to stop?

With thanks to Gregory Lewis, Peter Wallich, Trenton Bricken and Charlie Griffin for their help with this piece.
