"[C]onsider the practical implications of the following two moral principles: 1) we will not allow the creation of a single instance of the worst forms of suffering [for] any amount of happiness, and 2) we will allow one day of such suffering for ten years of the most sublime happiness. What kind of future would we accept with these respective principles? Imagine a future in which we colonize space and maximize the number of sentient beings that the accessible universe can sustain over the entire course of the future, which is probably more than 1030. Given this number of beings, and assuming these beings each live a hundred years, principle 2) above would appear to permit a space colonization that all in all creates more than 1028 years of [extreme suffering], provided that the other states of experience are sublimely happy. This is how extreme the difference can be between principles like 1) and 2); between whether we consider suffering irredeemable or not. And notice that even if we altered the exchange rate by orders of magnitude — say, by requiring 1015 times more sublime happiness per unit of extreme suffering than we did in principle 2) above — we would still allow an enormous amount of extreme suffering to be created; in the concrete case of requiring 1015 times more happiness, we would allow more than 10,000 billion years of [the worst forms of suffering]."
https://en.wikiquote.org/wiki/Effective_altruism
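The arithmetic behind the quote's figures can be checked with a minimal sketch (not from the source), assuming the quote's own numbers: 10^30 beings, a hundred-year lifespan each, and principle 2)'s exchange rate of one day of extreme suffering per ten years of sublime happiness.

```python
# Minimal sketch verifying the arithmetic in the quote above.
# All numbers below are assumptions taken from the quote itself.

NUM_BEINGS = 10**30       # sentient beings sustained over the future (from the quote)
LIFESPAN_YEARS = 100      # assumed lifespan per being (from the quote)
DAYS_PER_YEAR = 365.25

# Principle 2): one day of extreme suffering permitted per ten years of happiness,
# so a hundred-year life permits ten such days.
suffering_days_per_being = LIFESPAN_YEARS / 10
total_suffering_years = NUM_BEINGS * suffering_days_per_being / DAYS_PER_YEAR
print(f"Permitted under principle 2): {total_suffering_years:.2e} years")
# ~2.7e28 years, i.e. "more than 10^28 years"

# A 10^15-times stricter exchange rate scales the permitted suffering down by 10^15.
stricter_years = total_suffering_years / 10**15
print(f"With a 10^15-times stricter rate: {stricter_years:.2e} years")
# ~2.7e13 years, i.e. "more than 10,000 billion years"
```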