> The problem within that is whether the universe invariably selects for power and the ability to occupy matter and space. If the answer is yes, it seems like “goodness,” as we conceive of it, might not have a place in the limit. This is because any resource dedicated to goodness—things like rest, beauty, or joy—is a resource that could have been dedicated to power, control, or expansion. If you have two systems starting out with equal resources, the one that invariably chooses power and growth will, in the fullness of time, eclipse any system that chooses to allocate even a few resources towards goodness (or indeed anything that’s not power-and-growth).
I think this assumes a zero-sum framework. What if the maximally power-seeking strategy is mostly good as well? (I recently read Robert Axelrod's *The Evolution of Cooperation*, an older game theory book arguing that cooperation is the winning strategy in iterated prisoner's dilemmas / certain kinds of multipolar traps, so that's the framework I'm operating from.)
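To make that concrete, here's a minimal sketch of the kind of round-robin tournament Axelrod describes. The payoff matrix, strategy set, and round count here are my own illustrative choices, not the book's exact parameters; the point is just that retaliatory-but-cooperative strategies come out on top once there are enough other cooperators in the pool:

```python
# Minimal iterated prisoner's dilemma tournament, in the spirit of
# Axelrod's setup. Payoffs, strategies, and round count are
# illustrative assumptions, not the book's exact parameters.

PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_hist else their_hist[-1]

def grudger(my_hist, their_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in their_hist else "C"

def always_cooperate(my_hist, their_hist):
    return "C"

def always_defect(my_hist, their_hist):
    return "D"

def play_match(a, b, rounds=200):
    """Play `rounds` iterations; return (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a, hist_b), b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

STRATEGIES = [tit_for_tat, grudger, always_cooperate, always_defect]

totals = {s.__name__: 0 for s in STRATEGIES}
for i, a in enumerate(STRATEGIES):
    for b in STRATEGIES[i:]:  # round-robin, self-play included once
        sa, sb = play_match(a, b)
        totals[a.__name__] += sa
        if a is not b:
            totals[b.__name__] += sb

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}")
```

The suggestive detail: always_defect wins its head-to-head match against tit_for_tat (it exploits the opening cooperation), yet finishes last in total score, because pure power-seeking forfeits all the surplus that mutual cooperation generates. Winning every fight is not the same as winning the tournament.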
Hmm, I'm not sure. I still think that if moral realism is true, it's likely to be motivating, meaning that you get it in the limit (assuming rational agents, which I think you should expect; as Joe notes, the locusts are going to need to be really smart to capture the universe). If moral realism is false, or true but not motivating (I never really understood what it would mean for morality to be real but not motivating, but whatever), then it doesn't matter very much. Nothing does. Especially after you die.
Joe's response (that selection could weaken that motivation) just doesn't make much sense to me. The typical story for moral realism is that moral motivation follows from being rational (the word "rational" is doing a lot of work here, but whatever). I find it hard to conceive of how it could be advantageous to be less rational in the limit; isn't recognizing and exploiting that just meta-rationality? And shouldn't that then just be the good stuff?
If you want to say, on the other hand, that moral motivation isn’t required for rationality, I think you’re going to have a really hard time explaining why the moral thing is the thing that you “ought” to do.
I just have preferences about how things should be (including after I die), and I think I can help minds similar to mine realize that we want the same thing; this may not apply to minds very unlike mine.
As for Joe's point: you could have incredibly smart beings who are the equivalent of heroin addicts, the material (the biological) conspiring against the rational. I'm not sure how to think about that against the backdrop that moral realism is already doing at least some equivocation on "rationality."