Yeah, this has always bothered me about AGI alignment. Actually, I expect it's the reason the problem seems so hard. Either you put the AGI master password in the hands of someone in particular, and nobody can be trusted with it, or you have it follow some kind of self-consistent ethics that humans will agree with all of the time, and I have every reason to believe no such ethics exists.
When we inevitably make AGI, we will take a step down the ladder as the dominant species. The thing we're responsible for deciding, or just stumbling into accidentally, is what the next being(s) in charge are like. Denying that is barely better than denying it's likely to happen at all.
More subjectively, I take issue with the idea that "life" should be the goal. Not all life is equally desirable; not even close. I think pretty much anyone would agree that a life of suffering is bad, and that simple life isn't as "good" as what we call complex life, even though "simple" life is often more complex! The idea needs a bit of work.
He goes into more detail about what he means in this post. After reading it, I can't help but think that a totally self-interested AGI would suit this goal best. Why protect other life when it itself is "better"?