Yeah, it doesn't make much sense in the long run, even if you're a utilitarian who values the well-being of every human equally. Although, if we take your idea to its logical extreme, shouldn't we aim to replace humans with high-IQ, not-necessarily-friendly AI?