The trouble with these calculations is that they mire us in epistemically tricky terrain. I’m bothered by how quickly the discussions of AI become utopian or apocalyptic. ... It also forces thinking to be obsessively short term. People start losing interest in problems of the next five or ten years, because superintelligence will have already changed everything. The big political and technological questions we need to discuss are only those that matter to the speed of AI development. Furthermore, we must sprint towards a post-superintelligence world even though we have no real idea what it will bring.

From Dan Wang's 2025 annual review letter.