It’s not the same argument at all. They’re not trying to stop dangerous AI by creating dangerous AI; they’re trying to create AGI safely in the first place. It’s going to happen one way or another, and if it’s made safe the first time around, that precedent means it will probably be safe every time after. It’s easier to…