The issue you sometimes see with LLMs is that they're trained on basically all the text humans have ever produced, so when we ask them questions they just repeat our own sci-fi anxieties back to us. (I think we'll need more than the LLM paradigm to get to AGI; we'll need something more reinforcement-learning-ish…