A friend made a fairly common speculation about LLMs eventually being good enough to replace or obsolete most information-industry workers. I am not too concerned about this. I think that LLMs will evolve to the point where they become a ubiquitous and indispensable tool, but there's a limit to how good LLMs can get at our civilization's current technology level.
Microsoft is literally investing in nuclear power - that gives you an idea of the energy requirements for training these models. Training-data requirements are also exploding: the data needed scales in lockstep with model size, and model sizes have been growing geometrically from one generation to the next. Worse, if you feed AI-generated data back into training instead of genuine human data, the models quickly degenerate into garbage output - a failure mode now known as model collapse. Given that AI is already being used to generate vast amounts of public content on the web, which is the primary source of training data, this is a serious problem. Finally, the legal framework around training data is getting tighter (the NYT lawsuit against OpenAI is a masterpiece). The best training data will end up being licensed - and not for cheap.
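The degeneration effect is easy to demonstrate with a toy experiment. The sketch below is a deliberately crude stand-in for an LLM - a Gaussian fit instead of a neural network, and the sample size and generation count are arbitrary illustrative choices - but it captures the feedback loop: each "generation" is trained only on the previous generation's output, with no fresh human data entering the loop.

```python
import random
import statistics

# Toy demonstration of model collapse: each "generation" fits a Gaussian
# to data sampled from the previous generation's fit. No fresh human data
# enters the loop, so estimation error compounds and diversity is lost.
# (Sample size and generation count are arbitrary illustrative choices.)

random.seed(42)
SAMPLE_SIZE = 50  # training examples per generation (small, to speed up collapse)

# Generation 0: genuine "human" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLE_SIZE)]

for generation in range(1, 201):
    # "Train" a model: estimate the distribution's parameters from the data.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    # "Publish" synthetic output, which becomes the next training set.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLE_SIZE)]
    if generation % 40 == 0:
        print(f"generation {generation:3d}: learned sigma = {sigma:.4f}")

# The learned sigma shrinks toward zero: the model ends up producing
# nearly identical outputs, having forgotten the variety in the original data.
```

Real LLM training is vastly more complicated, of course, but the loop is the same shape: a model trained on its own (or its peers') output sees an ever-narrower slice of the original distribution.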
The practical upshot of all of this is that LLMs (and for similar reasons, their generative image counterparts) have a ceiling beyond which it won't make economic sense to develop bigger and bigger models. There may be a ChatGPT 6, but there won't be a ChatGPT 10.
The underlying technology is itself ripe for disruption, however. The entire industry is only 10 years old! Whatever replaces LLMs will have to be faster and cheaper to train, and trainable on far less data. Otherwise the technology will stagnate and settle at an equilibrium governed more by economics than by technological advancement.
Human knowledge workers are probably safe for a generation, at least. In the meantime, AzizGPT will always beat ChatGPT.