And in case this post wasn't clear: I'm all-in on large language models. They confidently pass my personal test for whether a piece of technology is worth learning:
"Does this let me build things that I could not have built without it?"
What I find interesting is that, on the surface, they look like they solve far more problems than they actually do, partly thanks to the confidence with which they present themselves.
Figuring out what they're genuinely good for is a very interesting challenge.