
One of the things I find so interesting about large language models like GPT-3 and ChatGPT is that they're pretty much the world's most impressive party trick.

All they do is predict the next word based on the previous context. It turns out that when you scale the model above a certain size it can give the false impression of "intelligence", but that's a total fraud.
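To make "predict the next word based on previous context" concrete, here's a deliberately tiny sketch of the same interface using bigram counts over a toy corpus. The corpus, the `predict_next` helper, and everything else here are illustrative inventions, not anything from a real LLM — real models use neural networks over subword tokens and far richer context, but the shape (context in, most likely next token out) is the same idea.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then predict by picking the most frequent follower. Purely illustrative.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than any rival
```

The point of the toy: nothing here "knows" anything — it just emits the statistically likeliest continuation, which is why the output can sound plausible while being wrong.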

It's all smoke and mirrors! The intriguing challenge is finding useful tasks you can apply them to in spite of the many, many footguns.

And in case this post wasn't clear: I'm all-in on large language models. They confidently pass my personal test for whether a piece of technology is worth learning:

"Does this let me build things that I could not have built without it?"

What I find interesting is that - on the surface - they look like they solve a lot more problems than they actually do, partly thanks to the confidence with which they present themselves.

Figuring out what they're genuinely good for is a very interesting challenge.

@simon Ha! Super true. People incorrectly put faith in ChatGPT for the same reason they incorrectly put faith in me when I'm on pub quiz teams. We just can't help but speak with undue confidence. I'd imagine that ChatGPT would also be a frustrating quiz team mate.

@wichitalineman Hah, I really love that model of ChatGPT as a pub quiz team member who's often right, sometimes wrong, but expresses complete confidence in their answer either way!

@simon @wichitalineman I work in ML/AI as my full-time job, and the authoritative tone in which these LLMs communicate, coupled with the lack of proper sourcing, is truly worrying for many people on my team. It will be interesting to see how Google goes about injecting LaMDA into search results, given the lessons they have learned over the course of the company's life.

@arthur @simon

Based on Google's and other big tech firms' previous behaviour, my guess would be:

1) LaMDA will have hard-to-predict consequences
2) Some of those consequences will be harmful
3) Google will drag their feet when acknowledging/taking accountability for the harm

@wichitalineman @simon 4) Google will never really take accountability and responsibility for the harm and the world will keep on turning and they will keep on selling ads.