A major theme in dismissing LLMs as “AI” has been their transactional nature. And it’s quite reasonable to correct popular understanding of how these large models learn and their read-only nature at inference time!

But RAG techniques are already well-established, and there’s no reason to expect we won’t find even more useful architectures, so the “they don’t remember anything, so they can’t be smart” arguments feel naive or disingenuous. Of course, ChatGPT is the only service that matters in public opinion, so maybe its new memory feature will put these arguments to rest.
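The core idea of RAG is simple enough to sketch: retrieve relevant stored text, prepend it to the prompt, and let the frozen model do the rest. Here’s a toy illustration; the bag-of-words similarity and the example “memory” notes are my own stand-ins for the learned embeddings and vector stores real systems use.

```python
# Toy retrieve-then-generate sketch: a read-only model "remembers"
# because retrieved notes are stuffed into its prompt, not because
# its weights change. Similarity here is naive bag-of-words cosine.
from collections import Counter
import math

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Memory": notes persisted outside the model between sessions.
memory = [
    "The user's name is Hans.",
    "The user prefers replies in English.",
]

def augment(prompt: str, k: int = 1) -> str:
    # Rank stored notes by similarity to the prompt, keep the top k,
    # and prepend them as context for the (hypothetical) model call.
    ranked = sorted(memory,
                    key=lambda note: cosine(embed(note), embed(prompt)),
                    reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {prompt}"

print(augment("What is the user's name?"))
```

The point of the sketch is the architecture, not the retrieval quality: the model stays stateless, and “memory” is just an external store consulted at prompt time.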

I posted this in February 2024 during week 2606.

For more, you should follow me on the fediverse: @hans@gerwitz.com