Discussion about this post

James

Isn’t the real answer — for humans, for models, for any good simulators — just to have a simplicity prior?

Silas Abrahamsen

Very interesting post! These problems remind me a lot of the problem of induction/underdetermination of evidence. And as James suggests, it seems to me that some kind of rational restriction on priors is probably what should do the work (I don't know how that'd necessarily translate into LLMs).

Perhaps a deeper worry is related to the point you make about separating rules from meaning. I don't have very strong views on this, and haven't read too much about it, but it seems to me that meaning somehow consists in representing the way the world would have to be for a statement to be true. And I think that might be quite closely tied to sensory experience. (When saying "the cat is on the mat," I represent some set of observations that would make me deem it true. That's probably not the whole story, but it seems like an aspect, and maybe a necessary condition, of meaning.) So I wonder whether LLMs will ever be able to get "out there" and represent the world in any way, so long as they don't have any sensory input.

Just a few loose thoughts; I don't know how plausible they are.

