> It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.
This regression towards the mean is still very much a feature of the newer models, in my experience. I don't see how a model that predicts the most likely word based on previous context + corpus data could possibly not have some bias towards non-novelty / banality.
> Let's follow one example: Nigeria is the most populous country in Africa. In Abstract Wikipedia, this might be stored as: Z27243(Q1033, Q138758272, Q6256, Q15, Z27243K5)
Haha that's like John Wilkins' "Real Character, and a Philosophical Language"
It's not that different from how LLM tokens work, only in a tree structure as opposed to a flat sequence. Having a tree structure makes it easier to formally define rewrite rules (which is key for interpretability), as opposed to learning them from data as LLMs do.
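A minimal sketch of that contrast, using the Nigeria example from above. All the names here (the `Node` class, the `Superlative` label, the rule function) are invented for illustration; the point is only that a tree-shaped representation lets a rewrite rule be a single, inspectable function rather than something learned from data:

```python
# Toy tree-structured representation with a hand-written rewrite rule.
# Everything here is hypothetical -- not Abstract Wikipedia's actual API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    label: str
    children: tuple = ()

def superlative_rule(tree):
    """Rewrite Superlative(subject, adjective, domain) into English.

    One explicit, inspectable rule -- the interpretability argument
    in a nutshell: you can read off exactly why the output is what it is.
    """
    if tree.label == "Superlative" and len(tree.children) == 3:
        subject, adjective, domain = (c.label for c in tree.children)
        return f"{subject} is the most {adjective} country in {domain}."
    return None  # no rule matched this tree shape

tree = Node("Superlative", (Node("Nigeria"), Node("populous"), Node("Africa")))
print(superlative_rule(tree))  # Nigeria is the most populous country in Africa.
```

A learned sequence model can produce the same sentence, but there is no single place you can point to and say "this is the rule that generated it."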
Also, tokens don't represent meaning in themselves; they are assigned points in a multidimensional space, and they can only represent meaning in the network as a whole, combined with other tokens in context and order.
And the abstract concepts of Abstract Wikipedia are human-defined, top-down ways of carving the world into distinct categories that make some kind of logical sense, whereas LLMs work bottom-up and create overlapping, non-hierarchical, probabilistic networks of connections, with nearly no imposed structure except the principle that you shall know a token by the company it keeps.
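The "company it keeps" principle can be illustrated with a toy distributional model: count each word's context neighbors in a tiny corpus and compare the resulting vectors. The corpus and window size here are made up for the example; real LLMs learn dense embeddings rather than raw co-occurrence counts, but the bottom-up idea is the same:

```python
# Toy distributional semantics: a word's "meaning" is just its
# distribution over contexts. Corpus and window are invented.
from collections import Counter
from math import sqrt

corpus = ("the cat sat on the mat the dog sat on the rug "
          "a cat chased a mouse a dog chased a ball").split()

def context_vector(word, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            lo = max(0, i - window)
            counts.update(corpus[lo:i] + corpus[i + 1:i + window + 1])
    return counts

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# "cat" and "dog" keep similar company, so their vectors end up close --
# nobody defined a top-down category containing them both.
print(cosine(context_vector("cat"), context_vector("dog")))
```

Nothing hierarchical or mutually exclusive falls out of this: "cat" can be close to "dog" and to "mouse" at the same time, in overlapping ways.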
But you can type them both out with keys on a keyboard so in that sense I guess they're not that different.
> “Any distributed system based on exchanging data will be replaced by a system based on exchanging programs.”
So distributed systems tend to converge towards being more and more mystifying? Cf. The Mythical Man-Month:
> Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.
I've tried figuring out what the big deal about cybernetics was, but I always come away with a feeling of it being a bit wishy-washy. Is it a bit like philosophy, in that it birthed individual fields that were inspired by and made applications of the thoughts, models and ideas laid out by its forebears? Or were there actual proofs, discoveries or applications in the field itself?
The problem isn't getting an AI agent running in a sandbox. That's trivial. The problem is getting an existing enterprise project runnable inside the sandbox too, with no access to production keys or data or even the test-db-that-is-actually-just-a-copy-of-prod, but with access to mock versions of all the various microservices and APIs that the project depends on.
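One piece of that puzzle can be sketched in a few lines: stand up a throwaway in-process mock of an internal dependency, and point the sandboxed project at it instead of the real service. The endpoint and payload (`/users/42`) are invented for illustration; a real setup would need one such stand-in per dependency, which is exactly where the work is:

```python
# Hedged sketch: a disposable mock of one internal HTTP dependency,
# so sandboxed code never touches production. Endpoint/payload invented.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockUserService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Always serve canned, non-sensitive test data.
        body = json.dumps({"id": 42, "name": "Test User"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep sandbox logs quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MockUserService)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The project under test would be configured with this URL instead of prod.
url = f"http://127.0.0.1:{server.server_port}/users/42"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
print(data)  # {'id': 42, 'name': 'Test User'}
server.shutdown()
```

The hard part the comment describes isn't this mechanism; it's cataloguing every microservice, queue, and API an enterprise project actually depends on, and keeping dozens of such mocks faithful enough to be useful.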