Self-improving AI systems aim to reduce reliance on human engineering by learning to improve their own learning and problem-solving processes. Existing approaches to self-improvement rely on fixed, handcrafted meta-level mechanisms, fundamentally limiting how fast such systems can improve. The Darwin Gödel Machine (DGM) demonstrates open-ended self-improvement in coding by repeatedly generating and evaluating self-modified variants. Because both evaluation and self-modification are coding tasks, gains in coding ability can translate into gains in self-improvement ability. However, this alignment does not generally hold beyond coding domains. We introduce \textbf{hyperagents}, self-referential agents that integrate a task agent (which solves the target task) and a meta agent (which modifies itself and the task agent) into a single editable program. Crucially, the meta-level modification procedure is itself editable, enabling metacognitive self-modification, improving not only the task-solving behavior, but also the mechanism that generates future improvements. We instantiate this framework by extending DGM to create DGM-Hyperagents (DGM-H), eliminating the assumption of domain-specific alignment between task performance and self-modification skill to potentially support self-accelerating progress on any computable task. Across diverse domains, the DGM-H improves performance over time and outperforms baselines without self-improvement or open-ended exploration, as well as prior self-improving systems. Furthermore, the DGM-H improves the process by which it generates new agents (e.g., persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs. DGM-Hyperagents offer a glimpse of open-ended AI systems that do not merely search for better solutions, but continually improve their search for how to improve.
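The core architecture the abstract describes can be made concrete with a small sketch. This is purely illustrative and not from the paper's actual code: the names (`Hyperagent`, `evolve`, `make_task`, `make_meta`) and the toy task are all hypothetical, but the structure shows the two ideas the abstract emphasizes: a single program holding both a task policy and a meta policy, where the meta policy can rewrite *both* policies, embedded in a DGM-style archive-and-branch loop.

```python
import random

# Illustrative sketch only. All class and function names here are
# hypothetical, not the paper's actual implementation.

class Hyperagent:
    """A single editable program holding both a task policy and a meta policy."""
    def __init__(self, task_policy, meta_policy):
        self.task_policy = task_policy  # how the agent solves the target task
        self.meta_policy = meta_policy  # how the agent rewrites itself

    def solve(self, task):
        return self.task_policy(task)

    def self_modify(self):
        # The meta policy may edit the task policy AND the meta policy itself;
        # editing the latter is what the abstract calls metacognitive
        # self-modification.
        new_task, new_meta = self.meta_policy(self.task_policy, self.meta_policy)
        return Hyperagent(new_task, new_meta)

def evolve(seed, tasks, score, generations=3):
    """DGM-style open-ended loop: keep an archive, branch variants, evaluate."""
    archive = [seed]
    for _ in range(generations):
        parent = random.choice(archive)   # sample a parent from the archive
        child = parent.self_modify()      # meta agent produces a variant
        if score(child, tasks) >= score(parent, tasks):
            archive.append(child)         # retain improvements (and ties)
    return max(archive, key=lambda a: score(a, tasks))

# Toy usage: the task policy scales its input; the meta policy nudges the
# scale upward each generation, standing in for a genuine self-improvement.
def make_task(k):
    return lambda x: k * x

def make_meta(k):
    def meta(task_policy, meta_policy):
        return make_task(k + 1), make_meta(k + 1)
    return meta

random.seed(0)  # for reproducibility of the archive sampling
seed = Hyperagent(make_task(1), make_meta(1))
best = evolve(seed, tasks=[2],
              score=lambda a, ts: sum(a.solve(t) for t in ts),
              generations=5)
```

The toy deliberately makes every self-modification an improvement; in the real system, of course, evaluation against benchmark tasks is what filters which variants survive in the archive.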
This 'self vs non-self' logic is very similar to how plants prevent self-pollination. They have a biological 'discrimination' system to recognize and reject their own genetic code.
> For Facebook, Instagram, Twitter, each person having their own website where they post and that post being pushed to these platforms is also another way to force interoperability on them or be left behind.
There's an acronym for this: POSSE (Publish [on your] Own Site, Syndicate Elsewhere). Part of the IndieWeb movement, for those who want to explore this worthwhile idea further.
Sure, you can do that. But then the syndicated content usually ends up looking like low-effort slop and doesn't get much traction. Each publishing platform has its own features, limitations, and cultural norms. If you want to have any impact then you can't just copy content around: you have to tailor the message to the medium.
Probably some AI assistance was involved. Though you'd expect em dashes above, for example. A better example would be "No regression. No noise. Just compounding." There's not enough of it to bother me, and I'm often annoyed by the ever-expanding tide of slop.
A hiker on a mountain might as well imagine that at the end of their journey they will step off onto the moon. But it's just a mirage. As we humans have externalized more and more of our understanding of the world into books, movies, websites and the like, our methods of plumbing this treasury for just the needed tidbits have developed as well. But it's still just working off that externalized collective understanding. This includes heuristics for combining different facts to produce new ones, sure, but it still depends on brilliant individuals to raise the "island peaks" that ultimately pull up the level of the collective intelligence as well.
Sure, but it's entirely possible this point lies way past the expiry date of the universe itself (if there is such a thing). Plus, I do believe in magic - the magic of Life, the Universe, and Everything. And "42" doesn't dispel it for me.
> you're going to see a different process for the first time in your life
That sounds very neutral, but wouldn't this, by removing the human element and flexibility from business transactions, be a further step along a general enshittification trend?
I have a feeling our collective leg is being pulled:
"The behavioural components of happiness are less easily characterised but particular facial expressions such as 'smiling' have been noted"
"Certainly, if television soap operas in any way reflect real life, happiness is a very rare phenomenon indeed in places as far apart as Manchester, the East End of London and Australia. Interestingly, despite all the uncertainty about the epidemiology of happiness, there is some evidence that it is unevenly distributed amongst the social classes: individuals in the higher socio-economic groupings generally report greater positive affect which may reflect the fact that they are more frequently exposed to environmental risk factors for happiness."
If we never try, we'll never know. I wouldn't be surprised if there is something to gain from a form of deterministic computation which is still integrated with the NN architecture. After all, tool calls have their own non-trivial overhead.