I worked on this problem for more than a decade, as co-founder of what became the Center for Humane Technology, and in other roles.
One way to break it down is by different users, but as Ivan notes towards the end, we all have Marl in us. So another way to get at the same thing is to exclude certain engagements from the metrics: the things we click on but would not reflectively endorse as meaningful. You get a lower engagement number that represents meaningful choice, rather than just "revealed preference" / engagement.
This is what I'm working to align LLMs with at the Institute for Meaning Alignment[1], and Ivan is helping! I also have a paper[2] on the difference between revealed preference and meaningful choice.
(It's also worth noting that this process of enshittification doesn't just happen in software. Markets and voting also have this revealed preference vs. meaningful choice problem. So making this distinction is a chance to upgrade all of our large-scale systems.)
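To make the metric idea above concrete, here is a rough Python sketch of the filtering move. Everything in it (the event fields, the "endorsed" signal, the names) is a hypothetical stand-in for illustration, not how any particular product actually instruments this:

    # Sketch: a "meaningful choice" metric that drops engagement events the user
    # would not reflectively endorse (e.g. via a later "was this time well spent?"
    # signal). All fields and names here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class EngagementEvent:
        user_id: str
        item_id: str
        dwell_seconds: float
        endorsed: bool  # hypothetical reflective-endorsement signal

    def raw_engagement(events):
        """Standard revealed-preference metric: count everything."""
        return sum(e.dwell_seconds for e in events)

    def meaningful_engagement(events):
        """Only count the engagement the user reflectively endorses."""
        return sum(e.dwell_seconds for e in events if e.endorsed)

    events = [
        EngagementEvent("u1", "longread", 300.0, True),
        EngagementEvent("u1", "outrage_bait", 240.0, False),
        EngagementEvent("u1", "friend_update", 60.0, True),
    ]

    print(raw_engagement(events))         # 600.0 -- what engagement-optimized feeds maximize
    print(meaningful_engagement(events))  # 360.0 -- the lower number tracking meaningful choice

The point of the sketch is just the subtraction: the same event stream yields a smaller number once non-endorsed clicks are excluded, and that smaller number is the one you would want a feed (or an LLM) to optimize.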
Yes, for sure. This is a known weakness, mentioned on the page. We hope to test ways to make turtleocracy robust against sociopathic conspiracies and fake turtles over the next year.
I'm not sure where people are getting the consultingware idea. We don't have any consulting relationships with businesses at all. Guess our copy is misleading somewhere?
Yes, right, Turtleocracy is partly modeled on PARC, Bell Labs, etc. But I don't begin to know how to answer this question about "how much their working-like-this contributed to their success or lack thereof" -- are you an organizational sociologist? If you have some methodological ideas here, I'd love to hear them.
[1] https://meaningalignment.org/

[2] https://github.com/jxe/vpm/blob/master/vpm.pdf