I met some of my best friends in VR and had life-changing social interactions in some VR games on the Oculus platform, especially EchoVR. I'm still a true believer in VR as a medium. It'll go mainstream one day, and there will be a Meta-sized company built on it.
They had lightning in a bottle and somehow lost it. Honestly, it might have been hiring Carmack that sent them down this path. Moving away from PCVR expanded the market, but it also killed the magic. Now the Quest store is a wasteland of what look like low-budget mobile apps.
There were two villagers and one werewolf. The werewolf started the round by saying "I'm the werewolf, vote for me," and then the game ended with a villager win.
Overnight, he had successfully taken out the doctor. It made no sense, in my opinion.
There were some funny bits, like one of the Anthropic models forgetting a rule, which led to a pile-on where everyone accused him of being a werewolf. He wasn't a werewolf; he genuinely forgot the rule. That happens in nearly every human game of Werewolf.
Thanks for sharing that story; it's a great cautionary tale about apps that drift from “helpful” into prescriptive/social enforcement.
Dlog takes the opposite approach:
• Agency first: The Coach proposes; you decide. No phone trees, no task assignments, no social mechanics. You can ignore the guidance, not use it at all, or set it to a low cadence.
• Explainability over prescriptions: Suggestions come with a brief “why” based on your SEM reports (which factors moved and by how much), plus charts so you can sanity-check against lived experience (see the sketch after this list).
• Local-first privacy: Journals live on-device (EventKit). Scoring + SEM run locally. By default, no raw text leaves the device; the Coach is the one optional exception, and using it does require a bit of a leap of faith with OpenAI until I enable on-device LLMs in due course.
• No hidden incentives: No affiliate nudges, upsells, or growth hacks. It doesn’t decide what you wear/eat or route calls to you; it surfaces patterns (e.g., “energy dips after external calls”) so you can choose actions that fit your constraints.
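For concreteness, here's a minimal sketch of what a suggestion-plus-“why” payload could look like. This is purely hypothetical: the field names, factor names, and numbers are illustrative assumptions, not Dlog's actual data model.

```python
# Hypothetical shape for a Coach suggestion and its "why" -- illustrative
# only, not Dlog's real data model or field names.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WhyFactor:
    name: str      # a factor from the SEM report, e.g. "external_calls"
    effect: float  # how much it moved the outcome (assumed standardized)

@dataclass
class Suggestion:
    text: str  # the proposed (ignorable) action
    why: List[WhyFactor] = field(default_factory=list)

s = Suggestion(
    text="Consider batching external calls into one afternoon block.",
    why=[WhyFactor("external_calls", -0.42), WhyFactor("sleep_quality", 0.18)],
)
print(s.text)
for f in s.why:
    print(f"  because {f.name} moved your score by {f.effect:+.2f}")
```

The point of the structure: every suggestion carries the factors behind it, so you can check the “why” against the charts before deciding anything.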
If that story raised a specific worry (loss of autonomy, privacy creep, or community spam), does the above address it? I’m especially interested in whether the “why” behind recommendations is clear enough or needs to be tighter.
How is it possible that text-to-score/notation is lagging text-to-audio in music generation? Generating audio seems wildly more complicated!
Since you are working in this space, I wonder if you could comment on my pet theories for why this is true: 1. Not enough training data (scores not available for most songs), or 2. Difficulty with tokenization of musical notation vs. audio
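On theory 2, here's a toy sketch of event-style tokenization. The NOTE_/DUR_ scheme is made up for illustration, not any real model's vocabulary. A monophonic melody flattens into tokens trivially; it's full scores (simultaneous voices, ties, tuplets, dynamics, layout) where a flat token stream gets awkward.

```python
# Toy event tokenizer for a monophonic melody. The token scheme is a
# made-up illustration, not any particular model's vocabulary.

def tokenize_melody(notes):
    """notes: list of (midi_pitch, duration_in_quarter_notes) tuples."""
    tokens = []
    for pitch, dur in notes:
        tokens.append(f"NOTE_{pitch}")  # e.g. NOTE_60 is middle C
        tokens.append(f"DUR_{dur}")     # duration emitted as its own token
    return tokens

# Opening of "Twinkle Twinkle": C C G G A A G
melody = [(60, 1), (60, 1), (67, 1), (67, 1), (69, 1), (69, 1), (67, 2)]
print(tokenize_melody(melody))
# ['NOTE_60', 'DUR_1', 'NOTE_60', 'DUR_1', 'NOTE_67', 'DUR_1', ...]
```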
I started a PhD in 2017 studying neuroscience + ML. I thought studying the brain would help me understand ANNs better. I was wrong. Ended up applying ML to analyzing EEG, MRI and similar.
Is this because we're misapplying the analogy to ML? I.e., in an effort to communicate about and understand ANNs, we "pretend" they're like a brain. It's just like when we used "file retrieval systems" to understand the brain, or "water in a pipe" to understand electricity; those analogies were also wrong. Analogies often only go so far, beyond which they do more harm than good.
What you're describing is endemic across HN (and tech, tbh). Lots of people on here "know" computers/programming/CS very well. They, naturally, tend to use analogies to computers/programming/CS when trying to explain or "think out loud" in their comments. That's fine. It's what they know. The common problem arises when people forget they're analogizing and begin to see their analogy as ontologically and conceptually identical to the thing they were making an analogy for. This requires a certain amount of ego, echo chambers, and self-valorization, so that they never have to face the actual issues with these analogies.
But as many comments here have pointed out, studying neuroscience, for example, usually makes those analogies seem painfully inadequate. The same is true in philosophy of mind, for example.
I'm sure that there exist people who get lost in the analogies, but practitioners are generally not confused: they know ANNs are simplifications of the brain. The questions are which simplifications are most relevant and whether complexities can be added back that yield better results. My own research was about reintroducing absolute location. In standard ANNs, location is relative within a graph model of the network. In the real brain, blood vessels and other macrostructures deliver materials used to grow and modify the neurons, and these affect the network based on physical location. In fact, by adding these back in we bypassed the XOR limitation (i.e., Minsky's result that led to backpropagation). Concretely, we observed a Hopfield network learning XOR over its inputs, using Hebbian learning modulated by a spatially varying trophic factor.
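For intuition about what "spatially modulated" means here, here's a minimal sketch of a Hebbian update gated by a trophic factor that decays with distance from a source. The neuron positions, Gaussian falloff, and parameter values are my assumptions for illustration, not the actual method from that work.

```python
import numpy as np

# Sketch: Hebbian plasticity scaled per synapse by a spatially varying
# "trophic factor". All specifics here are illustrative assumptions.
rng = np.random.default_rng(0)

n = 8
positions = rng.uniform(0.0, 1.0, size=(n, 2))  # physical neuron locations
source = np.array([0.5, 0.5])                   # hypothetical trophic source

# Trophic factor decays with distance from the source.
dist = np.linalg.norm(positions - source, axis=1)
trophic = np.exp(-dist**2 / 0.1)                # per-neuron plasticity scale

def hebbian_step(W, state, lr=0.1):
    """Outer-product Hebbian update, gated per synapse by the geometric
    mean of the two neurons' trophic factors."""
    gate = np.sqrt(np.outer(trophic, trophic))
    dW = lr * gate * np.outer(state, state)
    np.fill_diagonal(dW, 0.0)                   # Hopfield: no self-connections
    return W + dW

W = np.zeros((n, n))
state = rng.choice([-1.0, 1.0], size=n)         # bipolar Hopfield state
W = hebbian_step(W, state)
```

Synapses between well-supplied neurons learn faster than those at the periphery, which is the extra, location-dependent degree of freedom that standard graph-relative ANNs throw away.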
I believe, at its best, it’s an incomplete model (which may be enough for most people). But it leaves out important aspects like magnetic field work and probably a host of aspects from quantum theory.
Have we hit the limit of the analogy, or have we hit a limit in our understanding? Both neural networks and actual brains have behaviors that emerge from the interactions of smaller components. Neural networks have trivial connections compared to brains, but our understanding of the emergent behaviors seems very limited. To me, this is a sign not that the analogy has reached a point of breaking down, but that our tools aren't sufficient to work on even the trivial connections. I do expect the analogy will break at some point, but I'm not sure we have reached that point yet.
I hoped neuroscience, as a field, was on the cusp of a physical theory of learning and memory. I dreamt of an intersection of information theory, neuroscience, and ML.
Alas, the state of the art in neuroscience / neural engineering is closer to bloodletting than to a mechanistic theory of learning and memory.
Another take is that the base models are now good enough that spending more money for more intelligence is viable at test time. A threshold has been crossed.
Naively, I feel that to be useful, LLMs should aim to become more power efficient, so that eventually all devices can be smarter.
Power efficiency can be gained through less test-time compute, or more "intelligence", or some combination of the two. I'm not convinced these SOTA models are doing much more than increasing test-time compute.
The biggest impacts on power efficiency will come from advances in node size and transistor type, like nanosheet or forksheet. Algorithms will help just a little.