Hacker News | Nevermark's comments

Some of us have been waiting our whole lives for a comprehensive DWIM command.

> DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user's request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious. [0]

— Teitelman and his Xerox PARC colleague Larry Masinter, in 1981

[0] https://en.wikipedia.org/wiki/DWIM


IMO, many such people unfortunately forget to state the addendum NWIS that expresses their true hope: Do What I Mean, Not What I Said.

And this is kind of terrifying to me, in the context of an LLM that is working completely based on What You Said and any ability to Do What You Mean has to come from murky associations in the training data.

It is more than kind of terrifying when this is then extended to scenarios requiring novel analysis and problem solving, rather than just performing a repetitive, idiomatic task for the N+1th time.


I don't mind them getting uberwealthy.

I mind the tolerance of society when some of these billionaires make their money on the back of negative externalities.

When "small" conflicts get hyper-scaled, people get hurt and all of society gets degraded: unpermissioned surveillance used to gain psychological leverage against us; paying for content that gets eyeballs without taking any responsibility for the misinformation and hate they are financing; algorithmically pushing attention-getting material without taking any responsibility for the misinformation or hate they are actively promoting; getting paid for ads while taking money from scams and promoting them, with no accountability; and all the other seemingly "minor" but pervasive negative externalities they operate at scale.

As everyone points out: incentives. If you don't take perverse incentives away from billionaires, or continue to give them perverse safe harbors, then those billionaires will relentlessly reinvest and innovate, in more harms, at ever greater scales. Things we still think are minor ethical issues, are not when they are hyper scaled.

This isn't some passive, life-is-rough-sometimes situation that people should be expected to weather. This is highly financed, highly managed psychological, social and political harm, for profit. Even if the harm is distributed and seemingly low in any given incident, it adds up to a visibly degraded society.

Somehow social media gets treated with all the lack of responsibility of a neutral web site server. But they are highly active in how they operate. They should be responsible for their very active choices.


I like this a lot. The Innovator's Dilemma for science.

The new, simpler tool always competes with highly adapted, complex tools for some region of value generation.

Starting where its greater simplicity, despite fewer complementary adaptations, is of great advantage.

Then it slowly accumulates its own version of practical complements that let it excel overall.


Models are not trained to self-evaluate, or only as an afterthought during tuning. So they are poor at it.

It isn’t mysterious.

Humans are trained incrementally, educated informally and formally, and along the way tested by context and classroom. Evaluated by people in their circle and strangers. The training to evaluate ourselves is near constant.

Even then, many people habitually believe they understand things they clearly don’t.

And can even be hostile to feedback.

Many forms of hallucination are canonized, or socially encouraged at varying demographic scales. While others are idiosyncratic.

Once self-examination is a first-class part of models' training process, I expect they will respond the way they have to being trained on vast troves of information: by far exceeding human beings.

But, until then, their poor self-assessment isn't mysterious; it is a mundane result of being trained not to do that, or only as a tuning afterthought.

Not as different from humans as we might like to think.

And models are noticeably improving on this measure. Humans, in my experience, may be regressing.


The amount of code changes I find acceptable, to simplify and shrink my code base, is now almost unbounded.

Overstating things of course. But paying off technical debt never felt so good. And the expected decrease in forward friction has never been so achievable so quickly.


Machines spec’d and priced for education? Support for businesses?

I remember this!


There isn't an alternative to allocating resources with money, because money is just a measure of value.

Things will get valued, relative to each other. Because different things are harder to make, or needed more. And it’s a whole lot better to measure that and make decisions informed than to not measure properly, or ignore those measurements, and watch resources get misdirected in a way that shrinks the economy.

You can radically change the economy. But it’s going to either use money in the open or some much less efficient warped backroom version of money.

You can’t avoid having to pay for valuable things with valuable things. Money is just a ledger. But you can always add inefficiencies to transactions, or mismanage money, and make any problem worse.

My point is, there is probably something to what you are thinking but you are misframing it in a way it won’t work, unnecessarily. Consider what you really think should happen and what might be a better way to frame it.

Most likely, that means focusing less on money, and more on how resources cycle to create more resources, as opposed to fewer. And matching that to a problem where you can find reciprocal improvements if it is solved. Some waste is avoided. Some fraud or unchecked damage is eliminated. Some mutual arrangements are magnified, etc. There has to be a resource return cycle of some kind.

(Replacing every mention of “money” with “resources” tends to clarify what can work or not quickly.)


Fiat is actually a warped backroom version of money. It's a measure of trust I think? You could replace it with something that represents resources, perhaps even [future] labor.

I think this is a good way of thinking, and it suggests that breaking up large clumps of money and resources is a reasonable way forward.

The problem is currency is inherently clumpy. While value is always judged and assigned to things, the existence of a static, cumulative ledger of it is not a requirement.

It doesn't take a lot to recreate the capitalism-to-feudalism pipeline. If you have currency, small imbalances in resources and needs compound over time, creating imbalances in wealth. Imbalances in wealth provide the opportunity to leverage that imbalance for further wealth by way of rent-seeking. Wealth provides power, which provides more wealth and more power. Eventually your landlords drop the "land" prefix and simply become nobility.

Prior to the invention of currency, we had reputation economies. One might be tempted to model such economies as just money economies with implicit ledgers, but that isn't how reputation works in the real world. Being implicit, reputation captures a lot of activity that doesn't warrant an overt exchange of currency. Think of all the things that you appreciate, and that make you value a relationship with someone more, which it would be terribly inappropriate to pay them for: the friendly guy at the pub who tells you stories of questionable accuracy, a fellow parent watching your kid during a playdate, anything in the romantic sphere at all.

Reputation also doesn't add up in anything close to a linear way: the guy who did something really big once and the guy who did something small with extreme regularity over a long period of time both likely have stronger ties with others in their community than the one who sporadically provided middling value. Reputation also isn't particularly inheritable: I might feel some obligation to someone's kid because of my relationship with their father, but that obligation fades rapidly as they enter adulthood, and nobody owes you shit for who your grandfather was. Likewise, gifts from someone who has an embarrassment of excess are valued much less than the same thing offered by someone who has barely enough.

All told, reputation economies act as a damping function on wealth and power accumulation, whereas currency economies provide positive feedback on the same.
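The contrast can be sketched as a toy simulation (my own illustrative model, not anything from the thread): agents whose gains compound on existing holdings drift toward extreme inequality, while agents whose standing decays and whose gains don't scale with holdings stay clustered.

```python
# Toy model: currency-like compounding returns vs. a reputation-like
# economy with intrinsic damping. All parameters are arbitrary
# illustrative choices.
import random

def gini(xs):
    """Gini coefficient: 0 = perfect equality, near 1 = one agent holds all."""
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

def simulate(rounds=500, agents=100, damped=False, seed=0):
    rng = random.Random(seed)
    wealth = [1.0] * agents
    for _ in range(rounds):
        for i in range(agents):
            luck = rng.uniform(-0.05, 0.06)  # small random imbalance each round
            if damped:
                # Reputation-like: standing decays toward a baseline and
                # gains do not scale with what you already have.
                wealth[i] = 0.95 * wealth[i] + 0.05 + luck
            else:
                # Currency-like: returns compound on existing holdings.
                wealth[i] *= (1 + luck)
    return gini(wealth)

print("currency-like   Gini:", round(simulate(damped=False), 2))
print("reputation-like Gini:", round(simulate(damped=True), 2))
```

Run with more rounds or different seeds: the compounding economy's Gini coefficient keeps climbing, while the damped one settles at a low steady state, which is the positive-feedback vs. damping contrast in miniature.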


You give a nod to the solution. If we have an undamped oscillator, or a system with a tendency in an undesirable direction, we can damp it.

And currency (given that we make it up and have a reasonable degree of control over its worth and distribution) does not have to be a static, cumulative ledger.


Any solution needs the damping function to be intrinsic to the system, rather than tacked on as policy. Policy ends up being dictated by the powerful, so if your system's only check against runaway wealth accumulation is policy, eventually your guardrails will be demolished. It might not be today, it might not be tomorrow. But eventually, self-propelled wealth wins.

There are models of currency that try to include such damping intrinsically (Tankies love talking about various experimental forms of currency as "labor vouchers" to try to sidestep the "moneyless" pitch of Communism), but I've yet to see one that really addresses the "wealth begets wealth, hierarchy begets hierarchy" problem.


The problem is just how far USD has departed from underlying value. There are some funny tricks people pulled with abstract concepts, and now people have found a way to "print" their own money. It's created a new power and influence shift, because you can just go to the Finance or Tech worlds and get money instead of producing actual value for other humans.

I mean, of any arguments for why AI would drive mankind extinct, this is the most likely. Humans make no economic sense to an AI that controls all intellectual and manual labor. What do most humans have to reciprocate with? Why not use the resources it gets to build more AI?

> You will not wake up on any server.

Interesting! Which atoms do you consider to be your identity? That demonstrate someone is the "same" person for a lifetime?

And more importantly, why?

If our identity involves any abstraction whatsoever, any independence from particular material constituents (whatever dependency could possibly mean in a universe where particles of a type are indistinguishable, i.e. can appear in different contexts but do not have identities), then we are not substrate-bound. We just require isomorphism.

(Any assumption that there can only be one future "self", or that isomorphic copies are neither inheritors nor branches of our identity, requires some clear explanation, to separate solid reasoning from our intuitions, which are often strongly biased by a lack of prior experience.)


Indeed, the incentives to goof off, fail and flail are unrelenting.

My compliance is complete.


The important thing is for you to think you have the options, and that when you take them, you get the whole benefit and the simulation pays the whole cost. They could easily put precalculated memories in your address space and save the compute.

Several years ago, I moved to twin university towns, where I can walk everywhere including between towns.

Funny thing about distances in small towns. It doesn't take long to start perceiving a ten or fifteen minute drive as a "long" drive. But a two hour walk while I turn over a difficult design problem goes by in an instant.

The difference between time that saps or renews our energy.

And I am off for a walk...

