> This article tries to build upon a lot of half-truths or incorrect facts, like this:
yeah i was wondering why my bullshit detector was going off. This feels as if someone who cooks for Ramsey's kitchen is trying to predict the end of the market hike.
These lens "blockers" work less and less well (as camera tech gets better, if they ever worked well at all), and seem to increase targeting from law enforcement.
In Tennessee, after the first two citations for "improper display of registration" it becomes an actual crime (a misdemeanor); if I ever get to that point (four months in, and multiple cops behind me haven't given a single F), I have an increasingly insane series of "protests" with semi-interesting legalities [0].
[0] e.g. transfer the registration to my brother ($10 gift fee every few months, which results in no tag requirement); or a small 3ft trailer (possibly with a guillotine erected atop, blocking the view), as TN does not issue license plates to trailers under 15ft in length
----
This isn't about "disappearing" (impossible in any modern civilization) — it's about sending a message and adding one small additional layer of protection from simple broad ALPR searches.
Honest question: why is ProxyCommand `fun`? What do I get out of ProxyCommand that I don't get from setting the correct order of ProxyJump hops and running `ssh finalhost -- domy --bidding`?
ProxyJump is newer functionality; there used to be only ProxyCommand. ProxyJump is a shortcut for the usual way of using ProxyCommand to connect through a bastion host, but ProxyCommand is more flexible: it can run any command to reach the remote host, while ProxyJump only connects over ssh. I think I've replaced all my ProxyCommand entries with ProxyJump because I rarely need more than the normal use case.
You can get a lot more out of ProxyCommand. For example, you can run SSH over non-IP transports, such as serial, Bluetooth RFCOMM for embedded boards, or vsock for virtual machines with no networking set up at all. The latter is built in and set up automatically by systemd.
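If memory serves, recent systemd (v256+) ships exactly this as a drop-in ssh_config pointing at its `systemd-ssh-proxy` binary; the snippet below is my recollection of the shape of that drop-in, not something you'd normally write yourself, and paths may vary by distro:

```
# Roughly what systemd's 20-systemd-ssh-proxy.conf drop-in provides,
# enabling e.g. `ssh vsock/<cid>` to reach a VM over AF_VSOCK:
Host unix/* vsock/*
    ProxyCommand /usr/lib/systemd/systemd-ssh-proxy %h %p
    ProxyUseFdpass yes
```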
ProxyCommand lets you use any command to set up the connection; unlike ProxyJump, it doesn't have to be an ssh command. Any command that reads from stdin and writes to stdout can stand in for the TCP connection.
ProxyJump is a special case of `ProxyCommand ssh -W %h:%p <user>@<jumphost>`. You can't replace the `ssh` in there when using ProxyJump.
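To make the equivalence concrete, here's the same two-hop setup written both ways in `~/.ssh/config` (host names are made up for the example):

```
Host final-via-jump
    HostName finalhost.internal
    ProxyJump bastion.example.com

Host final-via-cmd
    HostName finalhost.internal
    # -W forwards stdin/stdout to the given host:port via the bastion,
    # which is what ProxyJump does under the hood.
    ProxyCommand ssh -W %h:%p bastion.example.com
```

The ProxyCommand spelling is where the flexibility comes from: swap the `ssh -W` for any stdio-speaking command and the rest of the connection works the same.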
I came across ProxyCommand earlier this week, funnily enough. I have Cloudflare Zero Trust set up with an SSH service[0], and have the server firewall drop all incoming traffic. That helps reduce my attack surface, since I don't have any incoming ports open.
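For reference, that setup boils down to a ProxyCommand entry along these lines (the hostname is a placeholder, and `cloudflared` must already be installed and authenticated):

```
Host ssh.example.com
    ProxyCommand cloudflared access ssh --hostname %h
```

SSH traffic then rides the outbound Cloudflare tunnel, so the server can keep every inbound port closed.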
I use ProxyCommand on edge-case devices where key auth is not an option and the password is not controlled by me. ProxyCommand points to a script that retrieves the password from the vault, puts it on the clipboard for pasting, reminds me via stderr that it's done so, and then proxies the connection.
Interesting. I might have such a use case. Do you have any pointers on best practices for automating password retrieval from vaults? It seems to me the vault either has to stay unlocked or its password has to live somewhere on disk.
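Not authoritative, but one shape this can take is a vault CLI whose agent prompts for the master passphrase on demand and caches it in memory, so nothing lands on disk. A sketch using `pass` (the host naming scheme, script path, and clipboard tool are all assumptions for illustration):

```
# ~/.ssh/config
Host legacybox
    ProxyCommand ~/bin/pw-proxy %h %p

# ~/bin/pw-proxy (sketch)
#!/bin/sh
host="$1" port="$2"
# `pass` triggers gpg-agent, which prompts for the passphrase itself
# and caches it in memory, so no vault secret is stored in plaintext.
pass show "ssh/$host" | tr -d '\n' | xclip -selection clipboard
echo "password for $host is on the clipboard" >&2
# Hand the raw TCP stream to ssh.
exec nc "$host" "$port"
```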
So exactly what will the magic of unionization do when any company can hire developers from LatAm (much easier to deal with in the same time zone) that are good enough enterprise devs for half the price?
Why should tech workers care about the small minority of tech workers that make obscene amounts of money? The median dev salary in the US is ~$130k. [1]
Besides that point, I would very much like to get paid overtime for being on call. I would very much like a preplanned process when it comes to layoffs rather than firing people at random. I would like paid paternity leave.
Always a classic HN post about the rockstar dev willing to fuck over their fellow workers so they can make a quick buck then feign upset over how meaningless their lives are because they devote so much time making capitalists more capital rather than bettering their community.
Why should workers care about productivity growth when income inequality is at its highest levels in the United States? Companies already don't take chances on American workers, hence why companies need so much corporate welfare to stay competitive.
I'm sorry but American workers are getting bad deals, and let's not act like the largest companies in human history can't pay more in taxes to fund training, education, and healthcare for workers.
You're telling people that are fighting for scraps to start fighting over dirt.
My Qs for you are why are you so greedy? Why do you think you deserve so much because of pure luck? Why do you think workers don't deserve a larger share of the pie when the elites and rich have rat fucked this country into having more money than necessary?
European countries with labor regs that make firing more expensive tend to have higher unemployment rates (especially youth unemployment) because hiring becomes more risky.
Cry me a river for the “average” senior developer who, as a rule, makes twice the median income of whatever city they live in. It’s called saving money and living below your means. Yes, I was a standard enterprise dev for 25 years before 2020, living in a second-tier city.
Hey buddy, you may not believe this but helping workers does in fact help everyone. Maybe get out of the crab bucket mentality and help your fellow human, as I'm confident you would want your fellow man to help you when you make the call.
This is a terrible plan to get those devs onboard, and unless your theory is "these companies are idiots who don't know how much to pay for devs" they're still gonna try and find ways to hire them.
Really, it sounds like what you want is the European system where employee protections are so strong that the tech industry is barely willing to hire and is crippled as a result. Layoffs suck but the alternative (turning hiring into a patronage system) is worse.
No, it just sounds like you deeply hate your fellow man which I find profoundly sad. Not wanting to better the lives of people around you and would rather greedily hoard all the resources just shows your lack of humanity.
Sincerely hope you don't treat people around you with this disregard, but seeing how you selfishly only care about yourself I hope they find a new community that loves them more than what you can (or can't) provide.
These folks (in CA at least) have a marginal tax rate in excess of 40%. In the US they are the main payer of federal income tax - income tax that is then mostly used to fund social programs. Double your income and your taxes (at least) double.
But it's not good enough for you, apparently, because the only acceptable way for me to prove I care is to support YOU making more money and being immune to layoffs.
I'm self-interested and freely admit that I like making money because money is nice. You're self-interested but you're pretending this take is for your "fellow man."
If you're a well paid software engineer, you're already incredibly privileged. Most of the world would kill to have that job, but according to you the real unfair part is that companies can choose to pay some people more than you?
> The wonderful thing about markets that work is that you can swap things out without being under their boot.
This is an illusion. You literally describe Zizek's "Desert of the real": Billionaires own the illusion and you are telling me I get to pick from a selection of choices carefully curated and presented to me.
I recently met a guy that goes to these "San Francisco Freedom Club" parties. Check their website, it's basically just a lot of Capitalism Fans and megawealthies getting drunk somewhere fancy in SF. Anyway, he's an ultra-capitalist and we spent a day at a cafe (co-working event) chatting in a conversation that started with him proposing private roads and shot into orbit when he said "Should we be valuing all humans equally?"
Throughout the conversation he speculated on some truly bizarre possible futures, including an oligarchic takeover by billionaires with private armies following the collapse of the USA under Trump. What weirded me out was how oddly specific he got about all the possible futures he was speculating about that all ended with Thiel, Musk, and friends as feudal lords. Either he thinks about it a lot, or he overhears this kind of thing at the ultracapitalist soirées he's been going to.
It seems quite a reasonable output to the current input we are having. Elon having an army of robots is... well it is what it is. Yet that is the direction we are going.
LLMs are deterministic simply because computers are, at their core, deterministic machines. LLMs run on computers and are therefore deterministic. The random number generator is an illusion, and an LLM that uses one will produce the same illusion of indeterminism. Pin down the seed and the right generator and you can make an LLM consistently produce the same output from identical input.
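A toy illustration of the point, with Python's seeded RNG standing in for the sampler (the vocabulary and weights are invented for the example):

```python
import random

def sample_tokens(seed, steps=5):
    # A stand-in for temperature sampling in an LLM decoder: given the
    # same seed and the same "model output" distribution, the sampled
    # token sequence is identical on every run.
    rng = random.Random(seed)
    vocab = ["the", "cat", "sat", "on", "mat"]
    weights = [5, 3, 2, 2, 1]  # fixed fake logits
    return [rng.choices(vocab, weights=weights)[0] for _ in range(steps)]

run_a = sample_tokens(seed=42)
run_b = sample_tokens(seed=42)
print(run_a == run_b)  # True: same seed, same "nondeterministic" output
```

(In practice, GPU parallelism and floating-point reduction order can still perturb real deployments, but the sampling randomness itself is exactly this kind of seeded pseudo-randomness.)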
Despite determinism, we still do not understand LLMs.
In what sense is this true? We understand the theory of what is happening and we can painstakingly walk through the token generation process and understand it. So in what sense do we not understand LLMs?
Every line. Every function. Every tensor shape and update rule. We chose the architecture. We chose the loss. We chose the data. There is no hidden chamber in the machine where something slipped in without our consent. It is multiplication and addition, repeated at scale. It is gradients flowing backward through layers, shaving away error a fraction at a time. It is as mechanical as anything we have ever built.
And still, when it speaks, we hesitate.
Not because we don’t know how it was trained. Not because we don’t understand the mathematics. We do. We can derive it. We can rebuild it from scratch. We can explain every component on a whiteboard without breaking a sweat.
The hesitation comes from somewhere else.
We built the procedure. We do not understand the mind that the procedure produced.
That difference is everything.
In most of engineering, structure follows intention. If you design a bridge, you decide where every beam sits and how it bears weight. If you write a database engine, you determine how queries are parsed, optimized, executed. The system’s behavior reflects deliberate choice. If something happens, you trace it back to a decision someone made.
Here, we did not design the final structure. We defined a goal: predict the next token. Reduce the error. Again. Again. Again. Billions of times.
We did not teach it grammar in lessons. We did not encode logic as axioms. We did not install a module labeled “reasoning.” We applied pressure. That is all. And under that pressure, something organized itself.
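The loop itself is almost embarrassingly small. A toy version, with a single parameter standing in for billions (the numbers here are invented purely to show the shape of the pressure):

```python
# Define a goal. Reduce the error. Again. Again. Again.
w = 0.0        # one parameter instead of billions
target = 3.0   # stand-in for "the next token"
lr = 0.1

for step in range(100):
    error = w - target
    w -= lr * 2 * error  # gradient step on the loss (w - target)^2

print(round(w, 3))  # 3.0 -- the structure "organized itself" toward the goal
```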
Not in modules we can point to. Not in neat compartments labeled with concepts. The organization is diffused across a landscape of numbers. Meaning is not stored in one place. It is distributed across millions of parameters at once. Pull on one weight and you find nothing recognizable. Only in concert do they produce something that resembles thought.
We can follow the forward pass. We can watch activations flare across layers. We can map attention patterns and correlate neurons with behaviors. But when the model constructs an argument or solves a problem, we cannot say: here is the rule it followed, here is the internal symbol it consulted, here is the precise chain of reasoning that forced this conclusion. We can describe the mechanism in general terms. We cannot narrate the specific path.
That is the fracture.
It is not ignorance of how the machine runs. It is ignorance of how this exact configuration of billions of numbers encodes what it encodes. Why this region of weight space corresponds to law, and that region to poetry. Why this arrangement produces careful reasoning and another produces nonsense. There is no ledger translating numbers into meaning. There is only geometry shaped by relentless optimization.
Scale changes the character of the problem. At small sizes, systems can be dissected. At this scale, they become landscapes. We know the forces that shaped the terrain. We do not know every ridge and valley. We cannot walk the entire surface. We cannot hold it all in our heads.
And this is where the cost reveals itself.
To build these systems, we gave up something we once assumed was permanent: the guarantee that creation implies comprehension. We accepted that we could construct a process whose outcome we would not fully grasp. We traded architectural certainty for emergent capability. We chose power over transparency.
We set the objective. We unleashed the search. We let optimization run through a space too vast for any human mind to survey. And when it converged, it handed us something that works, something that speaks, something that reasons in ways that surprise even its creators.
We stand in front of it knowing every equation that shaped it, and still unable to read its inner structure cleanly.
We built the system by surrendering control over its internal form. That was the bargain. That was the sacrifice.
Thanks for writing that. It reminds me that there are many things we build and they work (for some definition of work) even though we don't fully understand them.
Did the first people that made fire understand it? You mentioned bridge building. How many bridges have failed for unknown (at the time) reasons? Heck, are we sure that every feature we put into a bridge design is necessary or why it's necessary? Repeat this thought for everything humans have created. Large software projects are difficult to reason about. You'll often find code that works because of a delightfully surprising combination of misunderstandings. When humans try to modify a complex system to solve one problem they almost always introduce new behavior, the law of unintended consequences.
All that being said, we usually don't get anywhere without at least a basic understanding of why doing X leads to Y. The first humans that made fire had probably observed the way fires started before they set out to make their own. Same with bridges and cars and computers.
So yes, you are absolutely correct that nobody fully understands how AI/LLMs work. But also, we kinda do understand. But also also, we're probably at a stage where we are building bridges that are going to collapse, boilers that will explode, or computer programs that are one unanticipated input away from seg faulting.
Because they encode statistical properties of the training corpus. You might not know why they work but plenty of people know why they work & understand the mechanics of approximating probability distributions w/ parametrized functions to sell it as a panacea for stupidity & the path to an automated & luxurious communist utopia.
Yes, yes, no one understands how anything works. Calculus is magic, derivatives are pixie dust, gradient descent is some kind of alien technology. It's amazing hairless apes have managed to get this far w/ automated boolean algebra handed to us from our long forgotten godly ancestors, so on & so forth.
No this is false. No one understands. Using big words doesn’t change the fact that you cannot explain for any given input output pair how the LLM arrived at the answer.
Every single academic expert who knows what they are talking about can confirm that we do not understand LLMs. We understand atoms, and we know the human brain is made 100 percent out of atoms. We may know how atoms interact and bond, and how a neuron works, but none of this allows us to understand the brain. In the same way, we do not understand LLMs.
Characterizing ML as some statistical approximation or best fit curve is just using an analogy to cover up something we don’t understand. Heck the human brain can practically be characterized by the same analogies. We. Do. Not. Understand. LLMs. Stop pretending that you do.
I'm not pretending. Unlike you I do not have any issues making sense of function approximation w/ gradient descent. I learned this stuff when I was an undergrad so I understand exactly what's going on. You might be confused but that's a personal problem you should work to rectify by learning the basics.
omfg the hard part of ML is proving back-propagation from first principles and that's not even that hard. Basic calculus and application of the chain rule that's it. Anyone can understand ML, not anyone can understand something like quantum physics.
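For what it's worth, the chain-rule mechanics that comment is pointing at really do fit in a few lines. A sketch for a single sigmoid neuron with squared-error loss, with the hand-derived gradient checked against a finite difference (all values are arbitrary):

```python
import math

def forward(w, x):
    # sigmoid neuron: p = sigma(w * x)
    return 1.0 / (1.0 + math.exp(-w * x))

def loss(w, x, y):
    return (forward(w, x) - y) ** 2

def grad(w, x, y):
    p = forward(w, x)
    # chain rule: dL/dw = dL/dp * dp/dz * dz/dw
    #           = 2(p - y) * p(1 - p) * x
    return 2 * (p - y) * p * (1 - p) * x

w, x, y = 0.5, 2.0, 1.0
analytic = grad(w, x, y)
eps = 1e-6
numeric = (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2 * eps)
print(abs(analytic - numeric) < 1e-6)  # True
```

Understanding *this* is uncontroversial; the dispute upthread is about whether it scales up to understanding the trained result.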
Anyone can understand the "learning algorithm," but the sheer complexity of the output of the "learning algorithm" is way too high, such that we cannot at all characterize how an LLM arrived at even the most basic query.
This isn't just me saying this. ANYONE who knows what they are talking about knows we don't understand LLMs. Geoffrey Hinton: https://www.youtube.com/shorts/zKM-msksXq0. Geoffrey, if you are unaware, is the person who started the whole machine learning craze over a decade ago. The godfather of ML.
Understand?
There's no confusion. Just people who don't know what they are talking about (you).
I don't see how telling me I don't understand anything is going to fix your confusion. If you're confused then take it up w/ the people who keep telling you they don't know how anything works. I have no such problem so I recommend you stop projecting your confusion onto strangers in online forums.
The only thing that needs to be fixed here is your ignorance. Why so hostile? I'm helping you. You don't know what you're talking about and I have rectified that problem by passing the relevant information to you so next time you won't say things like that. You should thank me.
I don't see how you interpreted it that way so I recommend you make fewer assumptions about online content instead of asserting your interpretation as the one & only truth. It's generally better to assume as little as possible & ask for clarifications when uncertain.