Yeah, it seems like one would have to either store it physically (for which the battery technology doesn't exist) or sell it and buy it back in winter, where you lose some money since the energy is worth more in the winter.
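To make the loss concrete, here's a rough back-of-the-envelope sketch; all the prices are made-up illustrative figures, not real tariffs:

    # Sell summer surplus now, buy the energy back in winter.
    # Prices below are invented for illustration, not real tariffs.
    surplus_kwh = 1000        # summer surplus you export
    summer_price = 0.08       # EUR/kWh you're paid when exporting
    winter_price = 0.25       # EUR/kWh you pay when importing

    revenue = surplus_kwh * summer_price
    buyback = surplus_kwh * winter_price
    print(f"Sold for {revenue:.2f}, bought back for {buyback:.2f}, "
          f"net loss {buyback - revenue:.2f}")

The spread between export and import prices is exactly the money you lose.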
It's an interesting blog though, I switched to Octopus Energy a few years ago and it's been the best electricity company I've ever had, far better than the others here in Spain.
> Turns out that the logic in consoles of the time was tied to the speed of the beam, which in turn used alternating current’s frequency as a clock. This means that since European current changes 50 times per second rather than 60, our games played in slowmo (about 0.8x). American sonic was so much faster! And the music was so much more upbeat!
Wasn't this the reason behind different versions of the game for PAL and NTSC etc.? So I imagine the games would play quite similarly, just with a lower refresh rate in Europe?
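As a toy sketch of why frame-locked logic lands at about 0.83x on PAL (the per-frame step here is a made-up number, not anything from a real game):

    # Frame-locked logic: one game update per displayed frame,
    # so real-world speed scales with the display's refresh rate.
    step_per_frame = 2  # pixels moved per frame (invented value)

    for hz in (60, 50):  # NTSC vs PAL
        speed = step_per_frame * hz  # pixels per second
        print(f"{hz} Hz: {speed} px/s ({hz / 60:.2f}x NTSC speed)")

Same code and the same per-frame step, but 50 updates a second instead of 60 gives you the ~0.83x slow-motion effect.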
If you have an original copy of Grim Fandango, the elevator-and-forklift puzzle is impossible without a patch, since the scene moves at (IIRC) the processor's clock speed, so on modern CPUs the action runs too fast for the puzzle to be solvable.
This is obviously fixed in the remastered version, though.
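As a toy illustration of that bug class in Python (not Grim Fandango's actual code, just the pattern):

    import time

    # Buggy pattern: a fixed step per loop iteration means the
    # scene's speed tracks however fast the CPU can spin the loop.
    def move_cpu_bound(iterations):
        pos = 0.0
        for _ in range(iterations):
            pos += 0.1  # faster CPU -> more iterations/sec -> faster scene
        return pos

    # Standard fix: scale each step by elapsed wall-clock time,
    # so the scene moves at the same speed on any CPU.
    def move_delta_time(duration_s, speed=6.0):
        pos = 0.0
        last = time.perf_counter()
        deadline = last + duration_s
        while (now := time.perf_counter()) < deadline:
            pos += speed * (now - last)  # distance = speed * elapsed time
            last = now
        return pos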
> Wasn't this the reason behind different versions of the game for PAL and NTSC etc.? So I imagine the games would play quite similarly, just with a lower refresh rate in Europe?
Yes and no. Some games play at a similar speed, but some (most, if I recall correctly) weren't modified for the PAL market, so they play slow and the image is squashed down. Street Fighter II on the SNES (PAL) is a classic example of this.
It was simultaneously such a delight and a frustration.
It was a treat to play without having to empty your pockets (where I am it cost the equivalent of 120 arcade credits), but compared to the arcade it wasn't as good as it could have been.
If you had an Action Replay or a Game Genie you could use codes to speed it up, though.
The vertical resolution was also different. Some games developed for NTSC got black bars, or a silly banner in the PAL version. Many PAL games were not ported for NTSC regions at all.
Aren't age limits already in the ToS of most social media platforms? If the parents/children break the ToS their accounts should be deleted and their emails or even IPs banned.
I don't really see why we need more government involvement here. It's just going to be ham-fisted and create unintended consequences like the kids in Australia having to use adult YouTube because they can't have a kids account anymore.
I agree that all the AI doomerism is silly (by which I mean concern about some Terminator-style machine uprising; the economic issues are quite real).
But it's clear the LLMs have some real value; even if we always need a human in the loop to prevent hallucinations, they can still massively reduce the amount of human labour required for many tasks.
NFTs felt like a con, and in retrospect were a con. LLMs are clearly useful for many things.
Those aren’t mutually exclusive; something can be both useful and a con.
When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.
LLMs are useful for many things, but they're also not nearly as beneficial and powerful as they're sold to be. Sam Altman, while entirely ignoring the societal issues raised by the technology (such as the spread of misinformation and unhealthy dependencies), repeatedly claims it will cure all cancers and other diseases, eradicate poverty, solve the housing crisis, fix democracy… Those claims are bullshit, thus the con description applies.
These are not independent hypotheses. If (b) is true, it decreases the probability that (a) is true, and vice versa.
The dependency here is that if Sam Altman is indeed a con man, it is reasonable to assume that he has in fact conned many people, who then report an overinflated metric of the usefulness of the stuff they just bought (people don’t like to believe they were conned; cognitive dissonance).
In other words, if Sam Altman is indeed a con man, it is very likely that most metrics of the usefulness of his product are heavily biased.
Yes, that’s the point I’m making. In the scenario you’re describing, that would make Sam Altman a con man. Alternatively, he could simply be delusional and/or stupid. But given his history of deceit with Loopt and Worldcoin, there is precedent for the former.
It would make every marketing department and basically every startup founder a con man too.
While I don’t completely disagree with that framing, it’s not really helpful.
Slogans are not promises, they are vague feelings. In the case of Coca-Cola, I know someone who might literally agree with the happiness part of it (though I certainly wouldn’t).
The promises of Theranos and LLMs are concrete measurable things we can evaluate and report where they succeed, fall short, or are lies.
Sure, but equating Theranos and LLMs seems a bit disingenuous.
Theranos was an outright scam that never produced any results, whereas LLMs might not (yet?) have lived up to all the marketing promises (you might call them slogans?) made for them, but they definitely provide some real, measurable value.
I disagree with this perspective. Human labour is mostly inefficiency arising from habitual repetition built up through experience. LLMs tend not to improve that. They look like they do, but instead they train the user into replacing the human repetition with machine repetition.
We had an "essential" reporting function in the business which was done in Excel. All SMEs seem to have little pockets of this. Hours were spent automating the task with VBA to no avail. Then LLMs came in after the CTO became obsessed with it and it got hit with that hammer. This is four iterations of the same job: manual, Excel, Excel+VBA, Excel+CoPilot. 15 years this went on.
No one actually bothered to understand the reason the work was being done, and the LLM did not have any context. The report was being emailed weekly to a distribution list with no subscribers, as the last one had left the company 14 years ago. No one knew, cared, or even thought about it.
And I see the same in all areas LLMs are used. They are merely papering over incompetence, bad engineering designs, poor abstractions and low-knowledge situations. Literally no one cares about this as long as the work gets done and the world keeps spinning. No one really wants to make anything better, just do the bad stuff faster. If that's where something is useful, then we have fucked up.
Another one: I need to make a form to store some stuff in a database so I can do some analytics on it later. The discussion starts with how we can approach it with ReactJS + microservices + Kubernetes. That isn't the problem I need solving. People have been completely blinded to what a problem is and how to get rid of it efficiently.
That is not necessarily true. That would be like arguing there is a finite number of improvements between the rockets of today and Star Trek ships. To get warp technology you can’t simply improve combustion engines, eventually you need to switch to something else.
The same could apply to LLMs: there may be a hard wall that the current approach can’t breach.
The "walls" that stopped AI decades ago stand no more. NLP and CSR were thought to be the "final bosses" of AI by many - until they fell to LLMs. There's no replacement.
The closest thing to a "hard wall" LLMs have is probably online learning? And even that isn't really a hard wall, because LLMs are good at in-context learning, which does many of the same things, and they can do things like set up fine-tuning runs on themselves via the CLI.
Hallucinations are IMO a hard wall. They have gotten slightly better over the years, but you still get random results that may or may not be true, or rather, are somewhere between 0% and 100% true, depending on which part of the answer you look at.
OpenAI's o3 was SOTA, and valued by its users for its high performance on hard tasks - while also being an absolute hallucination monster due to one of OpenAI's RLVR oopsies. You'd never know whether it's brilliant or completely full of shit at any given moment in time. People still used o3 because it was well worth it.
So clearly, hallucinations do not stop AI usage - or even necessarily undermine AI performance.
And if the bar you have to clear is "human performance", rather than something like "SQL database", then the bar isn't that high. See: the notorious unreliability of eyewitness testimonies.
Humans avoid hallucinations better than LLMs do - not because they're fundamentally superior, but because they get a lot of meta-knowledge "for free" as a part of their training process.
LLMs get very little meta-knowledge in pre-training, and little skill in using what they have. Doesn't mean you can't train them to be more reliable - there are pipelines for that already. It just makes it hard.
I do think though that lack of online learning is a bigger drawback than a lot of people believe, because it can often be hidden/obfuscated by training for the benchmarks, basically.
This becomes very visible when you compare performance on more specialized tasks that LLMs were not trained for specifically, e.g. playing games like Pokemon or Factorio: general-purpose LLMs lag far behind humans at those.
But it's only a matter of time until we solve this IMO.
The wall is training data. Yes, we can make more and more post-training examples. No, we can never make enough. And there are diminishing returns to that process.
I didn’t say that is the case; I said it could be. Do you understand the difference?
And if it is the case, it doesn’t immediately follow that we would know right now what exactly the wall would be. Often you have to hit it first. There are quite a few possible candidates.
And there could be a teapot in an orbit around the Sun. Do we have any evidence for that being the case though?
So far, there's a distinct lack of "wall" to be seen - and a lot of the proposed "fundamental" limitations of LLMs were discovered to be bogus with interpretability techniques, or surpassed with better scaffolding and better training.
He’s not saying there is a hard wall; he’s saying there’s a point where we’ll need new techniques or technologies, not just refinements of the current one. Less a hard barrier like the speed of light than an innovation barrier, like creating synthetic ammonia to make industrial amounts of fertilizer to support increasing crop yields.
Pole-vaulting records improve incrementally too, and there is finite distance left to the moon. Without deep understanding, experience, and numbers to back up the opinion, any progress can seem about to reach arbitrary goals.
AI doomerism was sold by the AI companies as a sort of "learn it or you'll fall behind". But they didn't think it through: AI is now widely seen as a bad thing by the general public (except programmers who think they can deliver slop faster). Who would be buying a $200/month sub after getting laid off? I am not sure the strategy of spreading fear was worth it. I also don't think this tech can ever be profitable. I hope it burns more money at this rate.
The employer buys the AI subscription, not the employee. An employee who sends company code to an external AI is somebody looking for trouble.
In the case of contractors, the contractors buy the subscription, but they need authorization to give it access to the code. That's obvious if the code is the customer's property, but there might be NDAs even if the contractor owns the code.
If companies have very few employees, AI companies have to expect regular people to pay for AI access. But then who would be paying $200/month for the thing that took their job? With the cut-the-employees strategy, the AI companies also lose much more in revenue.
> it can still massively reduce the amount of human labour required for many tasks.
I want to see some numbers before I believe this. So far my feeling is that the best-case scenario is that it reduces the time needed for bureaucratic tasks, tasks that were not needed anyway and could have just been removed for an even greater boost in productivity. Maybe it is automating the tasks of junior engineers, tasks which they need to perform in order to gain experience and develop their expertise. Although I need to see the numbers before I believe even that.
I have a suspicion that AI is not increasing productivity by any meaningful metric that couldn’t be increased by much, much cheaper and easier means.
I don't think that's in any doubt. Even beyond programming (IMO especially beyond programming) there are a great many things they're useful for. The question is: is that worth the enormous cost of running them?
NFTs were cheap to produce, and that cost didn't really scale with the "quality" of the NFT. With an LLM, if you want to produce something at the same scale as OpenAI or Anthropic, the amount of money you need just to run it is staggering.
This has always been the problem: LLMs (as we currently know them) being a "pretty useful tool" is frankly not good enough for the investment put into them.
All of the professions it's trying to replace are very much at the bottom end of the tree: programmers, designers, artists, support, lawyers, etc. Meanwhile you could already replace management and execs with it and save 50% of the costs, but no one is talking about that.
At this point the "trick" is to scare white collar knowledge workers into submission with low pay and high workload with the assumption that AI can do some of the work.
And do you know a better way to increase your output without giving OpenAI/Claude thousands of dollars? It's morale: improving morale would increase output in a much more holistic way. Scare the workers and you end up with a spaghetti of everyone merging their crappy LLM-enhanced code.
"Just replace management and execs with AI" is an elaborate wagie cope. "Management and execs" are quite resistant to today's AI automation - and mostly for technical reasons.
The main reason being: even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks - which are exactly the kind of tasks the management has to handle. See: "AI plays Pokemon", AccountingBench, Vending-Bench and its "real life" test runs, etc.
The performance at long-horizon tasks keeps going up, mind - "you're just training them wrong" is in full force. But that doesn't change that the systems available today aren't there yet. They don't have the executive function to be execs.
> even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks
This sounds like a lot of the work engineers do as well. We're not perfect at it (though execs aren't either), but the work you produce is expected to survive long term; that's why we spend time accounting for edge cases and so on.
Case in point: the popularity of Docker/containerization. "It works on my machine" is generally fine in the short term, since you can replicate the conditions of the local machine relatively easily, but doing that again and again becomes a problem, so we prepare for that (a long-horizon task) by using containers.
Some management would be cut when the time comes. Execs, on the other hand, are not there for the work; they are there due to personal relationships, so they're impossible to fire. If you think someone like, let's say, Satya Nadella can't be replaced by a bot that takes different input streams and then makes decisions, then you are joking. Even his recent end-of-2025 letter was mostly written by AI.
If an AI exec reliably outperformed meatbag execs while demanding less $$$, many boards would consider that an upgrade. Why gamble on getting a rare high performance super-CEO when you can get a reliable "good enough"?
The problem is: we don't have an AI exec that would outperform a meatbag exec on average, let alone reliably. Yet.
Yeah. Obviously. Duh. That's why we keep doing it.
Opus 4.5 saved me about 10 hours of debugging stupid issues in an old build system recently - by slicing through the files like a grep ninja and eventually narrowing down onto a thing I surely would have missed myself.
If I were to pay for the tokens I used at API pricing, I'd pay about $3 for that feat. Now, come up with your best estimate: what's the hourly wage of a developer capable of debugging an old build system?
For reference: by now, the lifetime compute use of frontier models is inference-dominated, at a ratio of 1:10 or more. And API prices at all the major providers represent selling the model with a good profit margin.
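Back-of-the-envelope, with assumed per-token rates and hypothetical token counts (not Anthropic's actual published pricing), the $3 figure works out like this:

    # Rough cost estimate for an agentic debugging session.
    # Rates and token counts are assumptions for illustration only.
    input_rate = 5 / 1_000_000    # $ per input token (assumed)
    output_rate = 25 / 1_000_000  # $ per output token (assumed)

    input_tokens = 400_000        # hypothetical session total
    output_tokens = 40_000

    cost = input_tokens * input_rate + output_tokens * output_rate
    print(f"Estimated session cost: ${cost:.2f}")  # -> $3.00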
So could the company hiring you to do that work fire you and just use Opus instead? If not, then you cannot compare an engineer's salary to what Opus costs, because the engineer is needed anyway.
> And API costs at all major providers represent selling the model with a good profit margin.
Though we don't know for certain, this is likely false. At best, it looks like break-even, but if you look at Anthropic, they cap their API spend at just $5,000 a month, which sounds like a stop-loss. If it were making a good profit, they'd have no reason to have a stop-loss (and certainly not that low).
> Yeah. Obviously. Duh. That's why we keep doing it.
I don't think so. I think what is promised is what keeps spending on it so high. I'd imagine if all the major AI companies were to come out and say "this is it, we've gone as far as we can", investment would likely dry up.
But now instead of spending 10 hours working on that, he can go and work on something else that would otherwise have required another engineer.
It's not going to mean they can employ 0 engineers, but maybe they can employ 4 instead of 5 - and a 20% reduction in workforce across the industry is still a massive change.
That's assuming a near-100% success rate from the agent, meaning it's not something he needs to supervise at all. It also assumes the agent can take on the task completely, meaning he can go do something else that would normally occupy the time of another engineer, rather than simply doing something else within the same task (from the sounds of things, it was helping with debugging, not necessarily actually solving the bug). Finally, and most importantly, the 20% reduction in workforce assumes it can do this consistently well across any task. Saving 10 hours on one task is very different from saving 10 hours on every task.
Assuming all the stars align though and all these things come true, a 20% reduction in workforce costs is significant, but again, you have to compare that to the cost of investment, which is reported to be close to a trillion. They'll want to see returns on that investment, and I'm not sure a 20% cut (which, as above, is looking like a best case scenario) in workforce lives up to that.
Yeah, but we also haven't seen what making actually decent music or movies or whatever with AI will look like. Maybe it simply won't be possible and there will not be a market for it.
But if it is possible it's probably going to be a lot more involved than just '"video of cute cartoon cat, Pixar style" into a prompt'.
Though relatively old in the AI world (2023), it's still quite interesting.
In case you can't access the article, the prompt used is:
> 35mm, 1990s action film still, close-up of a bearded man browsing for bottles inside a liquor store. WATCH OUT BEHIND YOU!!! (background action occurs)…a white benz truck crashes through a store window, exploding into the background…broken glass flies everywhere, flaming debris sparkles light the neon night, 90s CGI, gritty realism
I felt a lot safer when I was a young grad than now that I have kids to support and I can't just up and move to wherever the best job opportunity is or live off lentils to save money or whatever.
Yeah, kids change the landscape a lot. On the other hand, if you don't have any personal ties, it's easier to grab opportunities, but you are unlikely to build any kind of social network when chasing jobs all over the country/world.
Either way, there is little to no path toward the "family + place to live + stable job" model.
When I was single with no kids, I felt pretty comfortable leaving a good job to join a startup. I took a 50% pay cut to join when the risk seemed high, but the reward also seemed high.
It paid off for me, but who knows if I would have taken that leap later in life.
There must be "dozens of us" with this fear right now. I'm kinda surprised there isn't a rapidly growing place for us to discuss this... (YouTube, an X account, a Discord server...)
I don't have a college degree either. I am about 50. I have never been unemployed and have had high paying software dev jobs my entire adult life. Your claim that the lack of degree is the only thing holding you back is very much incorrect.
I suspect the problem is elsewhere and you are unwilling or uncomfortable to discuss it.
I'm confused as to why someone who freely admits they have been broke & unemployed for 15 years feels they are qualified to provide "advice", make critical judgement calls about others and brag about their awesomeness.
>> My actual accomplishments in the world of computing ... are the stuff of legends
> "going back to school" to learn what I already know pretty damn well already, given that I've been programming since I was 8
It's small consolation if sitting in a classroom is something you truly hate, but the guys who are programming pros before they go into a CS program are very often the ones who do really well and get the most out of it.
I created my first Linux from scratch when I was a freshman in college in a third-world country (not India). Fast forward a few years, and I now write Linux kernel code for a living. Not sure what you did wrong, bud, to end up miserable like this.
Protip: When you consistently present yourself as somebody with a massively inflated ego who will be a constant pain to interact with, nobody's going to hire you, skills or not.
I left high school with average results and immediately got a job as a junior web developer, and I’m nothing special. I feel there must be more to this story… You don’t come off very well in your post, I imagine it could be the same in person and perhaps therein lies the issue?
> There is MUCH you still have to learn about life.
This response, along with your OP, is so pretentious and condescending. It seems you feel that you're superior to everyone intellectually. I assume you hold the same attitude in person, and this is not helping your situation.
The irony is that I’ve done exactly this. I tried to start a business in my early 20s and failed dramatically. I stopped developing altogether for a decade while I did minimum-wage jobs and struggled to find a career. I started developing again in my early 30s, and half a decade later I’m running a software business.
You may well be intelligent but severely lacking in other necessary areas. It seems it is you who has much to learn.
I'm on the flip side of this: not exactly young, but no dependants, which is making me a little bit less nervous. Seems like the next 20 years will be a wild ride, and it doesn't seem optional, so let's go I guess.
True. This is one of the best arguments for not having kids. I could never imagine putting myself in that uncertain situation. Much better to reduce those risks, and focus on yourself.
Having kids is a personal choice. The stress of having to support them is real and it might mean, at times, you sacrifice more than you would have without kids.
It's been entirely worth it for me and I cannot imagine my life without kids. But it's a deeply personal choice and I am not buying or selling the idea. I would just say nobody is ever ready and the fears around having them probably are more irrational than rational. But not wanting them because of how it might change your own life is a completely valid reason to not have kids.
> the fears around having them probably are more irrational than rational
My $0.02 is that if anything, the fears people have about how much their lives would be transformed are significantly lacking, and a lot of the "it's not so bad" advice is post-hoc rationalization. I mean, it's evolutionarily excellent that we humans choose to have kids, but it's very rational to be afraid and to postpone or even fully reject this on an individual basis. And as an industry and as a society, we should probably do a lot more to support parents of young children.
Ya, this is a fair callout. I more so meant fears around being a bad parent. If anything, people experiencing those fears will be fine parents, because they've got the consideration to already be thinking about doing a good job for their newborn.
I mean it's useful for some things, mainly as a complement to Stack Overflow or Google.
But the hallucination problem is pretty bad, I've had it recommend books that don't actually exist etc.
When using it for studying languages I've seen it make silly mistakes and then get stuck in the typical "You're absolutely right!" loop, and the same when I've asked it how to do something with a particular Python library that turns out not to be possible with that library.
But it seems the LLM is unable to just tell me it's not possible, so instead it goes round and round in loops generating code that doesn't work.
So yeah, it has some uses, but it feels a long way off the revolutionary panacea they are selling it as, and issues like hallucinations are so innate to how LLMs function that it may not be possible to solve them.