I'm not the person you asked this of, but I've worked in museums and research settings and can lob a response your way.
Ultimately, it's that scientists are humans, too. Even when some of them make their research genuinely data-forward, things like tenure, career, funding, and even who will publish your work now and in the future all create normal human environments that reward small, incremental additions to a body of knowledge that don't upset the apple cart, not discoveries that suggest huge changes. In fact, large changes and discoveries can be resisted and denied further research in favor of the status quo.
This is not a new phenomenon by any means:
Warm-blooded dinosaurs and the Chicxulub impact were both theories dismissed as fringe for decades before overwhelming evidence led to them being accepted as likely. In no small way thanks to Jurassic Park.
Recall that eugenics and phrenology both used to be widely accepted scientific "fact."
100 fairly prominent scientists signed a letter stating emphatically that Einstein's Theory of Relativity was categorically wrong and should be retracted.
Plate tectonics was seen as fanciful crackpot musings for decades. The author of the original theory died 30 years before plate tectonics was even considered possible.
Germ theory was dismissed for most of Louis Pasteur's lifetime, despite him literally being able to show people yeast under a microscope.
Heliocentrism has a storied past.
Quantum theory was also heavily resisted at first. Now it saves photos to our hard drives.
And how many times have the earliest dates of hominids, tool use, and human thresholds of development been pushed back by tens of thousands of years?
This is not an exhaustive list, by any means.
So we have ancient examples and modern ones - and everything in between. The level of education or scientific progress or equipment is not the cause. Humans are. Humans do this all the time. So until overwhelming evidence surfaces, which can take decades or longer, claims like this shouldn't be dismissed out of hand until proven solidly in error. A theory is a theory, so let it be a theory.
> Warm-blooded dinosaurs and the Chicxulub impact were both theories dismissed as fringe for decades before overwhelming evidence led to them being accepted as likely. In no small way thanks to Jurassic Park.
The main rejection of the impact hypothesis was that the dinosaurs had already died off by the time of the impact; the idea that the iridium in the layer came from an impact was reasonably well received. In 1984 a survey found 62% of paleontologists accepted that the impact occurred, but only 24% believed it caused the extinction. The Alvarez duo, who proposed the impact hypothesis, were proposing to redefine where the Cretaceous ended based on a new dating method (at the time the end of the Cretaceous was believed to be a layer of coal a few meters off from the now accepted boundary), and fossil evidence at the time seemed to show gradual decline. A big part of the acceptance of the theory was the development of new analysis methods that showed the evidence for a gradual extinction prior to the impact to be illusory. By the time the impact crater was identified, it was already the dominant theory. In fact, in the early 90s major journals were accused of being unfairly biased in favor of the impact hypothesis, with many more papers published in favor than against.
Completely coincidentally, the theory that the Chicxulub structure was an impact crater was initially rejected, and it wasn't until 1990 that cores sampled from the site proved it was.
Dinosaurs being warm blooded was well accepted by the late 70s.
You've worked in those settings, and you think archaeologists reject tool use older than 1 mya?
Also, you don't understand that science is a process, based on evidence, and that revision is an essential part of that process? Archaeology especially advances regularly, because evidence can be relatively rare. If they weren't revising it, it would mean the whole research enterprise - to expand knowledge - was failing.
> how many times have the earliest dates of hominids, tool use, and human thresholds of development been pushed back by tens of thousands of years?
I don't know, how many times? Tool use is universally believed, in the field, to have begun at least 2.58 million years ago, and with strong evidence for 3.3 mya. Tens of thousands of years isn't in the debate. See this subthread:
>Also, you don't understand that science is a process, based on evidence, and revision is an essential part of that process?
I do, and the process is exactly the point: human emotions affect the process far more often than we like to admit. Not always, but it's not completely removed from the process by any means.
In each of those cases, it's that no one says, "Oh, new theory, new evidence. Cool, let's test the hell out of it!"
People in positions of relative power sometimes say, "New theory? Nope. Not even going to look at it. No, in fact, you're crazy and you're wrong and get outta here!"
In each of those examples, to some degree the eventual, more accurate theory met emotional resistance from people adhering to the status quo, not resistance because of questionable data, poor research methods, or non-reproducibility.
>So until overwhelming evidence surfaces, which can take decades or longer, claims like this shouldn't be dismissed out of hand until proven solidly in error. A theory is a theory, so let it be a theory.
I like how the word “overwhelming” is doing a lot of heavy lifting here.
Imagine if those 100 scientists had gotten their way and Einstein had retracted his relativity paper. It would have taken decades of observations of gravitational lensing before someone else proposed that gravity affects light, and why, and then said "huh.... yeah, I guess this other guy had a similar theory a while back."
Imagine if 100 scientists had gotten together to refute the theory of Yakub. Yet many just dismiss it out of hand. Guess it's a valid theory until such a time comes that science devotes sufficient attention to it that an overwhelming number of scientists spend their time specifically proving it wrong or right.
>Warm-blooded dinosaurs and the Chicxulub impact were both theories dismissed as fringe for decades before overwhelming evidence led to them being accepted as likely. In no small way thanks to Jurassic Park.
I mean that's how science works. Things can be dismissed until they're proven true. If there's a valid path to finding out it's true then you can try to get funding, it just takes work and convincing people as you're competing for sparse resources. And getting egg on your face is also part of the process.
So you're saying it's a good thing to dismiss potential new discoveries because of feels? Not investigate further, not look for additional data to refute the theory or not. Just dismiss as crackpot BS? IIRC, that's not how science works.
Yes, you can dismiss things when a theory doesn't have any evidence and also doesn't work with current evidence. Like you can dismiss my theory of the moon being made of cheese; there might be some under the crust, we haven't looked.
It took about 30 years for every geologist to reach consensus on tectonic plates and continental drift. Old heads who'd invested a lot of their credibility arguing against it had a lot to lose by admitting they were wrong, so they refused to do it.
Bill Bryson's book A Short History of Nearly Everything is where I'm taking that from. It's a great read and shows all the ways in which scientists failed to see what was under their noses for decades before finally figuring it out, which makes one wonder what's currently ripe for the picking.
I think it just doesn't fit into the accepted timeline so it's mostly ignored. This is a common pattern with scientific discovery where evidence that contradicts the prevailing paradigm is ignored and builds up until it can no longer be ignored and causes a paradigm shift. This idea comes from The Structure of Scientific Revolutions by Thomas Kuhn.
True. But the systems are more and more breaking down. It's unsustainable. At least that's what I can tell from Germany and the Netherlands.
To see a healthcare specialist, you wait 3-6 months in some cases.
Not even talking about the trains. Germany's DB runs on time in only 50% of cases.
So that's a big problem.
My partner has had three extensive cancer treatments in the Netherlands. She has had dietary and psychological specialists help her during and after each one.
All of this was just on normal health insurance and with normal clinics and hospitals.
Never did she have to wait more than perhaps 3 weeks tops for an appointment.
The medical system here is world class.
However, Germany and its infrastructure cannot be compared to the Netherlands. I refuse to take trains through that country anymore.
I was talking about Germany's infrastructure. Last year I had 3 separate trips turn into chaos due to how broken their system is. Broken trains, broken track infrastructure, etc. Think multiple hours of delay on each trip rather than just 10 minutes.
The trains that are 10 min late in Germany mostly don't even exist in many other countries. Sure, Switzerland is the best, but Germany is pretty high up. It's just less good than it used to be. Oh, and you can ride almost everywhere for 60 EUR / month.
For healthcare, if you get an IT salary you can either move to private insurance, or buy additional insurance, or just pay for a consultation yourself, for a fee that US people won't believe.
The last 7 times I took the ICE, I had 5 delays. 3 times the restaurant wasn't available. Twice they didn't stop at my destination and I had to rent a car.
So yeah, I try to travel now either by car or plane. But even going by car is terrible, especially in the south. More construction sites every year and none of them ever finish.
Health care is totally broken if you don't have private insurance. My step dad, who does, gets an appointment 1 day after he calls. My grandma, who worked all her life and is now on public insurance, needs to wait 5 months IN PAIN.
The system is breaking down in front of our very eyes.
I am not living in Germany. I moved to the NL, but the situation is very similar.
That's very alarmist, sensational and dramatic. The systems are going through some tough times, but they are not breaking down; that's what children would say to make their life more like a Hollywood movie.
My father had to go through multiple appointments and analyses to get his prostate and hernia checked. He never waited more than a week and paid 0 in total. Before, he'd probably only have had to wait a couple of days for appointments, but the stress the healthcare system is currently undergoing is abnormal due to the more aggressive cases of flu this season. All things considered, things are not "breaking down" (I'm even getting some second-hand embarrassment reading those words).
> At least that's what I can tell from Germany and the Netherlands. To see a healthcare specialist, you wait 3-6 months in some cases.
Same in France, it can take a while to get an appointment to see some specialists nowadays. There's a clear decline there.
But if you have something bad, they'll treat you in time. Actually, a relative of mine was diagnosed with cancer not long ago. She got several surgeries and all the treatments with no wait, and at no cost.
There's no reason why it shouldn't be sustainable.
> Science is characterized by objective empiricism; it relies on third-person observation, quantifiable data, and the principle of falsifiability to build a predictive model of the external, material world. Its goal is to establish "public" knowledge that remains true regardless of who is observing it.
That's only really true of the natural sciences. The cultural sciences (the humanities) are of a different kind. Here, we don't look for universal truths and laws but for meanings and interpretations. And they come from the Western philosophical tradition.
And conversely, Eastern philosophy often centers on phenomenology, using first-person introspection to realize the nature of existence and consciousness itself.
I see the same thinking in philosophy. We know a lot about the great thinkers of the West, from Plato to Aristotle, to Jesus, to Thomas Aquinas, to Descartes, to Kant, to Hegel, to Nietzsche, to Heidegger, to Foucault, and so on... It's one Western-European lineage. And many of the Western philosophers were indeed supremacists. They saw Western philosophy as the pinnacle of human thought, the most advanced way of reasoning and understanding. This mindset obviously got them trapped.
But there is much to learn from other philosophies. China is the world's oldest continuous civilization. Surely there were some great thinkers besides Confucius. Same with India. Last week I attended a lecture about the Upanishads, and so much of the wisdom in there can be mapped, more or less specifically, to wisdom from Western philosophy.
There is an interesting field of study emerging, Comparative Philosophy, with the aim of bringing it all together. (See for instance, https://studiegids.universiteitleiden.nl/courses/133662/comp...).
Do you feel like you're beginning to _really_ understand React and Tailwind, the major tools that you seem to use now?
Do you feel that you will become so well-versed in them that you will be able to debug weird edge cases in the future?
Will you be able to reason about performance? Develop a deep intuition for why pattern X doesn't work in React but pattern Y does, etc.?
I personally learned that this learning is not happening. My knowledge of the tools I used LLMs for stayed pretty superficial. I became dependent on the machine.
These are things I pull out like 2-3 times a year normally, so I don’t feel like that makes a huge difference.
I've been learning Zig, and using LLMs clearly did hamper my ability to actually write code myself, which was the goal of learning Zig, so I've seen this too.
It is important to make the right choice of when/how to use these tools.
The number of use cases for which I use AI is actually rapidly decreasing. I don't use it anymore for coding, I don't use it anymore for writing, I don't use it anymore for talking about philosophy, etc.
And I use 0 agents, even though I am (was) the author of multiple MCP servers.
It's just all too brittle and too annoying. I feel exhausted when talking too much to those "things"... I am also so bored of all those crap papers being published about LLMs. Sometimes there are some gems, but it's all so low-effort. LLM papers bore the hell out of me...
Anyway, by cutting out AI for most of my stuff, I really improved my well-being.
I found the joy back in manual programming, because soon I will be one of the few who actually understand stuff :-).
I found the joy in writing with a fountain pen in a notebook and since then, I retain so much more information. Also a great opportunity for the future, when the majority will be dumbed down even more.
And for philosophical interaction, I joined an online university and just read the actual books of the great thinkers and discuss them with people and knowledgeable teachers.
What I still use AI for is to correct my sentences (sometimes) :-).
It's kinda the same as when I cut all(!) social media a while ago. It was such a great feeling to finally get rid of all those mind-screwing algorithms.
I don't blame anyone if they use AI. Do what you like.
> Typewriters and printing presses take away some, but your robot would deprive us of all. Your robot takes over the galleys. Soon it, or other robots, would take over the original writing, the searching of the sources, the checking and crosschecking of passages, perhaps even the deduction of conclusions. What would that leave the scholar? One thing only, the barren decisions concerning what orders to give the robot next!
From Isaac Asimov. Something I have been contemplating a lot lately.
I technically use it for programming, though really for two broad things:
* Sorting. I have never been able to get my head around sorting arrays, especially in the Swift syntax. Generating them is awesome.
* Extensions/Categories in Swift/Objective C. "Write me an extension to the String class that will accept an array of Int8s as an argument, and include safety checks." Beautiful.
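For anyone curious what those two requests might look like in practice, here is a minimal hand-written sketch (not the parent's actual generated output). The initializer name safeInt8Array and the particular "safety checks" (empty input, NUL termination, UTF-8 validity) are my own assumptions about what was asked for:

    import Foundation

    // First bullet: sorting an array with Swift's trailing-closure syntax.
    let sortedScores = [3, 1, 2].sorted { $0 < $1 }   // [1, 2, 3]

    // Second bullet: a String extension taking an array of Int8s, with safety checks.
    extension String {
        /// Fails (returns nil) on empty input or invalid UTF-8.
        init?(safeInt8Array bytes: [Int8]) {
            // Reject empty input early.
            guard !bytes.isEmpty else { return nil }

            // Stop at the first NUL terminator, if any, and reinterpret the
            // signed bytes as UInt8 for UTF-8 decoding.
            let unsigned = bytes
                .prefix(while: { $0 != 0 })
                .map { UInt8(bitPattern: $0) }

            // Delegation fails if the bytes are not valid UTF-8.
            self.init(bytes: unsigned, encoding: .utf8)
        }
    }

    // Usage: "Hi" as signed, NUL-terminated bytes.
    let greeting = String(safeInt8Array: [72, 105, 0])   // Optional("Hi")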
That said I don't know why you'd use it for anything more. Sometimes I'll have it generate like, the skeleton of something I'm working on, a view controller with X number of outlets of Y type, with so and so functions stubbed in, but even that's going down because as I build I realize my initial idea can be improved.
I've been using LLMs as calculators for words: they can summarize, spot, and correct things, but they can often be wrong about it - especially when I have to touch a language I haven't used in a while (Python, PowerShell, Rust as recent examples), or a sub-system (SuperPrefetch on Windows, or why audio is dropping on coworkers' machines when they run some of the tools, and the like... don't ask me why), and all kinds of obscure subjects (where I'm sure experts exist, but when you need them they are not easy (as in "nearby") to reach, and even then might not help).
But now my grain of salt has increased - it's still helpful, but much like a real calculator there is a limit (in precision) to what it can do.
For one it still can't make good jokes :) (my litmus test)
This is also my experience with (so called) AI. Coding with AI feels like working with a dumb colleague that constantly forgets. It feels so much better to manually write code.
I started to code with them when Cursor came out. I've built multiple projects with Claude and thought that this is the freaking future.
Until all joy disappeared and I began to hate the whole process. I felt like I didn't do anything meaningful anymore, just telling a stupid machine what I want and letting it produce very ugly output.
So a few months ago, I just stopped. I even went back to Vim....
I am a pretty idealistic coder, who always thought of coding as an art in itself. And using LLMs robbed me of the artistic aspect of actually creating something. The process of creating is what I love and what gives me the inspiration and energy to actually do it. When a machine robs me of that, why would I continue to do it? Money then being the only answer... A dreadful existence.
I am not a Marxist, probably because I don't really understand him, but I think LLMs are the "alienation of labor" applied to coders, IMHO. Someone should really do a phenomenological study on the "Dasein" of a coder with an LLM.
Funnily, I don't see any difference in productivity at all. I have my own company and I still manage to get everything done on deadline.
I'll need to read more about this ("Dasein") as I was not aware of it. Yesterday our "adoptive" family had a very nice Thanksgiving, and we were considered youngsters (close to our 50s) among our hosts & guests, and this came up multiple times when we were discussing AI among many other things - "the joy of work", the "human touch", etc. I usually don't fall for these feel-good talks, but now that you mentioned this it hit me: what would I do if something like AI completely replaced me (if ever)?
If you speak fluent Japanese and you don't practice, you will remember being fluent but no longer actually be able to speak fluently.
It's true for many things; writing code is not like riding a bike.
You can't not write code for a year and then come back at the same skill level.
Using an agent is not writing code; but using an agent effectively requires that you have the skill of writing code.
So, after using a tool that automatically writes code for you, that you probably give some superficial review to, you will find, over time, that you are worse at coding.
You can sigh and shake your head and stamp your feet and disagree, but it's flat out a fact of life:
If you dont practice, you lose skill.
I, personally, found this happening, so I now do 50/50 time: 1 week with AI, 1 week with strictly no AI.
If the no AI week “feels hard” then I extend it for another week, to make sure I retain the skills I feel I should have.
Anecdotally, here at $corp, I see people struggling because they are offloading the “make an initial plan to do x that I can review” step too much, and losing the ability to plan software effectively.
Don't be that guy.
If you offload all your responsibilities to an agent and sit playing with your phone, you are making yourself entirely replaceable.
I cannot talk for OP, but I have been researching ways to make ML models learn faster, which obviously is a path that will be full of funny failures. I'm not able to use ChatGPT or Gemini to edit my code, because they will just replace my formulas with SimCLR and call it done.
That's it, these machines don't have an original thought in there. They have a lot of data, so they seem like they know stuff, and they clearly know stuff you don't. But go off the beaten path and they gently but annoyingly try to steer you back.
And that's fine for some things. Horrible if you want to do non-conventional things.
I liken it to a drug that feels good over the near term but has longer term impacts.. sometimes you have to get things out of your system. It's fun while it lasts and then the novelty wears off. (And just as some people have the tolerance to do drugs for much longer periods of time than others, I think the same is the case for AI)
You managed to move the goalposts in two sentences; if you realized that your first claim is wrong you probably should have rewrote it rather than try to save it at the end.
The economics of the force multiplier is too high to ignore, and I'm guessing SWEs who don't learn how to use it consistently and effectively will be out of the job market in 5 or so years.
Back in the early 2000s the sentiment was that IDEs were a force multiplier that was too high to ignore, and that anyone not using something akin to Visual Studio or Eclipse would be out of a job in 5 or so years. Meanwhile, 20 years later, the best programmers you know are still using Vim and Emacs.
But the vast majority are still using an IDE - and I say this as someone who has adamantly used Vim with plugins for decades.
Something similar will happen with agentic workflows - those who aren't already productive with the status quo will have to eventually adopt productivity enhancing tooling.
That said, it isn't too surprising if the rate of AI adoption starts slowing down around now - agentic tooling has been around for a couple years now, so it makes sense that some amount of vendor/tool rationalization is kicking in.
It remains to be seen whether these tools are actually a net enhancement to productivity, especially accounting for longer-term / bigger-picture effects -- maintainability, quality assurance, user support, liability concerns, etc.
If they do indeed provide a boost, it is clearly not very massive so far. Otherwise we'd see a huge increase in the software output of the industry: big tech would be churning out new products at a record rate, tons of startups would be reaching maturity at an insane clip in every imaginable industry, new FOSS projects would be appearing faster than ever, ditto with forks of existing projects.
Instead we're getting an overall erosion of software quality, and the vast majority of new startups appear to just be uninspired wrappers around LLMs.
I'm not necessarily talking about AI code agents or AI code review (workflows which I think are difficult for agentic workflows to really show a tangible PoV against humans, but I've seen some of my portfolio companies building promising capabilities that will come out of stealth soon), but various other enhancements such as better code and documentation search, documentation generation, automating low sev ticket triage, low sev customer support, etc.
In those workflows and cases where margins and dollar value provided is low, I've seen significant uptake of AI tooling where possible.
Even reaching this point was unimaginable 5 years ago, and is enough to show workflow and dollar value for teams.
To use another analogy, using StackOverflow or Googling was viewed derisively by neckbeards who constantly spammed RTFD back in the day, but now no developer can succeed without being able to be a proficient searcher. And a major value that IDEs provided in comparison to traditional editors was that kind of recommendation capability along with code quality/linting tooling.
Concentrating on abstract tasks where the ability to benchmark between human and artificial intelligence is difficult means concentrating on the trees while missing the forest.
I don't foresee codegen tools replacing experienced developers but I do absolutely see them reducing a lot of ancillary work that is associated with the developer lifecycle.
> I've seen significant uptake of AI tooling where possible.
Uptake is orthogonal to productivity gain. Especially when LLM uptake is literally being forced upon employees in many companies.
> I do absolutely see them reducing a lot of ancillary work that is associated with the developer lifecycle.
That may be true! But my point is they also create new overhead in the process, and the net outcome to overall productivity isn't clear.
Unpacking some of your examples a bit --
Better code and documentation search: this is indeed beneficial to productivity, but how is it an agentic workflow that requires individual developers to adopt and become productive with, relative to the previous status quo?
Documentation generation: between the awful writing style and the lack of trustworthiness, personally I think these easily reduce overall productivity, when accounting for humans consuming the docs. Or in the case of AI consuming docs written by other AI, you end up with an ever-worsening cycle of slop.
Automating low sev ticket triage: Potentially beneficial, but we're not talking about a revolutionary leap in overall team/org/company productivity here.
Low sev customer support: Sounds like a good way to infuriate customers and harm the business.
Hard agree on documentation. In my view, generated documentation is utterly worthless, if not counterproductive. The point of documentation is to convey information that isn't already obvious from the code. If the documentation is just a padded, wordy text extrapolated from the code, reading it is a complete waste of time.
Another thing here is that LLMs don't have to be a productivity boost if they let you be lazier. Sometimes I'll have an LLM do something and it doesn't save time compared to me doing it, but I can fuck off while it's working and grab a drink or something. I can spend my mental energy on hard problems rather than looking through docs to find all of the right functions and plumb things in the code.
OK, but LLMs are being valued as if they are one of the most important technologies ever created. How much will companies pay for a product that doesn't boost productivity but allows employees to be lazier?
I think no one can predict what will happen. We need to wait until we can empirically observe who will be more productive on certain tasks.
That's why I started with AI coding. I wanted to hedge against the possibility that this takes off and I am useless. But it made me sad as hell and so I just said: Screw it. If this is the future, I will NOT participate.
The good thing is that the selling point of LLM tools is that they're dead easy to use, so even if you find yourself having to use them in the future, it won't be an issue. I know the AI faithful love talking about how non-believers will be "left behind", and stylize prompt engineering as some kind of deeply involved, complex new science, but it really isn't. As more down-to-earth AI fanatics have confirmed to me, it'll probably take you an afternoon of reading some articles on best practices and you'll be back amongst the best of them. This isn't like learning a new language or framework.
That’s fine, but you don’t want to be blind sided by changes in the industry. If it’s not for you, have a plan B career lined up so you can still put food on the table. Also, if you are good at old fashioned SE and AI, you’ll be OK either way.
As someone who uses vim full time, all that happened is people started porting all the best features of IDEs over to vim/emacs as plugins. So those people were right; it's just that the features flowed.
Pretty sure you can count the number of professional programmers using vanilla vim/neovim on one hand.
It depends where you work. In gaming, the best programmers I know might not even touch the command line / Linux, and their "life" depends on Visual Studio... Why? Because the ecosystem around Visual Studio / Windows and the way game console devkits work are pretty much tied together - while Playstation is some kind of BSD, and maybe Nintendo too - all their proper SDKs are just for Windows and tied to Visual Studio (there are some studios that are exceptions, but they're rare).
I'm sure other industries would have their similar examples. And then the best folks in my direct team (infra), much smaller - are the command-line, Linux/docker/etc. guys that use mostly VSCode.
Ya, you have to shape your code base; not just that, but get your AI to document your code base and come up with some sort of pipeline to have different AIs check things.
It's fine to be skeptical, and I definitely hope I'm wrong, but it really is looking bad for SWEs who don't start adopting at this point. It's a bad bet in my opinion; at least have your F-u money built up in 5 years if you aren't going all in on it.
The learning curve is actually huge. If you just vibe code with AI, the results are going to suck. You basically have to reify all of your software engineering artifacts and get AI to iterate on them and your code as if it were an actual software engineer (who forgets everything whenever you reboot it, which is why you have to make sure it can re-read artifacts to get its context back up to speed again). So a lot more planning, design, and test documentation than you would do in a normal project. The nice thing is that AI will maintain all of it as long as you set up the right structure.
We are also in the early days still, I guess everyone has their own way of doing this ATM.
By this point you've burnt up any potential efficiency gains. So you spent a lot of hours learning a new tool which you then have to spend a lot of additional hours babysitting and correcting, so much that you'll be very far from those claimed productivity gains. Plus the skills you need to verify and fix it will atrophy. So that learning curve earns you nothing except the ability to put "AI" somewhere on your CV, which I expect will lose a lot of its lustre in 1-2 years' time, when everybody has had enough experiences with vibe coders who don't, or no longer can, ensure the quality of their super-efficient output.
Speaking as someone with a ton of experience here.
None of the things they do can go without immense efforts in validation and verification by a human who knows what they're doing.
All of the extra engineering effort could have been spent just making your own infrastructure and procedures far more resilient and valuable to far more people in your team and yourself going forward.
You will burn more and more hours over time because of relying on LLMs for ANYTHING non-trivial. It becomes a technical debt factory.
That's the reality.
Please stop listening to these grifters. Listen to someone who actually knows what they're talking about, like Carl Brown.
That's interesting, but how much of this, if written down, documented, and made into video tutorials, could be learnt by just about any good engineer in 1-2 weeks?
I don't see much yet; maybe everyone is just winging it until someone influential gives it a name. The vibe coding crowd have set us back a lot, and really so did the whole leetcode interview fad that we are just now throwing off. It's kind of obvious though: just tell the AI to do what a normal junior SWE does (like write tests), but write a lot more documentation because they forget things all the time (a junior engineer who makes more mistakes, so they need to test more, and remembers nothing).
The concepts in the LLM's latent space are close to each other, and you find them by asking in the right way, so if you ask like an expert you find better stuff.
For it to work best you should be an expert in the subject matter, or something equivalent.
You need to know enough about what you're making not just to specify it, but to see where the LLM is deviating (perhaps because you needed to ask more specifically).
There is, effectively, a "learning curve" required to make them useful right now, and a lot of churn on technique, because the tools remain profoundly immature and their results are delicate and inconsistent. To get anything out of them and trust what you get, you need to figure out how to hold them right for your task.
But presuming that there's something real here, and there does seem to be something, eventually all that will smooth out and late adopters who decide they want to use the tools will be able to onboard themselves plenty fast. The whole vision of them is to make the work easier, more accessible, and more productive, after all. Having a big learning curve doesn't align with that vision.
Unless they happen to make you more significantly productive today on the tasks you want to pursue, which only seems to be true for select people, there's no particular reason to be an early adopter.
- we are far removed from “early adopter” stages at this point
- “eventually all that will smooth out…” is assuming that this is eventually going to be some magic that just works - if this actually happens both early and late adopters will be unemployed.
It is not magic, and it is unlikely to ever be magic. But from my personal perspective and many others I read: if you spend time (I am now just over 1,200 hours spent, I bill it so I track it :) ) it will pay dividends (and also will feel like magic occasionally).
I've been hacking for 3 decades, so well north of 1,200 hours... In my career the one trait that always seems to differentiate great SWEs from decent/mediocre/awful ones is laziness.
The best SWEs will automate anything they have to manually do more than once. I have seen this over and over and over again. LLMs have taken automation to another level, and learning everything they can help automate in my work will be worth 12,000+ hours in the long run.
What is this fantasy about people being unemployed? The layoffs we’ve seen don’t seem to be discriminating against or in favor of AI - they appear to be moves to shift capital from human workers to capex for new datacenters.
It doesn't appear like anything of this sort is happening, and the idea that a good employer with a solid technical team would start firing people for not "knowing AI" instead of giving them a 2-week intro course seems unrealistic to me.
The real nuts and bolts are still software engineering. Or is that going to change too?
I don't think there will be massive unemployment based on actual "AI has removed the need for SWEs of this level..." kind of talk, but I was specifically commenting on "eventually all that will smooth out and late adopters who decide they want to use the tools will be able to onboard themselves plenty fast." ... If this actually did happen (it won't), then we'd all have to worry about being unemployed.
I mean, yeah they did, in this sense, literally all the time. The people who generated crap by copy-pasting from Stack Overflow, or who generated scaffolding with tools without understanding it, were literally the kind of programmers you tried to weed out.
It's the opposite. The more you know how to do without them, the more employable you are. AI has no learning curve, not at the current level of complexity anyway. So anyone can pick it up in 5 years, and if you've used it less your brain is better.
With all due respect, claiming "AI has no learning curve" can be an effective litmus test to see who has actually dug into agentic AI enough to give it a real evaluation. Once you start to peel back the layers of how to get good output, you understand just how much skill is involved. It's very similar to being a "good googler". Yeah, on its face it seems like it shouldn't be a thing, but there absolutely are levels to it, and it's a skill that must be learned.
Good. The smartest and best should be cutting out middlemen and selling something of their own instead of keeping on shoveling all the money up the company pyramids. I think it will become easier and easier to spot the pyramids' trash and avoid it.
> ... SWEs who don't learn how to use it consistently ...
An SWE does not need to "learn" Claude Code any more than someone who does not know programming at all does in order to use the tool effectively. What actually matters is that they know how things should be done without coding assistants, they understand what the tools may be doing, and then they give directions, correct mistakes, and review code.
In fact, I'd argue tools should be simple and intuitive for any engineer to quickly pick up. If an engineer who has solid background in programming but with no prior experience with the tools cannot be productive with such a tool after an hour, it is the tool that failed us.
You don't see people talk about "prompt engineering" as much these days, because that simply isn't so important any more. Any good tool should understand your request like another human does.
Having A2A is much more efficient and less error-prone. Why would I want to spend tons of tokens on an AI "figuring it out" if I can have the same effect for less using A2A?
We can even train the LLMs with A2A in mind, further increasing stability and decreasing cost.
A human can also figure everything out, but if I come across a well-engineered REST API with standard OAuth2, I am productive within 5 minutes.
When I entered university for my Bachelors, I was 28 years old and already worked for 5 or 6 years as a self-taught programmer in the industry. In the first semester, we had a Logic Programming class and it was solely taught in Prolog.
At first, I was mega overwhelmed. It was so different than anything I did before and I had to unlearn a lot of things that I was used to in "regular" programming. At the end of the class, I was a convert! It also opened up my mind to functional programming and mathematical/logical thinking in general.
I still think that Prolog should be mandatory for every programmer. It opens up the mind in such a logical way... Love it.
Unfortunately, I never found an opportunity in my 11 years since then to use it in my professional practice. Or maybe I just missed the opportunities?????
Did they teach you how to use DCGs? A few months ago I used EDCGs as part of a de-spaghettification and bug fixing effort to trawl a really nasty 10k loc sepples compilation unit and generate tags for different parts of it. Think ending up with a couple thousand ground terms like:
tag(TypeOfTag, ParentFunction, Line).
Type of tag indicating things like an unnecessary function call, unidiomatic conditional, etc.
I then used the REPL to pull things apart, wrote some manual notes, and then consulted my complete knowledgebase to create an action plan. Pretty classical expert system stuff. Originally I was expecting the bug fixing effort to take a couple of months. 10 days of Prolog code + 2 days of Prolog interaction + 3 days of sepples weedwacking and adjusting what remained in the plugboard.
Prolog is a great language to learn. But I wouldn't want to use it for anything more than what it's directly good at. Especially the cut operator, that's pretty mind-bending. But once you get good at it, it all just flows. But I doubt more than 1% of devs could ever master it, even on an unlimited timeline. It's just much harder than any other type of non-research dev work.