Hacker News | danieltanfh95's comments

The most useful thing juniors can do now is use AI to rapidly get up to the speed with the new skill floor. Learn like crazy. Self learning is empowered by AI.

Engineers > developers > coders.


AI has a lot of potential as a personal, always on teaching assistant.

It's also a 'skip intro' button for the friction that comes with learning.

You've got a bug? Just ask the thing rather than spending time figuring it out. You don't know how to start a project from scratch? Ask for scaffolding. Your first boss gives you a ticket? Better not to screw up; hand it to the machine just to be safe.

If you avoid those temptations you can progress, but I'm not sure many people will. Furthermore, will people even be afforded the space to go slow when their colleagues are going at 5x?

Modern life offers little hope. We're all using Uber Eats to avoid the friction of cooking, Tinder to avoid the awkwardness of a club, and so on. Frictionless tends to win.


That is quite some wishful thinking there. Most juniors won't care; they'll just vibe-code their way through.

It takes extra discipline and willpower to force yourself to do the painful thing when there is a less painful way to do it.


Because employers famously hire based on skill and not credentials or existence

Scientists > engineers > developers > coders

Mathematicians > scientists > engineers > developers > coders


Why not just let the product manager use some no-code tool?

I think software engineers are having an identity disconnect between their roles as engineers vs. coders. Engineering is about solving problems within constraints, using tools and knowledge. An engineer is not diminished by having other engineers or better tooling as assistants. If you are struggling to understand your role in solving the problem, frankly, you need to review your skill set and adjust.


You are correct in the abstract, but concretely I contest how useful LLMs are for producing software. I don't doubt their usefulness in prototyping or, say, writing web apps, but I truly do not think they are revolutionary for me, or for software development as a whole.


I hate that he is right. It speaks deeply to how broken the incentives are for humanity and labour, and to why AI will ultimately destroy jobs: AI won't need to deal with all the sacred rituals around politics, control, and human management. For every stupidity we worship just to "preserve company culture", we step further toward inevitable doom, like having a Google principal engineer worship Opus on X as if it's the first time they went to prom and saw someone hot.

It is sickening, and it is something we have internalized; we will have destroyed ourselves before we settle on a new culture that demands excellence and clarity from anyone beyond the engineers who have to deal with this mess.


Please. Manus gave a live demo at Google Expo 2025 in Singapore and blew it. It was in such bad taste.

Manus had one marketing gimmick with the agents, and that is no longer anything novel.


A failed demo discounts an entire $100MM ARR?


They are 8 months old. There is no proof of $100M ARR.


> Cynics feel smart but optimists win.

survivorship bias.


Context poisoning is a real problem, and these memory providers only make it worse.


IMO context poisoning is only fatal when you can't see what's going on (e.g. black-box memory systems like ChatGPT memory). The memory system used in the OP is fully white-box: you can see every raw LLM request, and see exactly how the memory influenced the final prompt payload.


That's significant; you can then improve it in your own environment.


Yeah, exactly. It's all just tokens that you have full control over (you can run CRUD operations on them). No hidden prompts, no hidden memory.
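A minimal sketch of what a white-box memory system like this could look like, assuming a hypothetical `MemoryStore` class (not the OP's actual implementation): every memory entry is plain text you can create, read, update, or delete, and the final prompt payload is assembled in the open, so a poisoned entry can be spotted and removed.

```python
class MemoryStore:
    """Hypothetical white-box memory: plain-text entries with CRUD access."""

    def __init__(self):
        self._entries = {}  # id -> memory text
        self._next_id = 0

    def create(self, text):
        self._next_id += 1
        self._entries[self._next_id] = text
        return self._next_id

    def read(self, entry_id):
        return self._entries[entry_id]

    def update(self, entry_id, text):
        self._entries[entry_id] = text

    def delete(self, entry_id):
        del self._entries[entry_id]

    def build_prompt(self, user_message):
        # The full payload is visible here before anything is sent to the
        # LLM, so you can audit exactly how memory shapes the request.
        memory_block = "\n".join(f"- {t}" for t in self._entries.values())
        return f"Relevant memories:\n{memory_block}\n\nUser: {user_message}"


store = MemoryStore()
mid = store.create("User prefers concise answers")
print(store.build_prompt("Explain context poisoning"))
store.delete(mid)  # poisoned or stale memory? just delete the entry
```

The point is the transparency, not the data structure: because the prompt is assembled from inspectable tokens, context poisoning becomes a debuggable problem rather than a fatal one.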


The reason is quite straightforward: LLMs excel at mapping tasks but are poor at first-principles reasoning and validation.

When you are working on the AI map app, you are mapping your new idea to code.

When people are working with legacy code and fixing bugs, they are employing reasoning and validation.

The problem is that management doesn't let engineers discern which is which and just stuffs both down their throats.


https://danieltan.weblog.lol/2025/08/function-colors-represe...

The core problem is that language/library authors need to provide some way to bridge between different execution contexts, e.g. by containing the different contexts (sync/async) under FSMs and then providing some sort of communication channel between them.
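One way this bridging can look in practice, sketched in Python (an illustrative example of the general idea, not a claim about any particular library): the async world runs in its own thread with its own event loop, and sync ("uncolored") code talks to it through a thread-safe channel, here `asyncio.run_coroutine_threadsafe`.

```python
import asyncio
import threading

# The async context gets its own event loop, isolated in a daemon thread.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()


async def fetch(x):
    # Stand-in for real async work (network I/O, etc.).
    await asyncio.sleep(0.01)
    return x * 2


def sync_call(x):
    # Sync code crosses the boundary via a communication channel:
    # the coroutine is submitted to the other context's loop, and the
    # caller blocks on a concurrent.futures.Future for the result.
    future = asyncio.run_coroutine_threadsafe(fetch(x), loop)
    return future.result(timeout=5)


print(sync_call(21))  # → 42
```

The future here is exactly the kind of channel the comment describes: neither side needs to adopt the other's "color", they just exchange messages across the boundary.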


This would generally just discourage open software. Rebble is a non-profit and should not pretend to "own" any software or content. Eric didn't do things the polite way, but either way there's nothing to discuss here. Claiming that someone can steal something that is open source implies ownership of that open-source code/content. That's not how any of this works.

Reselling open-source content is always going to be in bad taste.


> young person complains about jobs because automation, outsourcing and immigration

> looks at resume

> garbage formatting that only AI would love, with little substantial content beyond what the sea of candidates would offer

All this talk about humans, and yet they produce a piece of paper that doesn't respect human time.


> garbage formatting that only AI would love

If AI would truly love it, then that seems like the best-case ATS-optimized resume to me. The right tool for the job. Imagine using real people to review applicants - what is this, the 1800s?

