
I've been looking into Ada recently, and it has cool safety mechanisms that encourage this same kind of thing. It even allows you to dynamically allocate on the stack in many cases.

You can allocate dynamically on the stack in C as well. Every compiler will give you some form of alloca().
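For example (a minimal sketch; alloca() is a compiler extension, typically declared in <alloca.h> on Unix-likes or <malloc.h> on Windows, and it gives no indication of failure if the stack is exhausted):

    #include <alloca.h>   /* <malloc.h> on Windows */
    #include <stdio.h>
    #include <string.h>

    void greet(const char *name) {
        /* Allocate a buffer on the stack; it is freed automatically
           when the function returns. */
        size_t len = strlen(name) + sizeof("Hello, !");
        char *buf = alloca(len);
        snprintf(buf, len, "Hello, %s!", name);
        puts(buf);
    }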

True, but in many environments where C is used, the stacks are configured with small, fixed sizes and cannot be grown dynamically.

In such environments, you may need to estimate the maximum stack usage and, if possible, configure stacks big enough to accommodate it.

Having to estimate maximum memory usage is the same constraint you face when allocating a static array as a work area and then using a custom allocator to hand out memory as needed.
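A rough sketch of that pattern in C (the names and the rounding scheme are made up for illustration; a real allocator would also need a reset/release strategy):

    #include <stdalign.h>
    #include <stddef.h>

    #define WORK_AREA_SIZE (64 * 1024)   /* sized from the estimated maximum */

    static unsigned char work_area[WORK_AREA_SIZE];
    static size_t work_used = 0;

    /* Hand out memory from the static array; returns NULL when exhausted. */
    void *work_alloc(size_t n) {
        /* Round up so every allocation stays suitably aligned. */
        n = (n + alignof(max_align_t) - 1) & ~(alignof(max_align_t) - 1);
        if (n > WORK_AREA_SIZE - work_used)
            return NULL;
        void *p = &work_area[work_used];
        work_used += n;
        return p;
    }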


Sure, the parent was commenting more on the capability existing in Ada in contrast to C. Ada's variable-length local variables are basically C's alloca(). The interesting part in Ada is returning variable-length types from functions and having them automatically managed via the "secondary stack", which in embedded/constrained environments is a fixed-size buffer. The compiler takes care of most of the dirty work for you.
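In C terms, you could emulate such a secondary stack by hand with a fixed buffer plus mark/release semantics, along these lines (a sketch only, not what GNAT actually generates; all names are hypothetical):

    #include <stddef.h>
    #include <string.h>

    static char sec_stack[4096];   /* the fixed-size "secondary stack" */
    static size_t sec_top = 0;

    /* The caller saves a mark, the callee pushes a variable-length result,
       and the caller releases back to the mark when done with it. */
    size_t sec_mark(void) { return sec_top; }
    void sec_release(size_t mark) { sec_top = mark; }

    char *sec_push(const char *data, size_t n) {
        if (n > sizeof sec_stack - sec_top)
            return NULL;               /* fixed buffer exhausted */
        char *p = &sec_stack[sec_top];
        memcpy(p, data, n);
        sec_top += n;
        return p;
    }

In Ada, the compiler takes care of the equivalent mark/release bookkeeping automatically.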

We mainly use C++, not C, and we do this with polymorphic allocators. This is our main allocator for local stack buffers:

https://bloomberg.github.io/bde-resources/doxygen/bde_api_pr...

… or this for supplying a large external static buffer:

https://bloomberg.github.io/bde-resources/doxygen/bde_api_pr...


> You can allocate dynamically on the stack in C as well. Every compiler will give you some form of alloca().

And if it doesn't, VLAs are still in the language as of C23 (mandatory in C99, optional since C11), IIRC.


`-Wvla`. Friends don't let friends VLA :)

alloca is certainly worse. Worst-case fixed-size arrays on the stack are also worse. If you need a variable-sized array on the stack, VLAs are the best alternative. Many other languages, such as Ada, have them too.
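For reference, a C99 VLA in action (with the usual caveat that the size is unchecked, so callers must keep it within safe stack bounds, which is what -Wvla above is guarding against):

    #include <stdio.h>
    #include <string.h>

    /* Print a string reversed, using a VLA sized to the input at runtime.
       The storage lives on the stack and is released when the block exits. */
    void print_reversed(const char *s) {
        size_t n = strlen(s);
        char tmp[n + 1];               /* the VLA */
        for (size_t i = 0; i < n; i++)
            tmp[i] = s[n - 1 - i];
        tmp[n] = '\0';
        puts(tmp);
    }

    int main(void) {
        print_reversed("hello");       /* prints "olleh" */
        return 0;
    }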

Daniel Nettle gives a great layperson's explanation of the Five Factor model in his book "Personality," and the first thing he explains is exactly in line with what you ended on: we exhibit a variety of personalities because different personalities are useful in different environments. Sometimes it's GOOD to be highly neurotic, or low in extraversion. Natural selection doesn't care about your internal conscious experience of life; it will make you miserable if that helps you survive.


I'm an avid reader (several dozen books per year, at least), and one of the things that bums me out is all of the moralizing around my hobby. Three or four times out of five, when I talk to people about it, the reaction is "oh man, I'm such a bad person because I don't read enough books."

It's fine! The number of books you read is not a reflection on your quality as a person.

Reading absolutely has benefits, but really it's exactly what you said: it's just more interesting than other options out there. The tradeoff is that, yes, it can require some effort, but that's true of any other effortful activity. You have to get past the cost, but there's a really nice reward on the other side.

And for what it's worth, there ARE television shows, movies, etc. that have more value than many books. ("The Wire" is a prime example, probably better than 70-80% of the books out there.) The point is just that, in general, more cognitively demanding avocations can have a better benefit-to-cost ratio than cheaper ones like TV. On average, books fall into this category more than other media do, but that's just on average.

Anyway, this is a long way of saying that feeling bad about the media you consume is counterproductive. The message should be that there is potentially a more rewarding experience out there, but whether you pursue it or not is totally up to you and doesn't make you a good or bad person either way.


Yes to all of that. My biggest pet peeve is the Goodreads reading challenge; I cringe at it every year. Imagine the equivalent 'TV show challenge': it would be absurd. Yet this is the way people think about books.

Read what you want, how you want. Pick up the same book five times. Do whatever. Forget arbitrary challenges.


I always laugh when people say something like, "Oh wow, you must be so smart, reading all those books." Nah, I'm reading about goblins, gnomes, and vampires in space; it's really not groundbreaking intellectual stuff. I enjoy reading, but in my eyes it's similar to sitting down and watching a movie or TV show.


I agree. Books have a higher intellectual ceiling than most things, but there is, as always, a mountain of slop too. I'd rather someone spend a year interrogating Plato or Moby Dick than read 300 Agatha Christie or Stephen King type novels. There is nothing virtuous about reading in itself.

I echo the sentiment of the sibling comment: book count challenges are foolish and missing the point.


I'm sorry for your loss. It sounds like your father was a great man. No need for apologies, I think what you said is very poignant and relevant to the topic at hand. We should all be so lucky to live such full lives.


My take (no more informed than anyone else's) is that the range indicates this is a complex phenomenon that people are still making sense of. My suspicion is that something like the following is going on:

1. LLMs can do some truly impressive things, like taking natural language instructions and producing compiling, functional code as output. This experience is what turns some people into cheerleaders.

2. Other engineers see that in real production systems, LLMs lack sufficient background/domain knowledge to iterate effectively. They still produce output, but it's verbose and essentially misses the point of the desired change.

3. LLMs can also be used by people who are not knowledgeable to "fake it" and produce huge amounts of output that is basically beside-the-point bullshit. This makes those same senior folks very, very resentful, because it wastes a huge amount of their time. This isn't really the fault of the tool, but it's a common way the tool gets used, and so it gets tarnished by association.

4. There is a ridiculous amount of complexity in some of the tools and workflows people are trying to invent, some of which is of questionable value. So aside from the tools themselves, people are skeptical of those trying to become thought leaders in this space and the sort of wild hacks they're coming up with.

5. There are real macro questions about whether these tools can be made economical to justify whatever value they do produce, and broader questions about their net impact on society.

6. Last but not least, these tools poke at the edges of "intelligence," the crown jewel of our species and also a big source of status for many people in the engineering community. It's natural that we're a little sensitive about the prospect of anything that might devalue or democratize the concept.

That's my take for what it's worth. It's a complex phenomenon that touches all of these threads, so not only do you see a bunch of different opinions, but the same person might feel bullish about one aspect and bearish about another.


The first part is surely true if you change it to "the hardEST part" (I'm a huge believer in "Programming as Theory Building"), but there are plenty of other hard or just downright tedious/expensive aspects of software development. I'm still not fully bought in on some of the AI stuff—I haven't had a chance to really apply an agentic flow to anything professional, I pretty much always get errors even when one-shotting, and who knows if even the productive stuff is big-picture economical—but I've already done some professional "mini projects" that just would not have gotten done without an AI. A simple example: I converted a C# UI to Java Swing in less than a day, a few thousand lines of code, a simple utility but important to my current project for <reasons>. Assuming tasks like these can be done economically over time, I don't see any reason why small- and medium-difficulty programming tasks can't be handled efficiently with these tools.


My framing for this is "mass production of stimuli." Before industrialization, the number of things grabbing your attention at any given moment wasn't very high. But once we had mass production, and especially the innovation of extrinsic advertising (associating a product with psychological properties not intrinsic to the product itself), we were all suddenly awash in stimulating signals. And as this article notes, those stimuli go mostly unfulfilled by the actions we take (buying the thing, opening the app), so we all carry a low-level background noise of frustration and dissatisfaction.

EDIT: Some later posts mentioned it, but philosophers and religions have contemplated this stuff for centuries. Nevertheless I do think it's an exacerbated problem in the modern world due to technology and scale.


I hit the same roadblock, unfortunately. My academic references were all in a different field, and I hadn't really stayed in contact except with one professor, who has sadly died. I did see that there's an option to use professional references, so even though I haven't done this myself, one route you could consider is getting references from managers, colleagues, etc. who can speak to your technical knowledge. I agree with your general point, though: after being out of an academic environment for a while, that requirement becomes challenging.


The best system I ever worked with looked incredibly simple. Small, clear functions. Lots of "set a few variables, run some if statements." Incredibly unassuming, humble code. But it handled tens of millions of transactions per day elegantly and correctly. Every weird edge case or subtle concurrency bug or whatever else you could think of had been squeezed out of the system. Everything fit together like LEGO blocks, seamlessly coming together into a comprehensible, functional, performant system. I loved it. After years of accepting mediocre code as the cost of doing business, seeing this thing in a corporate environment inspired me to fall in love with software again and commit to always doing my best to write high-quality code.

EDIT: I think what made that code so good is that there was absolutely nothing unnecessary in the whole system. Every variable, every function, every class was absolutely necessary to deliver the required functionality or to ensure some technical constraint was respected. Everything in that system belonged, and nothing didn't.


Was it written by one person?


The majority of it, yes.


I had the pleasure of working with a handful of Pivots for about 2 years, and I have to say that felt like the closest I ever got to a healthy engineering culture. Delightful people, superb engineers, always focused on working and learning together. I feel really privileged to have worked in that environment.

