
Obligatory Simple Made Easy link:

https://www.youtube.com/watch?v=SxdOUGdseq4

Simple is a matter of intuition, and that can't be transmitted to others easily, or with a single class or book.

At one particular job we got punished by the business for calling things 'easy' when what we meant was that we understood the problem and all of the steps were (mostly) known. Our boss coached the hell out of us to say 'straightforward' when we meant 'understood', instead of using 'easy' as an antonym for 'quagmire' or 'scary'.



Agreed. But I also think that "simple to implement," "simple to debug," and "simple to test" are different metrics -- and that one has to choose which one to optimize for. This is independent from assessment of "simple" varying with intuition -- "simple" alone isn't a coherent concept.


That's part of the section in Programming Perl that sticks in my memory.

From my copy...

> Efficiency

> ...

> Note that optimizing for time may sometimes cost you in space or programmer efficiency (indicated by conflicting hints below). Them’s the breaks. If programming was easy, they wouldn’t need something as complicated as a human being to do it, now would they?

> ...

> Programmer Efficiency

> The half-perfect program that you can run today is better than the fully perfect and pure program that you can run next month. Deal with some temporary ugliness. Some of these are the antithesis of our advice so far.

    • Use defaults.
    • Use funky shortcut command-line switches like -a, -n, -p, -s, and -i.
    • Use for to mean foreach.
    • Run system commands with backticks.
    ...
    • Use whatever you think of first.
    • Get someone else to do the work for you by programming half an implementation and putting it on Github.

> Maintainer Efficiency

> Code that you (or your friends) are going to use and work on for a long time into the future deserves more attention. Substitute some short-term gains for much better long-term benefits.

    • Don’t use defaults.
    • Use foreach to mean foreach.
    ...


I've been dealing with a batch processing task that's written in NodeJS (partly because it was the tool at hand, partly because it does offline a process that can be done online, so it's reusing code), and global-interpreter-lock-style single threading is definitely introducing some new nuances to my already fairly broad knowledge of performance and concurrency. Broad not in the sense that I am a machine whisperer, but in that I include human factors, which explodes the surface area of the problem but also explains quite a lot of failure modes.

In threaded code it's not uncommon to analyze a piece of data and fire off background tasks the moment you encounter them. But if your workload is a DAG instead of a tree, you don't know whether the task you fired is needed once, twice, or for every single node. So now you introduce a cache (and if you're a special idiot, you call it Dynamic Programming, which it is fucking not) and deal with all of the complexities of that fun problem.
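One common shape for that dedup cache is to memoize the in-flight promise rather than the result, so that two edges into the same DAG node share a single running task. This is a minimal sketch of that idea; the names (`runOnce`, `processNode`) are illustrative, not from the original code.

```javascript
// Cache the *promise* for each node's task. If a second caller asks for the
// same node while the first task is still running, it gets the same promise
// instead of firing a duplicate task.
const inFlight = new Map();

function runOnce(nodeId, processNode) {
  if (!inFlight.has(nodeId)) {
    inFlight.set(nodeId, processNode(nodeId));
  }
  return inFlight.get(nodeId);
}

// Usage: every edge into the same node reuses the cached promise.
async function demo() {
  let calls = 0;
  const processNode = async (id) => { calls++; return id.toUpperCase(); };
  const results = await Promise.all([
    runOnce("a", processNode),
    runOnce("a", processNode), // deduped: shares the first "a" task
    runOnce("b", processNode),
  ]);
  return { results, calls }; // calls is 2, not 3
}
```

Of course, this is exactly the "fun problem" alluded to above: now you own invalidation, error caching (a rejected promise stays in the map), and memory growth.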

But it turns out in a GIL environment, you're making a lot less forward progress on the overall problem than you think you are because now you're context switching back and forth between two, three, five tasks with separate code and data hotspots, on the same CPU rather than running each on separate cores. It's like the worst implementation of coroutines.

If instead you scan the data and accumulate all the work to be done, and then run those tasks, and then scan the new data and accumulate the next bit of work to be done, you don't lose that much CPU or wall clock time in single threaded async code. What you get in the bargain though is a decomposition of the overall problem that makes it easy to spot improvements such as deduping tasks, dealing with backpressure, adding cache that's more orthogonal, and perhaps most importantly of all, debugging this giant pile of code.
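The scan-then-run pattern described above can be sketched as a loop of phases: accumulate a deduplicated batch of work, run the whole batch, then scan the output for the next round. This is a hypothetical reconstruction under assumed names (`processInPhases`, `getChildren`, `work`), not the original code.

```javascript
// Phase-based traversal of a DAG: accumulate work, run it, scan for more.
// Because each batch is deduplicated against the `done` set, a node with
// many parents is processed exactly once.
async function processInPhases(roots, getChildren, work) {
  const done = new Set();
  let frontier = [...new Set(roots)];
  const results = [];
  while (frontier.length > 0) {
    // Phase 1: dedupe the accumulated work for this pass.
    const batch = [...new Set(frontier)].filter((id) => !done.has(id));
    batch.forEach((id) => done.add(id));
    // Phase 2: run the whole batch before looking for new tasks.
    const output = await Promise.all(batch.map((id) => work(id)));
    results.push(...output);
    // Phase 3: scan the new data to accumulate the next round of work.
    frontier = batch.flatMap(getChildren);
  }
  return results;
}
```

The batches also give you natural seams for backpressure (cap the batch size) and for debugging (log one line per phase instead of one per interleaved task).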

So I've been going around making code faster by making it slower, removing most of the 'clever' and sprinkling a little crypto-cleverness (when the clever thing elicits an 'of course' response) / wisdom on top.


> Programming Perl

That book is one of the most underrated and overlooked works on the philosophy of programming I've ever read. It's ostensibly about best practices in programming Perl (which some people consider a complex language), but in reality this is a very deep book about the best practices for programming in any language.

Note the above excerpt is pretty much universally applicable no matter what the language. Much of the book is written at that level.

https://www.oreilly.com/library/view/programming-perl-4th/97...


I could say a similar thing about Practical Parallel Rendering. Officially it's a book about raytracing CGI in a cluster, but the first half of the book explains queuing theory and concurrency concerns in tremendous detail. It's a thin book to begin with, and you've more than gotten your money's worth if you read the first half and give up when they start talking about trigonometry.


The rules of Chess aren't that hard. The rules of Go are even simpler. You can literally spend your whole life unpacking the implications of the rules of either of those games.

Ultimately both are 'too simple', resulting in a combinatorial explosion of states, and at least a quadratic expansion of consequences.

We often write software to deal with the consequences of something else. It's possible, and not that uncommon, for the new consequences to be every bit as onerous as the originals, or more so. I call this role a 'sin eater' because you're just transferring suffering from one individual to another, and it sounds cooler and more HR-appropriate than 'whipping boy'.


And, to add a bit more nuance, simplicity can also depend on the stage a project is at... It may be really simple to implement core functionality to demonstrate an idea, but building on that code can add a lot of complexity later. For example, adding security late in a project is almost always much more difficult than adding a small amount up front. Even the "simple to implement" metric can be a difficult judgement call.


I haven't been able to distill it to first principles yet, but I do have a practice of writing code in such a way that it invites the next step.

I suspect that at first I did this in an attempt to hack my own sense of motivation, like putting the books you need to return next to the front door. But it turned out to be quite handy for seducing junior developers (and sometimes senior developers) into finishing an idea that you started.

They are so proud that they've thought of something you didn't think of, rather than realizing it's something you were looking for a maintainer (or free cycles) for.


I am taking my own advice and re-watching this presentation. I'm being surprised enough by parts I don't remember that I've decided that I need to watch this video at least once a year.

Certainly there are some things I've just forgotten, and others I just wasn't ready to hear.



