Soft Orders of Magnitude (belkadan.com)
39 points by frizlab on Oct 10, 2023 | 10 comments


I used to work on hearing aids, and there's a similar concept in human audio perception with regard to latency. If you add latency to some audio (say, because of the hearing aid's processing), the perceived degradation doesn't increase smoothly with the latency. Instead there are a few thresholds at which the audio suddenly seems perceptibly worse. The main ones are:

0--2 ms: No perception of latency at all.

2--10 ms: There is a comb filtering effect, so the audio sounds somewhat distorted.

10--30 ms: There's a perception that something is "off" about the audio and it requires greater cognitive effort to listen to. This is partly due to a noticeable desynchronization between the audio and visual cues. Another factor is the "Haas effect." If you have a direct path (audio that goes straight into the ear) and audio coming out of the hearing aid at some latency with respect to the direct path, the arrival of these two separate wavefronts at different times causes a perceptible distortion in the audio.

30+ ms: Beyond this people perceive an actual lag between what they see and the audio they hear. It can be almost nauseating to listen to audio at this latency for long periods of time.

The upshot of all this is that if you look at the latencies that hearing aids on the market have, they all cluster into two groups: one at around 2 ms and another at around 10 ms. If you can fit all your processing into 2 ms that's great, but if you go much longer than that, you may as well take the full 10 ms to do even more processing because people aren't going to be able to tell the difference.
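If it helps to see those thresholds in one place, here's a toy sketch; the band boundaries and descriptions are just the ones from this comment, not any standard:

    // Toy Rust sketch of the perceptual latency bands described above.
    // The boundaries (2, 10, 30 ms) come from this comment, not from a spec.
    fn perceived_effect(latency_ms: f64) -> &'static str {
        match latency_ms {
            l if l < 2.0 => "no perception of latency",
            l if l < 10.0 => "comb filtering: audio sounds somewhat distorted",
            l if l < 30.0 => "something feels 'off'; listening takes more effort",
            _ => "audible lag behind visual cues; fatiguing over time",
        }
    }

    fn main() {
        for ms in [1.0, 5.0, 20.0, 60.0] {
            println!("{:>5.1} ms -> {}", ms, perceived_effect(ms));
        }
    }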


There are more discernible levels of lag between "instantly" and "1-2 s", and more levels above "a weekend".

For example, for about 3 years I worked on a project where I had to send the code I wrote to an external entity to get it compiled and signed before I could run it on my device. That took a week at minimum. I had to completely change my personal coding process as well as my development process in general. It was so powerful that even to this day I am developing for days without compiling code and people are very surprised it usually runs on the first try -- if you can write ANSI C for a week and have it run on the first try, doing the same in Java is a piece of cake.

Trust me, waiting a week for your code to run is very different from waiting over a weekend, where you can leave your computer on Friday and plan to run your tests on Monday.

Other, even larger orders of delay concern the ability to change things in your environment -- annoying things like a broken process, broken automation, etc. In one company you might be able to fix things on your own, sometimes in minutes. In another you could face a multi-month wait for even the slightest change. If you have to wait a quarter to get a new Jira ticket field, you are not very likely to ask for it, or to care about the process in general.

Then you have the world of shorter delays.

Things that take a millisecond will usually already require you to take that into account when you work with them. Can I do this in a loop? Can I recalculate this whenever I need it, or do I need to cache the value? Etc. Whatever the solution, it costs you a bit of your focus, and as we all know, focus is the main resource of a software developer.
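As a concrete (made-up) illustration of that trade-off in Rust: once a call costs on the order of a millisecond, you start caching it rather than recalculating it inside a loop.

    use std::collections::HashMap;
    use std::thread::sleep;
    use std::time::Duration;

    // Stand-in for any computation slow enough (~1 ms) that calling it
    // inside a hot loop forces you to think about caching.
    fn expensive_lookup(key: u32) -> u64 {
        sleep(Duration::from_millis(1));
        u64::from(key) * 42
    }

    fn main() {
        let keys = [3u32, 7, 3, 7, 3]; // repeated keys make the cache pay off
        let mut cache: HashMap<u32, u64> = HashMap::new();
        let mut total = 0u64;
        for &k in &keys {
            // Compute once per distinct key, reuse the cached value afterwards.
            total += *cache.entry(k).or_insert_with(|| expensive_lookup(k));
        }
        println!("total = {total}");
    }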

Then you get to 20-50 milliseconds, which is where users start to notice delays. It is not as snappy as it was, even if they can't say exactly what changed.

By the time you get to 200 ms, users will complain that the application is slow and irritating, especially if they spend a lot of time clicking around in it.
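A hypothetical way to operationalize that 200 ms threshold is to time anything the user waits on and flag whatever crosses the budget, e.g.:

    use std::thread::sleep;
    use std::time::{Duration, Instant};

    // Rough budget based on the ~200 ms "slow and irritating" point above.
    const BUDGET_MS: u128 = 200;

    fn handle_click() {
        sleep(Duration::from_millis(250)); // stand-in for real work
    }

    fn main() {
        let start = Instant::now();
        handle_click();
        let elapsed = start.elapsed().as_millis();
        if elapsed > BUDGET_MS {
            eprintln!("click handler took {elapsed} ms (budget: {BUDGET_MS} ms)");
        }
    }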


> I am developing for days without compiling code and people are very surprised it usually runs on the first try

This is a thing that surprises people? Perhaps I'm not as brave since I'm doing it in Rust, but I can go weeks without compiling code: rewrite or refactor everything, be done, and have it either work entirely or need only a couple of tweaks.


Welcome to the club. I spent a lot of time thinking about why exactly this is surprising, and the result improved my hiring game a lot.

So the issue really comes down to the fact that most people do not really know what they are doing. They don't know what their program is going to do before they actually run it. They don't have a model, an intimate understanding of what is really happening underneath the magic incantation they just laid down on the screen. When you can't predict what your program is going to do, you also have a hard time debugging your own code (debugging, after all, is finding out how the program's real behavior differs from what it should be doing).

When you are this person, a natural tactic is to continuously compile the application and verify that it works. You write a function, you check whether it works. If something doesn't work correctly, you know it is most likely in the code you just wrote, which makes it easier to fix. Just keep changing the code you just wrote until it works.

This lets people write code, but it comes with some huge problems. For one, it is very inefficient. Second, it does not work for large sweeping changes -- I have worked many times with people who were paralysed when trying to run a large refactoring on a code base. Among these types of developers such refactorings usually don't happen; rather than keep fixing the existing codebase (which requires you to understand it), people will default to rewriting (which is the process they already know).

Third, it does not force you to develop a better understanding of the application you are working on and the framework you are using. It is a crutch: until you let go of it, it will keep you from ever being able to run.

Fourth, it hurts your ability to design new features and take part in intelligent design discussions.

But most importantly, it leaves a huge trail of bugs. When you write code without knowing what you are doing, you will generate a huge number of problems but only fix the ones that present themselves during testing, deployment or production.

So now when I hire developers, my coding exercise is specifically designed to, among other things, detect which type of developer you are. If you rely on running the code to tell you what it is doing, you will not pass the interview.

I also thought about the causes of this. I think, for the most part, the cause is that these people never had a stimulus to learn to do better. I did, and I am happy for it. Unfortunately, the way people learn is through tutorials and short, succinct examples where they are constantly asked to write a short piece of code and then run it to confirm they are still in sync with the tutorial. It is also a natural way of learning a new topic, technology, framework, etc. -- when I learn something new I don't write long pieces of code, because I just can't predict what it is going to do. Only through learning can I build my mental model of what the framework does, and only then can I achieve the goal of being able to predict what the code does and go for long stretches of time without running it.


> When you can't predict what your program is going to do, you also have a hard time debugging your own code (debugging, after all, is finding out how the program's real behavior differs from what it should be doing).

This also spoke to me. It seems like half the time I see a bug, I know exactly what it is. I haven't started an actual debugger in months, maybe longer. On other occasions I'll know exactly where a print statement goes.

I think it's because, sure, I know what I want, but I still keep track of exactly what I've written as well. I understand how the program works to achieve that goal.

> they are constantly asked to write a short piece of code and then run it to confirm they are still in sync with the tutorial.

When I first started learning Rust, I opened the Book and Ray Tracing In One Weekend side by side, and implemented an interactive multithreaded path tracer. :)

Sometimes I feel bad about how impressive that sounds when in reality I'm a total ADHD mess. But I guess at least I know what I'm doing.

BTW: Are you hiring remote? :)


Oh man, I was just making an argument for performance goals based on exactly this concept, but I hadn't thought of the framing of "soft orders of magnitude" -- that's a great analogy. Thinking a bit more, I think "paradigm shift" and "discontinuity" also work. Every "soft order of magnitude" is in fact a qualitative shift in the experience --

- "I can reasonably stare at this tool's output and wait for it to finish"

- "I'll get distracted and look outside but I won't switch off task."

- "Time for a coffee."


Really like this human-centered way of thinking about performance in terms of how it maps to our perception and working rhythms


I like this term; I've heard this idea in multiple places before, but this seems like a good way of capturing the essential point (that with performance, a sufficiently large difference in degree can become a difference in kind by enabling new patterns of work).


If you want something a bit more formal, see Turing Award winner Allen Newell's Bands of Cognition. He breaks things up into four major bands, namely biological (milliseconds), cognitive (seconds), rational (minutes to hours), and social (days to months). Each band requires different tools, as well as ways of thinking about and analyzing them.

Here's an example diagram of it: https://www.researchgate.net/profile/Bruno-Emond/publication...


What does "social" mean in this context?

I.e., does it mean that the task can now be tackled by multiple people or organizations, that this is desirable, or that it should be?



