
> your things will get cheaper if the cost required to market them reduces.

Prices won't go down. Profits go up. The winners are the shareholders.


Discussions about "DJs" are difficult because there's a WIDE range of skills behind what we call a DJ.

Yes, some have zero skill and will basically just show up with a pre-determined Spotify playlist. They won't even have mixing/transitions between songs.

Some are in the middle and will be able to do basic transitions between songs (ie, just simple beat matching) and know how to carry a vibe.

At the far end of the spectrum are actual composers that are effectively making new mixes of songs on the fly.

And so you have the problem where someone says "Being a DJ takes a lot of skill" because they're thinking of the last category, while the person hearing that message replies with "How does it take skill to just press Play?" because they're thinking of the first.


> I DO think using 64 bits for hosts was stupid but oh well.

Hey man, if I want to assign an address for each individual transistor in my system, that's my business.



Probably someone who writes C/C++ and formats their code that way

    if ( x == ( y + z ) * w ) {
Personally, I find it hard to read.

Agreed. But I find this easier to read:

  if ( x == (y+z) * w )
      {
Spaces help group things.

> people who don't read error messages

One of my pet peeves that I will never understand.

I do not expect users to understand what an error means, but I absolutely expect them to tell me what the error says. I try to understand things from the perspective of a non-technical user, but I cannot fathom why even a non-technical user would think that they don't need to include the contents of an error message when seeking help regarding the error. Instead, it's "When I do X, I get an error".

Maybe I have too much faith in people. I've seen even software engineers become absolutely blind when dealing with errors. Ten years ago, as a tester, I filed a bug ticket with explicit steps that resulted in a "broken pipe error". The engineer closed the ticket as "Can Not Reproduce" with a comment saying "I can't complete your steps because I'm getting a 'broken pipe error'".


Just today I've had a "technical" dude complain about something "not working".

He even checked "thing A" and "thing B" which "looked fine", but it still "didn't work". A and B had absolutely nothing to do with each other (they solve completely different problems).

I had to ask multiple times what exactly he was trying to do and what exactly he was experiencing.

I've even had "web devs" shout there must be some kind of "network problem" between their workstation and some web server, because they were getting an http 403 error.

So, yeah. Regular users? I honestly have 0 expectations from them. They just observe that the software doesn't do what they expect and they'll complain.


Your “technical guy” sounds a lot like me.

When debugging stuff with the devs at our work, I tend to overexplain as much as I can, because often there’s some deep link between systems that I don’t understand, but they do.

I’m a pretty firm believer in “no stupid questions (or comments)”, because often going in a strange direction that the devs assure me isn’t the problem, actually turns out to be the problem (maybe thing A actually has some connection to thing B in a very abstract way!).

I think just offering a different perspective or theory can help us all solve the problem faster, so sometimes it's worth pulling that thread, even if it seems worthless in the moment.

Maybe I’m just lucky that my engineering colleagues are very patient with me (and maybe less lucky that some of our systems are so deeply intertwined), but I do hope they have more than zero expectations from me, as we mean well and just want to support where we can, knowing full well that y'all are leagues ahead in the smarts department.


Totally on board with this gripe. Absolutely infuriating. But just one minor devil's advocate on the HTTP 403, although this doesn't excuse it at all.

In Azure "private networking", many components still have a public IP and public DNS record associated with the hostname of the given service, which clients may try to connect to if they aren't set up right.

That IP will respond with a 403 error if they try to connect to it. So Azure is indirectly training people that 403 potentially IS a "network issue"... (like their laptop is not connected to VPN, or Private DNS isn't set up right, or traffic isn't being routed correctly or some such).

Yeah, I get that's just plain silly, but it's IaaS/SaaS magic cloud abstraction and that's just the way Microsoft does things.


> That IP will respond with a 403 error if they try to connect to it. So Azure is indirectly training people that 403 potentially IS a "network issue"...

You are not describing a network issue. You're sending requests that by design the origin servers refuse to authorize. This is basic HTTP.

https://datatracker.ietf.org/doc/html/rfc7231#page-59

The origin servers could also return 404 in this use case, but 403 is more informative and easier to troubleshoot, because it means "yeah, your request to this resource could be good but it's failing some precondition".
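
To make the distinction concrete, here's a rough libcurl sketch (the URL is made up): a real network problem and a 403 surface through completely different paths.

    // Rough sketch with libcurl (hypothetical URL): a 403 means the network path worked.
    #include <cstdio>
    #include <curl/curl.h>

    int main() {
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        curl_easy_setopt(curl, CURLOPT_URL, "https://myservice.example.net/");
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); // HEAD request; we only care about the status
        CURLcode res = curl_easy_perform(curl);

        if (res != CURLE_OK) {
            // DNS failure, no route, connection refused, TLS error...
            // THIS is what an actual network problem looks like.
            std::fprintf(stderr, "transport error: %s\n", curl_easy_strerror(res));
        } else {
            long status = 0;
            curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
            if (status == 403) {
                // The connection, handshake, and HTTP exchange all succeeded;
                // the origin server simply refused to authorize the request.
                std::printf("403 Forbidden -- not a network issue\n");
            }
        }
        curl_easy_cleanup(curl);
        return 0;
    }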


My theory is that the best, absolute best predictor of whether someone could be (or is) a good programmer is the ability to read exactly what is written.

It's not math, logic, or anything like that. It's the actual ability to read, exactly, without adding or removing anything.


You can test that theory with Magic the Gathering players. Reading exactly what the card says and interpreting it with the exact text of the rules is core to the game.

> I do not expect users to understand what an error means

I'm not sure I agree.

Reason?

The old adage "handle errors gracefully".

The "gracefully" part, by definition means taking into account the UX.

Ergo "gracefully" does not mean spitting out either (a) a meaningless generic message or (b) A bunch of incomprehensible tech-speak.

Your error should provide (a) a user-friendly plain-English description and (b) an error ID that you can then cross-reference (e.g. you know "error 42" means the database connection is foobar because the password is wrong)

During your support interaction you can then guide the user through uploading logs or whatever. Preferably through an "upload to support" button you've already carefully coded into your app.

Even if your app is targeting a techie audience, it's the same ethos.

If there is a possibility a techie could solve the problem themselves (e.g. by RTFM or checking the config file), then the onus is on you to provide a suitably meaningful error message to help them on their troubleshooting journey.
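
A rough C++ sketch of the shape I mean (the error ID and wording are invented for illustration):

    // Sketch only: every error carries a stable ID plus a plain-English message.
    #include <iostream>
    #include <string>

    struct AppError {
        int id;                  // stable ID support can cross-reference ("error 42")
        std::string userMessage; // plain English, no stack traces, no tech-speak
    };

    // Hypothetical failure: the database rejected our credentials.
    AppError dbAuthFailed() {
        return {42, "We couldn't reach the service right now. Please contact "
                    "support and mention error 42."};
    }

    void report(const AppError& e) {
        // What the user sees: friendly text plus an ID they can read back to you.
        std::cout << e.userMessage << "\n";
        // The full technical detail (connection string, stack trace) goes into the
        // log that your "upload to support" button ships over.
    }

    int main() {
        report(dbAuthFailed());
    }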


There are people that when using a computer, if anything goes remotely wrong, they completely lose all notions of language comprehension. You can make messages as non-technical as possible and provide troubleshooting steps, and they just throw their hands up and say "I'm not a computer person! I don't know what it's telling me!"

20 years ago, I worked the self-checkout registers in retail. I'd have people scan an item (with the obvious audible "BEEP"), and then stand there confused about what to do next. The machine is telling them "Please place the item in the bag" and they'd tell me they don't know what to do. I'd say "What's the machine telling you?" "'Please place the item in the bag'" "Okay, then place the item in the bag" "Oh, okay"

It's like they don't understand words if a computer is saying them. But if they're coming from a human, they understand just fine, even if it's the exact same words.

"Incorrect password. You may have made a mistake entering it. Please try entering it again." "I don't know what that means, I'm going to call up tech support and just say I'm getting an error when I try to log in."


>completely lose all notions of language comprehension

I see this pretty often. These aren't even what you'd call typical users, either. They are people doing a technical job, hired with technical requirements; an application will spit out a well-written error message in the domain they should be professionals in, and their brain turns off. And yeah, it ends up in a call to me where I state the same thing and they figure the problem out.

I really don't get it.


I think part of it is that most users at some point encounter an error message that is just straight up wrong. For example, a login page that says "wrong password" when in reality the user is typing EXACTLY what they typed on account creation, but the site silently truncated the password. Even one such frustrating experience is enough to teach many users that as soon as they see any error message, they should stop trusting anything the system tells them, including the error message. It's extremely difficult to rebuild user trust after this sort of UX contract violation, particularly because less technical users don't mentally differentiate separate computer systems. All the systems are just "the computer."
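
As a hypothetical sketch of that sign-up bug (the truncation length is made up, and a real system would be hashing passwords rather than storing them, but the mismatch is the same):

    // Hypothetical "wrong password" lie: sign-up silently truncates the password,
    // login compares against the full string, so the user's exact input never matches.
    #include <cstddef>
    #include <iostream>
    #include <string>

    const std::size_t LEGACY_LIMIT = 16; // arbitrary old column width

    std::string storeAtSignup(const std::string& password) {
        return password.substr(0, LEGACY_LIMIT); // silent truncation, no warning
    }

    bool checkAtLogin(const std::string& stored, const std::string& typed) {
        return stored == typed; // full string this time
    }

    int main() {
        std::string typed = "correct horse battery staple"; // 28 characters
        std::string stored = storeAtSignup(typed);
        std::cout << (checkAtLogin(stored, typed) ? "Welcome!" : "Wrong password.") << "\n";
        // Prints "Wrong password." even though the user typed EXACTLY what they
        // typed at account creation.
    }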

Also arguably the users are kind of right. An error indicates that a program has violated its invariants, which may lead to undefined behavior. Any output from a program after entering the realm of undefined behavior SHOULD be mistrusted, including error messages.


I think it's something to do with the expectations of automation. We seem to be wired or trained to trust the machines fully, and enter a state of helplessness when we think we are driven by a machine.

I've seen this with GNSS-assisted driving, with automated driving, and with aircraft autopilot. Something disengages, we give it unwarranted trust, we lose context, training fades; and when we're thrown back in control, the avalanche of context and responsibility is overwhelming, compounded by the lack of context about the previous intermediate steps.

One of the most worrying dangers of automation is this trust (even by supposedly knowledgeable technicians) and the transition out of "the machine is perfect": when it hands you back the helm on a failure, an inability to trust the machine again.

The way to avoid entering this state seems to be to stay deeply engaged with the inputs and decisions of the system (read: "automation should be like Iron Man, not like Ultron"), to have a deep understanding of the moving parts and critical design decisions of the system, and to have traces/visualizations/checklists of the intermediate steps.

I don't know where the corpus of research on this is (probably in safety engineering tomes), but it crystallized for me when comparing the crew reactions and behaviour of the Rio-Paris Air France crash and the Qantas A380 accident in Singapore.

For the first one, amongst many, many other errors (be it crew management, accounting for the weather...) and problematic sensor behaviour, the transcript tells a harrowing story of a crew that no longer trusted their aircraft after recovering from a sensor failure (a failure that ejected them from autopilot and gave them back mostly full control), ignoring their training and many of the alarms the aircraft was rightly blaring at them.

In the second case, a crew that tries to piece together what capabilities they still have after a massive engine failure (an explosion that wrecked most of the other systems with shrapnel), and that stays enough in the loop to decide when the overwhelmed system is giving wrong instructions (transferring fuel from the unaffected tanks to the actually destroyed, leaky ones).

Human factor studies are often fascinating.


This is not about understanding the message, but about switching the user's mental activity. I've found myself in similar situations many times. One example: I tried to pay my bills in an online banking application, but got an error. After several attempts, I actually read the message, and it said "Header size exceed...". That gave me the clue that the app had probably put too much history into cookies. I cleared the browser data, logged in again, and everything worked.

Even though the error message was clearly understandable given my expertise, it took a surprisingly long time to switch from one mental activity - "pay bills" - to another - "investigate a technical problem". And you have to throw away all your short-term memory to switch to another task. So all the rumors about "stupid" users are a direct consequence of how the human mind works.


> This is not about understanding the message, ...

99% of the population have no idea what "Header size exceeded" means, so it absolutely is about understanding the message, if the devs expect people to read the error.


Yeah, I would certainly not expect the user to understand what to do about a "Header size exceeded" error.

But I WOULD expect the user, when sending a message to support, to say they're getting a "Header size exceeded" error, rather than just say "an error".


This seems to be missing the point. Sometimes users see error messages. Sometimes they're good, sometimes they're bad; and yeah, software engineers should endeavor to make sure that error behaviors are graceful, but of all the not-perfect things in this world, error handling is one of the least perfect, so users do encounter unfortunately ungraceful errors.

In that case (and even sometimes in the more "graceful" cases), we don't always expect the user to know what an error message means.


If I can victim-blame for a moment, I don't know what my mom is supposed to do when a streaming service on her TV says there's a problem and will she please report a GUID to the support department.

No, my mom is not eidetic, and no, she's not going to upload a photo of her living room.

Totally agree with you, though, when the full error message is at least capable of being copied to the clipboard.


Most (all?) photo apps include a crop function, allowing your mom to just crop out everything else.

I hope you’re being sarcastic. If not, expecting someone’s parent to know how to use a photo app’s crop functionality just to communicate an error state is a failure of understanding typical streaming app users.

I wasn't being sarcastic. This is not a case of not being capable of doing something, it's about not knowing the functionality exists. Cropping is very simple. I assumed the GP didn't know about it either or he would have taught his mom already.

Could the manufacturer solve this in a better way? Probably but that won't solve the issue the customer has now.


Poe's Law goes both ways. As a matter of fact, my mom invented digital photo cropping (or "pixel array extent adjustment," because even in her prime she wasn't a marketing genius, bless her heart). We know better than to expect her to submit a bug report once she's settled down to watch TV for the evening.

Jokes aside, "upload a photo of her living room" was meant to highlight the ridiculousness of the UX. I believe the designer of that flow had an OKR to decrease the number of reported bugs.



That solves nothing, just describes the problem.

> Instead, it's "When I do X, I get an error".

Worse still, just “it doesn’t work” without even any steps.

I sometimes gave those users an analogy like going to the doctor or a mechanic and not providing enough information, but I don’t think it worked.


My wife’s a doctor. Trust me, this isn’t unique to technical pursuits.

Patient: My foot hurts.

Wife: Which part of it?

Patient: It all hurts.

Wife: Does your heel hurt?

Patient: No.

Wife: Does your arch hurt?

Patient: No.

Wife: Do your toes hurt?

Patient: This one does.

Wife: Does anything but that one toe hurt?

Patient: No.

Wife: puts on a brave smile


> - aptX can do 44/16 in other devices, Sony has LDAC at 24/96 too

FWIW, 44/16 can still sound like garbage if compressed using lossy compression with a low bitrate.

But aptX is over 300 kbps. That's plenty of bandwidth to sound excellent, and I think anybody who says it doesn't sound good is lying to themselves.


This right here, especially for gaming.

When I use my Sony XM5 Bluetooth headphones, the latency is noticeable. Watching videos, the lips don't match the audio. Playing games, I see things before I hear them. It's probably in the ~150-200 ms range for latency.

While gaming, I use a different set of wireless headphones that use a proprietary dongle. If they have any latency at all, I don't notice it.


On the phone?

> How old a saying is caveat emptor?

Old enough to learn that it's a sociopathic stance that has no business in a well-functioning society.

You're arguing in favor of what's essentially a scam.


No not arguing in favor of it, more pointing out that it's nothing new. People have been scamming each other forever.

> it still ran fine (20+ fps)

20 fps is not fine. I would consider that unplayable.

I expect at least 60, ideally 120 or more, as that's where the diminishing returns really start to kick in.

I could tolerate as low as 30 fps on a game that did not require precise aiming or reaction times, which basically eliminates all shooters.


On 10-year-old hardware?

> 1440p and 2160p is a total waste of pixels, when 1080p is already at the level of human visual acuity.

Wow, what a load of bullshit. I bet you also think the human eye can't see more than 30 fps?

If you're sitting 15+ feet away from your screen, yeah, you can't tell the difference. But for most people, with their eyes only being 2-3 feet away from their monitor, the difference is absolutely noticeable.

> HDR and more ray tracing/path tracing, etc. are more sensible ways of pushing quality higher.

HDR is an absolute game-changer, for sure. Ray-tracing is as well, especially once you learn to notice the artifacts created by shortcuts required to get reflections in raster-based rendering. It's like bad kerning. Something you never noticed before will suddenly stick out like a sore thumb and will bother the hell out of you.

