Hacker News | captainkrtek's comments

Will there be an interest in vision based wearables?

Google Glasses - dead

Apple Vision Pro - dead

FB/Meta x RayBan - dead soon(?)

It seems they can’t get over the social hurdle of having a camera strapped to your face, and the effects of that on people around you. I think the tech is neat, but not socially accepted as a concept to make it viable. My sister is big into tiktok and filming all the time, and it personally makes me hesitant to be nearby as I’m not comfortable being filmed all the time.


I don't want people with camera glasses around me either. But the stupid thing is: they don't even need to exist. The Google glass can show its notifications just fine without a camera. My Xreal Air works great without one.

It's the big tech companies that are pushing for pervasive cameras. Not consumers saying they can't live without a camera on their face.


It is almost certainly a problem with size, cost, and features.

The wearables are just too big, too expensive, and the feature set too small.

Much like with VR goggles, every problem they solve is solved far better and more cheaply with another device most people already have and use.

I don't think it has anything to do with the moral or social implications of taking pictures of people privately. The second any of the above are resolved, society will willingly give up even more privacy without a hiccup, as we've done every other time the choice was presented.


Agreed. But perhaps that's the problem? Instead of trying to go instantly mainstream via the consumer market, perhaps the toehold is in niche professional / commercial markets? Or niche consumer markets served by a business (e.g., museums)?

It’s not a tech issue, it’s a marketing issue (and lack of imagination).


I think it goes beyond the social hurdle. I have an Oculus, and I just never use it. A phone or laptop screen generally just feels good enough. It's easier to start and stop using, and it doesn't feel like I'm shutting myself off from the world when I do.

I use my oculuses a LOT. All the time. They're great for gaming and watching content.

50+ weeks? so a year?

I've been in big tech for 12+ years now. The first handful of years are definitely a grind to earn your spot, get a couple promos. After that though, it can become quite a bit easier to coast if that's what you're looking for. People know you, know you're probably valuable cause you're "senior" or "staff" and still here, and likely leave you alone. But yeah, as a newer engineer these days, it still requires the initial commitment to earn the privilege of coasting in a big tech company.


> 50+ weeks? so a year?

Maybe they meant "50+ [hour] weeks"


My biggest problem with usage of an LLM in coding is that it removes engineers from understanding the true implementation of a system.

Over the years, I learned that a lot of one's value as an engineer can come from knowing how things actually work. I've been in many meetings with very senior engineers postulating how something works arguing back and forth, when quietly one engineer taps away on their laptop, then spins it around to say "no, this is the code here, this is how it actually works".


Agreed. I've seen a number of short-form news pieces / docs on the effects of datacenter development across different parts of America. Pollution, noise, light, water impacts, energy costs, etc. There's not a lot to like, and they create very few jobs relative to the community they're in.

AI data centers will be the job destroyers, not creators.

100 local people to maintain the data center while it replaces 1 million people with the AIs running inside


If we can deal with the personal economics of the transition, isn’t freeing up human capital to do something else a good thing?

Yes, unfortunately we cannot deal with the personal economics of such a transition :)

The upper class who holds all the power does not want people to have a good life. They want to extract as much as possible from most of us.

So, no, because said human capital is holding the shorter end of the stick and will be worse off.


CGP Grey once asked "What happens to humans when it becomes uneconomic to employ them?", i.e., when the value of their economic output is functionally zero.

If you like speculative fiction on this topic, read Manna by Marshall Brain while you still can (the author died not long ago, so it may not stay up).

https://marshallbrain.com/manna1


We should just develop cold fusion. It's gotta be easy, right?

I 100% agree that AI data centers are bad for people.

In my opinion, compute-focused data centers are a good product, though. Offering some GPU services might be good, but honestly I'll tell you what happened (similar to another comment I wrote):

AI gave these data center companies tons of money (or they borrowed it), and then they bought GPUs from Nvidia and became GPU-centric (also AI-centric) to jump even further on the hype.

That's the bad part. The core offering of data centers, to me, should be a normal form of compute (CPU, RAM, storage; e.g., YABS performance of the whole server), not just "what GPU does it have".

Offering some GPU on the side is perfectly reasonable where workloads might need it, but overall, compute-oriented data centers seem nice.

Hetzner is a fan favourite now (which I deeply respect), and for good reason; their model is pretty understandable. They offer GPUs too, IIRC, but you can just tell from their website that they love compute.

Honestly, the same is true for most independent cloud providers. The only place we see complete saturation of AI-centric data centers is probably the American trifecta (Google, Azure, and Amazon), and of course Nvidia, Oracle, etc.

Compute-oriented small-to-indie data centers/racks are definitely pleasant, although that market has raced to the bottom. Let's be honest: the real incentive for building software these days is when VSCode forks make billions, so techies tend to question this path, and non-techies usually don't know how to sell/compete in online marketplaces.


I'm no economist, but if (when?) the AI bubble bursts and demand collapses at the current price point of memory and other related components, wouldn't prices recover?

not trying to argue, just curious.


I'm no economist either, but I imagine the manufacturing processes for the two types of RAM are too different for supply to quickly bounce back.


If a theoretical AI bubble bursts, sure. However, the largest-capitalized companies in the world, and all the smartest people able to do cutting-edge AI research, are betting otherwise. This is also what the start of a takeoff looks like.


As a customer of GitHub Actions, anecdotally it feels like GitHub experiences issues frequently enough to make this not a problem.


I've lived in Seattle my whole life, and have worked in tech for 12+ years now as a SWE.

I think the SEA and SF tech scenes are hard to differentiate perfectly in a HN comment. However, I think any "Seattle hates AI" sentiment has more to do with the incessant pushing of AI into all the tech spaces.

It's being claimed as the next major evolution of computing, while also being cited as reasons for layoffs. Sounds like a positive for some (rich people) and a negative for many other people.

It's being forced into new features of existing products, while adoption of said features is low. This feels like cult-like behavior where you must be in favor of AI in your products, or else you're considered a luddite.

I think the confusing thing to me is that things which are successful don't typically need to be touted so aggressively. I'm on the younger side and generally positive to developments in tech, but the spending and the CEO group-think around "AI all the things" doesn't sit well as being aligned with a naturally successful development. Also, maybe I'm just burned out on ads in podcasts for "is your workforce using Agentic AI to optimize ..."


I think the obvious things are:

- Deviation in consistency/texture/color/etc.

- Obvious signs related to the above (eg: diarrhea, dehydration, blood in stool).

Ultimately though, you can get the same results by just looking down yourself and being curious if things look off...

tldr: this feels like literal internet-of-shit IoT stuff.


Do you think they're using the guise of "it's solar radiation" as cover to do a software update that fixes a more problematic bug, and perhaps tangentially there are some changes in said update to improve some error-correcting code (e.g., related to detecting spurious bit flips)?


Not in aviation.


Counterpoint: Boeing MCAS tho


Does the 737-Max not count as aviation anymore?


It does. But the Max issue was quite different from this one.


No, that would be straight to jail.


Remind me who from Boeing went to jail?


Airbus is in Europe where the Rule of Law still exists


That’s what we naively thought here too.


Look at how the US government treats financial behemoths that actively harm all of mankind vs. how the EU treats them. There is way more to this topic, obviously (who wants to harm their local company?), but generally the US is pro-company while Europe is pro-people.


Deutsche Bank and HSBC, two major European banks, have repeatedly admitted they have engaged in money laundering activities for Russia, drug cartels and terrorists and have consistently failed to meet their AML obligations. The US is the only entity that’s going after these banks for these issues winning significant judgments and even with that backdrop you don’t see any EU enforcement.


Yeah I don't buy it either.

If it was really 'solar radiation' there would be more small details.


Reading the Airbus press release, I wonder if this is what happened:

Solar radiation event led to alpha particle induced data corruption in a flight control computer memory (could be DRAM, SRAM, on-chip cache, registers...). These failures are supposed to be transient (reboot and all is well).

This is an anticipated failure mode. Only one (of three?) computers should be affected by such a failure and therefore the remaining two keep on running the plane.

But what happened is <something> went wrong with the failover/voting mechanism (as often happens with one-off seldom-executed failover code). The result was no flight control computer functionality until the entire system was rebooted. Hence the emergency landing.
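The 2-out-of-3 voting described above can be sketched roughly like this (a hypothetical Python illustration of the general technique, not Airbus's actual design; the `vote` function and values are made up):

```python
# Minimal sketch of 2-out-of-3 majority voting over redundant
# flight-control computer outputs. A single corrupted channel is
# outvoted by the other two; the failure mode speculated above would
# be a bug in logic like this, not in the redundancy concept itself.
def vote(a: int, b: int, c: int) -> int:
    """Return the majority value, or raise if all three disagree."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: all channels disagree")

# One channel hit by a bit flip: the other two outvote it.
assert vote(100, 100, 100 ^ (1 << 7)) == 100
```

The hard part in practice is the "all channels disagree" branch and the surrounding failover code, which is exactly the seldom-executed path the comment points at.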

The fix is to address that software error, with perhaps a secondary fix TBD to harden the hardware (add some shielding perhaps).

The fact that they talk about data corruption and not just a malfunction suggests alpha bit flip rather than latch-up.

Then send the whole statement through a French to English translator to make it a bit more confusing.


I would say it's pretty detailed: an unknown interference caused a single CRC-protected 32-bit word to be corrupted simultaneously (by timestamp) in both the flight control hardware and the black box data recorder.

My concern would be: what error correction mechanism did or did not catch the corruption in memory, and why did it not recover without critical impact to operations?
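As an aside on the detect-vs-correct distinction: a CRC over a word can detect a single-bit flip but carries no information about which bit flipped, so it cannot repair the word; correction needs something like SEC-DED ECC. A small Python illustration (using the standard library's `zlib.crc32` purely as a stand-in for whatever CRC the real system uses):

```python
import zlib

# A 32-bit word and its stored CRC.
word = (0xDEADBEEF).to_bytes(4, "big")
stored_crc = zlib.crc32(word)

# Simulate a single bit flip in the stored word.
corrupted = bytes([word[0] ^ 0x01]) + word[1:]

# The CRC check detects the mismatch...
assert zlib.crc32(corrupted) != stored_crc
# ...but nothing here says WHICH bit flipped, so the system can only
# discard or refetch the word, not repair it in place.
```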


> corrupted simultaneously

This sounds like a software bug.

Something like - {copy a to b, checksum a--b}

Instead of - {copy a to t, checksum a--t, copy t to b, checksum a--b}

I bet the fix is along these lines, with the caveat of real time systems/etc.
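The two sequences above can be sketched in Python (hypothetical illustration only; `checksum` is just a stand-in for whatever integrity check the real system uses, and in real hardware the copy step is where corruption could creep in):

```python
import zlib

def checksum(buf: bytes) -> int:
    return zlib.crc32(buf)

def copy_unverified(a: bytes) -> bytes:
    # {copy a to b, checksum a--b}: if the copy itself corrupts b
    # before the checksum of b is recorded, both sides of a later
    # comparison are derived from the same bad data.
    b = bytes(a)
    assert checksum(b) == checksum(a)
    return b

def copy_verified(a: bytes) -> bytes:
    # {copy a to t, checksum a--t, copy t to b, checksum a--b}:
    # verify the intermediate against the source before committing
    # to the destination, then verify the destination too.
    t = bytes(a)
    assert checksum(t) == checksum(a)
    b = bytes(t)
    assert checksum(b) == checksum(a)
    return b
```

In Python the copies never actually corrupt, so both paths succeed; the point is only where the checks sit relative to the copies.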


My guess is they haven't managed to point to the single memory bit which was flipped to cause this result.

The software update is probably more along the lines of 'let's just introduce a watchdog task which resets the system if the output deviates too far from the input for too long'.
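A watchdog of that kind might look like this (a simplified single-step sketch; names, thresholds, and the dict-based state are all made up for illustration, and real flight software would run this as a separate supervised task):

```python
def watchdog_step(input_val, output_val, state, *, max_dev=10.0, max_ticks=5):
    """Count consecutive ticks where the output deviates too far from
    the input; signal a reset once the deviation persists too long."""
    if abs(output_val - input_val) > max_dev:
        state["ticks"] = state.get("ticks", 0) + 1
    else:
        state["ticks"] = 0
    return state["ticks"] >= max_ticks  # True => reset the system

state = {}
# Output tracking input: no reset.
assert not watchdog_step(100.0, 101.0, state)
# Output stuck far from input: reset fires on the 5th consecutive tick.
for _ in range(4):
    assert not watchdog_step(100.0, 500.0, state)
assert watchdog_step(100.0, 500.0, state)
```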


No, because aerospace is not garden-variety Silicon Valley webshittery.

There is a slightly different level of discipline and engineering ethics at play.


This is excellent and aligns with my own experience.

During my day I try to minimize interruptions by batching them. I will largely ignore Slack, and as notifications come in I glance and determine quickly if it really is urgent or if it can wait. If it can wait, I will punt all of those messages to a "remind me later" of a few hours, and get back to my task. I think this keeps my "recovery time" small as I'm not looking too close at these messages. It's not perfect, but definitely helps over pausing my "real work" to fully dive into each notification or ask.


Then in your next performance review you get dinged as "not responsive", "not a team player". Trying to work in peace is an instant loss nowadays; just play the performative visibility game like all the quickly promoted people in the office do. Why do you think your management cares about getting things done? If they did, they would reward it.


This has not been my experience, at least at the more remote-friendly places I've worked. However, I can see this at companies with a different culture / pace / attitude.

In my most recent role, the entire company of ~200 was remote, so there was rarely an expectation of immediacy in a response. If something was truly urgent, you'd be paged.


I have to agree — in general, most people have a good sense of what's urgent or not and with a few kind nudges, they align quickly.

