
If you are referring to the alert stage of the emergency braking system, triggering it should be rare if you drive reasonably well. It is also most likely a situation in which you could benefit from a little more braking force.

If you decide to swerve, the additional weight at the front will help you to initiate the turn, and good systems will then reduce the braking force at the right moment to give you the most traction when cornering.


I remember two events where the activation was completely out of place and felt like it was endangering me rather than protecting me.

- Driving from Tahoe to SF, where the limited lane visibility due to the slope and a slight twist made the system think I was going to hit the car I was overtaking (from the second, left-most lane). This really felt dangerous, since it activated mid-turn and messed with the car's balance.

- The other event was a roundabout where a car yielding to merge behind some plants jump-scared the braking system. At 10-15 mph or so the unexpected braking wasn't dangerous, though; worst case, you get rear-ended at low speed.

Beyond that, overtakes where you slow down as you return to your lane may trip the system, but those cases are fair, even if the following distance the system aims for is a bit too cautious. I reckon my Mom would be holding the roof with both hands if she were there, while my Dad & siblings would be unfazed.


Yeah, most AEB systems I've used work quite well. I've had a couple false alarms (just at the warning stage) where I can at least understand why they happened from a system perspective, even if there was no risk.

I did find "normal" mode on the Model 3 was way too sensitive, but I set it to "late" and it's been fine ever since.


Perhaps this is what you are looking for: https://www.deepl.com/en/write

It corrects spelling errors and improves awkward wording. You can then go and choose alternative sentences or words. Just don't expect any sort of deeper intelligence.


They are analysing VLMs here, but it's not as if any other neural network architecture wouldn't be vulnerable. We have seen this in classifier models that can be tricked by innocuous-looking objects, we have seen it in LLMs, and we will most likely see it in any end-to-end self-driving model.

If an end-to-end model is used and there is no second, more traditional self-driving safety stack, like the one Mercedes will use in their upcoming Level 2++ driving assistant, then the model can be manipulated essentially without limit. Even a more traditional stack can be vulnerable if not carefully designed. It is realistic to imagine that one printed page stuck on a lamppost could cause the car to reliably crash.


> It is realistic to imagine that one printed page stuck on a lamppost could cause the car to reliably crash.

Realistic, yes. But that'd still be a symptom of architectural issues in the software.

Conceptually, the priorities of a car are (in decreasing order of importance): not hitting moving or stationary objects or people, allowing emergency vehicles to pass unhindered, staying on a drivable surface, behaving predictably enough to keep other road users from crashing, following road signs and traffic laws, and making progress towards the destination (you can argue about the order of the last three). Typically you'd want each of these handled by its own subsystem, because each is a fairly specialized task. A system that predicts the walking paths of pedestrians won't be good at finding a route to Starbucks.

The "follow road signs and traffic laws" layer is easily tricked, as in this article or by drawing road lines with salt. But that should never crash the car, because not hitting anything and staying on the road are higher priorities. And tricking those systems is much harder.
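The layered-priority idea above can be sketched as a simple arbiter: every subsystem proposes a speed ceiling, and the most restrictive one wins, so a spoofed sign can raise the "legal" limit but can never override collision avoidance. All the names here are hypothetical, purely for illustration.

```python
# Hypothetical sketch of priority-ordered arbitration between driving
# subsystems. Each subsystem proposes a speed ceiling; the arbiter takes
# the most restrictive one, breaking ties toward the higher-priority
# source, so a tricked sign reader cannot cancel an emergency stop.
from dataclasses import dataclass

@dataclass
class Proposal:
    source: str       # which subsystem produced this
    priority: int     # 0 = highest (collision avoidance)
    max_speed: float  # speed ceiling this subsystem demands, in m/s

def arbitrate(proposals):
    """Return the winning proposal: lowest speed ceiling first,
    higher-priority (lower number) source on ties."""
    return sorted(proposals, key=lambda p: (p.max_speed, p.priority))[0]

cmds = [
    Proposal("route_planner",       priority=5, max_speed=25.0),
    Proposal("traffic_sign_reader", priority=4, max_speed=13.9),  # possibly spoofed sign
    Proposal("collision_avoidance", priority=0, max_speed=0.0),   # object ahead: stop
]

winner = arbitrate(cmds)
print(winner.source, winner.max_speed)  # collision_avoidance 0.0
```

A manipulated sign could only ever loosen the `traffic_sign_reader` proposal; the minimum over all subsystems still enforces the stop.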


Good, and not because of the diversity drama that the US government wants to shoehorn in here. Any font that makes the uppercase "i" and the lowercase "L" look the same is absolute garbage. Yes, I have a strong opinion about this!


A lot of people installed malware and, to be honest, nothing really happened. They might have had to change their passwords, but it could have been much, much worse if Android didn't have good sandboxing.

I hope that Flatpak and similar technologies are adopted more widely on desktop computers. Now that such security technology exists, giving every application full access to the system is no longer appropriate.


Why do you need Flatpak for sandboxing?

I really dislike that Flatpak installs multiple identical copies of dependencies.

Just give me some easier to use tools to configure the access that each application has.


> Why do you need Flatpak for sandboxing?

You don't, but as far as I know, Flatpak or Snap are the only practical, low-effort ways to do it on standard distros. There's nothing stopping flatpak-like security from being combined with traditional package management and shared libraries. Perhaps we will see this in the future, but I don't see much activity in this area at the moment.
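To make the "flatpak-like security with traditional packages" idea concrete: Flatpak's sandbox is largely bubblewrap (`bwrap`) underneath, and nothing in principle ties it to bundled runtimes. A distro package could ship a small permission manifest that a launcher translates into bubblewrap flags while still using the shared system libraries. A rough sketch; the manifest format and helper function are invented for illustration:

```python
# Rough sketch: translate a Flatpak-style permission manifest into a
# bubblewrap (bwrap) command line for a normally packaged application.
# The manifest keys and this helper are made up; the bwrap flags are real.

def bwrap_args(manifest):
    args = ["bwrap",
            "--unshare-all",            # drop network, PID, IPC, ... namespaces
            "--die-with-parent",
            "--ro-bind", "/usr", "/usr",  # shared system libraries stay shared
            "--proc", "/proc",
            "--dev", "/dev"]
    if manifest.get("network"):
        args += ["--share-net"]         # opt back in to the host network
    for path in manifest.get("rw_paths", []):
        args += ["--bind", path, path]  # writable access only where granted
    return args + [manifest["command"]]

argv = bwrap_args({"command": "/usr/bin/myapp",
                   "network": False,
                   "rw_paths": ["/home/user/Documents"]})
print(" ".join(argv))
```

The point of the sketch is that the dependency story (shared `/usr`) and the confinement story (namespaces plus explicit binds) are independent choices.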


This is simply not true. Bird flu mainly spreads among wild birds and that is where it has its reservoir. It would still exist even if the world was free of bird farms. It also usually doesn't spread between farms because, in the event of an outbreak, all the animals on the affected farm are culled. At most, bird farms slightly increase overall contact between birds and humans.


I don't have a background in law, but here are some suggestions. The German penal code often imposes harsher punishments for the same offense if a weapon was involved. Rape, for example, carries a minimum sentence of two years. If a weapon is present, it is a minimum of three years. If the weapon is used, the minimum sentence is five years.

Before the change, date rape drugs would have fallen under a minimum of three years because of a separate clause.

Classifying them as weapons would also affect crimes other than rape.

Additionally, if legal substances can be used as date rape drugs, classifying them as weapons would give the police more authority to act in certain situations.


It's pretty accurate. I was a bit shocked when I saw that room names were not encrypted. I thought that was such a basic privacy requirement, and it's not hard to implement when you already have message encryption.

Matrix seems to have a lot of these structural flaws. Even the encryption praised in the Reddit post has had problems for years where messages don't decrypt. These issues are patched slowly over time, but you shouldn't need to show me a graph demonstrating how you have slowly decreased the decryption issues. There shouldn't be any to begin with! If there are, the protocol is fundamentally broken.

They are slowly improving everything, with the emphasis on "slowly". It will take years until everything is properly implemented. To answer the question of whether the future of the protocol is promising, I would say yes. This is in no small part because there are currently no real alternatives in this area. If you want an open system, this is the best option.


The decryption problems I experienced were fixed a while ago. There was a push to fix these last year or the year before that, and at this point I'm pretty sure only some outdated or obscure clients with old encryption libraries still suffer from these problems.

The huge amount of unencrypted metadata is pretty hard to avoid with Matrix, though. It's the inevitable result of stuffing encryption into an unencrypted protocol later, rather than designing the protocol to be encrypted from the start.

I've had similar issues with other protocols too, though. XMPP wouldn't decrypt my messages (because apparently I used the wrong encryption for one of the clients), and Signal got into some funky state where I needed to set it up again and delete all of my old messages before I could use it again. Maintained XMPP clients (both of them) seem to have fixed their encryption support, and Signal now has backups, so none of these problems should happen again, but this stuff is never easy.


Yes, messaging protocols, especially federated ones, are never easy. I just wish we could have skipped the three or four years when Matrix was basically unusable for the average user because end-to-end encryption was switched on by default. Perhaps a clean redesign would have been better. Now they have to change the wheels on a moving car.


> These issues are patched slowly over time, but you shouldn't need to show me a graph demonstrating how you have slowly decreased the decryption issues. There shouldn't be any to begin with! If there are, the protocol is fundamentally broken.

This is wrong, because afaik these errors happen due to corner cases and I really don't like the attitude here.


It's not just a corner case. The issue was so prevalent for years that if it was limited to just a few corner cases, the entire protocol must consist of nothing but corner cases.

It frequently occurred on the "happy path": on a single server that they control, between identical official clients, in the simplest of situations. There really is no excuse.

I'm not saying that building a federated chat network with working encryption is easy. On the contrary, it is very hard. I'm sure the designers had the best intentions, but they simply lacked the competence to overcome such a challenge and ensure the protocol was mostly functional right from the outset.


> The issue was so prevalent for years that if it was limited to just a few corner cases, the entire protocol must consist of nothing but corner cases.

for me it wasn't really; occasionally it would hit me, but mostly it worked, and I have been using it for encrypted communication since 2020.

> It frequently occurred on the "happy path": on a single server that they control, between identical official clients, in the simplest of situations. There really is no excuse.

There still can be technical corner cases in the interaction of clients

a talk for details: https://www.youtube.com/watch?v=ZUSucR2axWI

> I'm sure the designers had the best intentions, but they simply lacked the competence to overcome such a challenge and ensure the protocol was mostly functional right from the outset.

Well, even if this was true, they were still brave enough to try, and they eventually pulled it off. Perhaps complain to the competent people who haven't even tried.


> for me it wasn't really; occasionally it would hit me, but mostly it worked, and I have been using it for encrypted communication since 2020.

I think the statistic said that around 10% of users receive at least one "unable to decrypt" message on any given day. That's a lot. Perhaps not for devs who are accustomed to technical frustrations, but for non-technical people, that's far too frequent. Other messaging systems worked much better.

> There still can be technical corner cases in the interaction of clients

> a talk for details: https://www.youtube.com/watch?v=ZUSucR2axWI

You linked to a German political talk show. If you wanted to show me the talk in which the guy listed reasons such as "network requests can fail and our retry logic is so buggy that it often breaks" and "the application regularly corrupts its internal state, so we have to recover from that, which is not always easily possible", let's just say I wasn't that impressed.

> Well, even if this was true, they were still brave enough to try, and they eventually pulled it off. Perhaps complain to the competent people who haven't even tried.

It isn't a problem that the Matrix team are not federated networking experts. At the time, they had already received millions in investment. That's not FAANG money, but it's still enough to contract the right people to help design everything properly.

I'm not mad at them. Matrix was a bold effort that clearly succeeded in its aims. I'm just disappointed that it was so unreliable for such a long time, and still is to some extent.


Correct link: https://www.youtube.com/watch?v=FHzh2Y7BABQ

> I wasn't that impressed.

If you think I want to impress you, you are wrong.


Once again, we have the situation where someone uses an Apache or BSD licence, only to then wonder why others do exactly what the licence allows. If you want others, especially companies, to play nice, you have to make them do so. Use GPL or AGPL.

Let's hope Rebble doesn't get steamrollered. They did good work when the original company failed its users.


Perhaps a trusted execution environment based anti-cheat system could be possible.

I think Valve said something about working with anti-cheat developers to find a solution for the Steam Deck, but nothing happened. Perhaps they will do something this time.

With a TEE, you could scan the system or even completely isolate your game, preventing even the OS from manipulating it. As a last resort, you could simply blacklist the machine if cheats are detected.

There would probably still be some cheaters, but the numbers would be so low as to not be a problem.
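The "prove the game ran untampered" part of the TEE idea is essentially remote attestation: code inside the enclave signs a server-supplied nonce together with a measurement (hash) of the game, using a key that never leaves the hardware. A toy sketch, with an HMAC over a shared key standing in for the enclave's sealed key and a real attestation quote:

```python
# Toy remote-attestation sketch. In a real TEE the key is hardware-sealed
# and the measurement covers the loaded code pages; here an HMAC over
# (nonce + measurement) stands in for the signed attestation quote.
import hashlib
import hmac
import os

SEALED_KEY = b"key-known-only-inside-the-enclave"  # stand-in for the hardware key

def enclave_quote(nonce, game_binary):
    """What the enclave would produce: a measurement of the game plus
    a keyed 'signature' binding it to the server's fresh nonce."""
    measurement = hashlib.sha256(game_binary).digest()
    quote = hmac.new(SEALED_KEY, nonce + measurement, hashlib.sha256).digest()
    return measurement, quote

def server_verify(nonce, measurement, quote, expected_measurement):
    """Server side: check the quote is genuine AND the measurement
    matches the known-good game binary."""
    ok_sig = hmac.compare_digest(
        quote, hmac.new(SEALED_KEY, nonce + measurement, hashlib.sha256).digest())
    return ok_sig and measurement == expected_measurement

nonce = os.urandom(16)
binary = b"\x7fELF...game code..."
expected = hashlib.sha256(binary).digest()

m, q = enclave_quote(nonce, binary)
print(server_verify(nonce, m, q, expected))   # genuine game: True

# A patched (cheating) binary yields a different measurement and fails:
m2, q2 = enclave_quote(nonce, binary + b"aimbot")
print(server_verify(nonce, m2, q2, expected))  # tampered game: False
```

The fresh nonce is what prevents a cheater from replaying an old quote from an unmodified run; the blacklist-on-detection step from the comment above would then key off the attested machine identity.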


Maybe the user friction would be too much, but I'd be happy for the system to just straight up reboot for games that require anti-cheat. So while that game is running, the system is in a verified state, but once you close the game, all of your mods and custom drivers can be loaded just fine.

