So what ingenious method should they have deployed to prevent all programming errors beforehand, and how should they have handled it? Advise users to fall back to non-E2EE SMS?
The team deployed logging as fast as they could, successfully detected the issue as soon as it happened again, and deployed a fix as fast as possible. What should they have done?
If you only recommend chat apps with a perfect track record, you're basically recommending chat apps with an internal policy of not disclosing vulnerabilities, and ones that downplay any vulnerabilities that do get revealed.
I don't think anyone takes issue with the fact that the mistake was made or that it was really hard to track down. Shit happens.
How you handle it is everything. No communication on it and issuing no warning to their users for a critical bug that risked user privacy in a substantial way for 7 months is unacceptable for an app that calls itself secure, full stop.
> just pointing out that "there are audits, why does it have such bugs" doesn't tell the entire story.
So? Isn't that the point though? Shouldn't regular audits have caught this issue? I thought this being 'open source' would have made it even easier.
Which leads me to believe a team with ~$60M in funding was unable to fix this issue with any urgency.
Remember, this issue was open for half a year with users noticing it. No matter how you slice it, that does not give me any more confidence in Signal being secure.
>So? Isn't that the point though? Shouldn't regular audits have caught this issue? I thought this being 'open source' would have made it even easier.
You have it the wrong way around. Testing, audits, and open source are all best practices. They should be done. None of them are guarantees of security.
Open source is not a guarantee of finding all bugs; it's a necessity to allow anyone to look for bugs (and backdoors).
Audits cannot be passed. They can only be failed. Kind of like how RNG tests cannot be passed, they can only be failed. Example: use SHAKE256 to extrude a keystream from the publicly known initial value 0x00. It will not be secure, but it will pass any statistical test.
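You can demonstrate that point in a few lines. Here's a sketch using Python's hashlib; a simple monobit frequency count stands in for a real statistical test suite like NIST SP 800-22 or dieharder:

```python
import hashlib

# Extrude a 1 MB keystream from the fixed, publicly known seed 0x00.
# Everyone knows the seed, so this stream offers zero security, yet its
# output is statistically indistinguishable from random noise.
keystream = hashlib.shake_256(b"\x00").digest(1_000_000)

# Crude monobit test: roughly half the bits should be set.
ones = sum(bin(byte).count("1") for byte in keystream)
total_bits = len(keystream) * 8
bias = abs(ones / total_bits - 0.5)
print(f"bit bias: {bias:.6f}")  # tiny: the stream "passes" despite being worthless
```

The test measures only the distribution of the output, not whether an attacker can reproduce it, which is exactly why passing statistical tests proves nothing about security.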
>this issue does not give me any more confidence in Signal being secure.
No application can actively prevent a bug like this. As the author of a high-assurance comms system, here's what I wrote under its threat model:
"If hardware such as computers/optocouplers user has bought is pre-compromised to the point it actively undermines the security of the user, TFC (or any other piece of software for that matter) is unable to provide security on that hardware."
This also applies to software issues that actively undermine the security of the user. So the thing is, a software bug that outputs sensitive data to the wrong contact cannot be absolutely prevented. You would need a friendly MITM-guard node running a Google-grade image recognition algorithm that detects you're trying to send a legal document to the wrong client, or a nude to not-your-SO.
Again, bugs are unavoidable. What matters is the incident response, and whether Signal is actively trying to protect you from everyone, including themselves.
Another PoV: If you punitively fire people that get caught in social engineering pentests, you're replacing a person who now has real-life experience with social engineers, with someone who may or may not have such experience.
Sure, if the person fails multiple times, it's time to let them go, but Signal's reaction is the mark of a good employee who takes personal responsibility for making sure it won't happen again.
I'm extremely careful about what I recommend, and I have serious trouble finding a way to agree with your assessment that a rare bug being open for six months is of serious concern. It wasn't being sat on for six months, but you seem very keen on giving that impression. Would you care to elaborate?
>If security was really that important to Signal, where was the urgency there?
If the cause is a random database key collision, you obviously can't discover it immediately. You have no idea what's causing it, so you have to add logging and wait for it to trigger again.
>and what were they doing? Testing cryptocurrency payments.
Yeah, I'm sure they just decided to abandon their core value because they wanted to hurry out a feature they had advertised to no one and were thus in no rush to deploy.
If this were an actual issue, I wouldn't care even if it was my own app; I would pour a truckload of bricks on top of it.