> I'd expect a decent food journalist to ask whether the conditions of their endorsement payments required their support in times like these.
From the article:
> Mr. Chang and Mr. Samuelsson did not respond to request for comment. In a text, Ms. Ray said she had written her letter to the California legislature “after giving the issue much thought” and that she stood by it.
I've been following this on X/Twitter, and I think one of the most egregious things worth pointing out is that folks from Phrack reached out to Proton in private multiple times, and Proton ghosted them. Proton only engaged with them, and then reinstated the accounts, after Phrack went public and their X/Twitter post went viral.
It also looks like one of the writers filed an appeal with Proton, and Proton denied it: they manually investigated the incident, refused to reinstate the account, and only relented after this got attention on X/Twitter.
So make no mistake about it: Proton didn't just disable the accounts after whatever CERT complained, which would have been bad enough - they also didn't do anything about it until this started getting lots of eyes on social media.
Proton does not require a shred of proof that you are a real human being either, FYI. I'm not actually attacking them for this specifically, because I feel that we need privacy-focused tools. However, the fact that I was able to create a few hundred Proton email addresses in seconds by injecting usernames/passwords was scary, even to me. I'm surprised they aren't on spam block lists worldwide. Their captcha is child's play that a script can defeat with simple image examination. I encourage them to buff up their spam controls, just a bit, and decrease moderation by a lot unless they can promptly deal with cases such as this.
Their controls are buffed up: all of those accounts are linked by having been created from the same IP address. If one is blocked, they all are. If you try to circumvent this with a well-known proxy (such as Tor or a VPN), you will find that captcha verification is no longer offered as an option.
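A rough sketch of the kind of linkage being described (purely illustrative Python; the signup data and the group-and-block logic are my assumptions, not Proton's actual implementation):

```python
from collections import defaultdict

# Hypothetical signup log: (account, signup_ip) pairs -- made-up data.
signups = [
    ("alice@example.com", "198.51.100.7"),
    ("bot001@example.com", "203.0.113.42"),
    ("bot002@example.com", "203.0.113.42"),
    ("bot003@example.com", "203.0.113.42"),
]

# Group accounts by the IP they were created from.
accounts_by_ip = defaultdict(set)
for account, ip in signups:
    accounts_by_ip[ip].add(account)

def block_linked_accounts(flagged_account: str) -> set[str]:
    """Blocking one account cascades to every account created from the same IP."""
    blocked = set()
    for accounts in accounts_by_ip.values():
        if flagged_account in accounts:
            blocked |= accounts
    return blocked

# Blocking bot002 takes down all three accounts created from 203.0.113.42.
print(block_linked_accounts("bot002@example.com"))
```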
That definitely doesn't look good from a privacy point of view. If they do not want abuse, they ought to use other means; they should not associate IPs with account creation. That is kind of scary. In fact, if what you have said is true, then one's account can be blocked because of someone else's mischief on the same IP, and sharing an IP is not uncommon at all.
Ever heard of linkable systems? They can detect when multiple proofs come from the same person, even if they can't identify who that person is. The system can also force reuse of the same secret, which stops the "infinite proof factory" problem.
Unique secrets can also be tied directly to identity. For example, if the ZKP is about knowledge of a secret key bound to your identity, then you can't just mint 5000 independent proofs unless you also have 5000 identities.
There's also the concept of nullifiers, used in privacy-preserving identity protocols. A nullifier is basically a one-time marker derived from your identity secret that prevents double-use of a proof.
On top of that, zk-SNARK-based credentials or verifiable credentials can prove "I am a unique registered person" without revealing which one. These systems enforce uniqueness at registration, so you can't magically spawn 5000 ZKPs that all look like 5000 humans. Similar ideas exist with linkable ring signatures and even biometric-based ZK proofs.
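A toy sketch of the nullifier idea (plain hashes stand in for what would really be a ZK circuit, and all the names here are invented for illustration):

```python
import hashlib
import secrets

# Each registered person holds a private identity secret; the registry stores
# only a commitment to it, established once at registration.
identity_secret = secrets.token_hex(32)
commitment = hashlib.sha256(f"commit|{identity_secret}".encode()).hexdigest()
registered_commitments = {commitment}

def nullifier(secret: str, scope: str) -> str:
    """One-time marker derived from the identity secret for a given scope
    (e.g. a specific signup round). The same secret in the same scope always
    yields the same nullifier, so double-use is detectable without ever
    revealing which registered identity produced it."""
    return hashlib.sha256(f"nullify|{scope}|{secret}".encode()).hexdigest()

seen_nullifiers = set()

def verify(proof_nullifier: str) -> bool:
    # A real system would also check a ZK proof that the nullifier was derived
    # from *some* commitment in registered_commitments without saying which.
    # Here we only model the uniqueness check.
    if proof_nullifier in seen_nullifiers:
        return False  # same person trying to mint a second "I'm human" proof
    seen_nullifiers.add(proof_nullifier)
    return True

n = nullifier(identity_secret, scope="signup-2025")
print(verify(n))  # True: first use accepted
print(verify(n))  # False: a second proof from the same secret is rejected
```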
So there are plenty of ways to counteract your "5000 ZKPs per human" story (what's usually called a Sybil attack).
If you're being pedantic, yes: a bare ZKP alone doesn't enforce "one proof = one person", but ZKP + uniqueness enforcement (nullifiers, credentials, commitments, etc.) does, and that's what I had in mind. I thought it was obvious, but then again, nothing is obvious, and I should have specified. My bad.
In any case, people ought to know just how powerful and useful these ZKP-based systems can be when designed properly. I think this is the only way forward if we want to preserve our privacy, and at the same time we want to prove we're human without sacrificing anonymity, or verify we know the password without revealing it, or prove we're eligible to vote without revealing our identity, or demonstrate we meet age requirements without showing our birthdate, or verify we have sufficient funds without disclosing our balance, or show we're authorized to access something without revealing our credentials, or verify our qualifications without exposing personal details, and so on.
Edit: excuse the technical brain dump, I literally just woke up. I hope this helps to clear up some things, however.
I dropped Proton when a ton of services (all the major A- and B-tier cloud providers I tried, for starters) could not or would not activate an account with a Proton email address.
Email is critical infrastructure these days. Most people have neither the time nor the will to deal with emails failing to be sent or delivered.
I'll go out on a limb and say it: it's an American cybersecurity agency. Proton's CEO/Proton[1] loves the current US admin. I wouldn't be surprised if they comply now and ask questions later, if at all.
1. According to the now-deleted Reddit comment from the official Proton account glazing Republicans, so I assume they were speaking on behalf of all of Proton. https://theintercept.com/2025/01/28/proton-mail-andy-yen-tru.... I have zero evidence except for the CEO's questionable public statements, but I wouldn't be surprised if Proton turned out to be the 21st century Crypto AG.
So clear that the only evidence you can present for it is the CEO saying a thing or two that doesn't automatically spit on the current administration?
> Proton's CEO/Proton[1] loves the current US admin
The CEO once expressed support for Gail Slater as head of antitrust and subsequently criticized lack of effective work towards tech regulation on the Democratic side in the same social media thread.
Calling that "love for the current US admin" (which hadn't even taken office when those statements were made) is pure disinformation.
Half the American tech landscape is either running toward Trump's bed or bending right down and making all the right mating signals in hopes of some interest, but a few pro-Republican comments from the Proton CEO should immediately make this company deeply suspect of being a honeypot?
People of all kinds can say certain positive things about the Republican Party for different reasons in specific contexts and not be fanatics you know. That's how using actual reasoning and nuanced discourse works in the world of not throwing your brain in the garbage through ideological rigidity.
Why should there be fallout from supporting the current admin? Tech companies colluded with the government during the Biden administration to censor American citizens.
I never saw any outrage. Only memory holing and denial
> Why should there be fallout from supporting the current admin?
Well, why or why not doesn't matter; there _was_ backlash. And to my recollection, he made some rather bizarre defensive posts on Reddit that were later deleted and replaced with a corpo response.
Ideological rigidity or not, I'll bet dollars to donuts that Proton disabled the accounts at the behest of an American agency. All the highfalutin talk is missing my main point.
Many companies are only getting bigger and more global, so it is easier for them to ignore complaints until the media picks them up. At that scale, complaints don't threaten revenue until they hit the news. The ecosystem wasn't so global and instant in the past.
Sadly, Proton was, until now, a serious and perhaps leading contender for where I might migrate my email as I reduce my dependence on Google. They felt more credible than Tutanota, and less mainstream corporate than Fastmail. Not sure where to look now.
They say: "Regarding Phrack’s claim on contacting our legal team 8 times: this is not true. We have only received two emails to our legal team inbox, last one on Sep 6 with a 48-hour deadline. This is unrealistic for a company the size of Proton, especially since the message was sent to our legal team inbox on a Saturday, rather than through the proper customer support channels."
You'll note that Proton's PR only mentions the second date - " last one on Sep 6 with a 48-hour deadline."
Proton doesn't mention that the first email from Phrack which Proton ignored was weeks prior to that, which is what led to the second email in the first place.
You'll also note that Proton doesn't mention that their Abuse Team refused to re-enable the account after the article author went through the appeals process, as per Phrack's timeline at the top of their article.
That's a great point. I guess at this point it'd be ideal for them to treat this as an incident and do a proper postmortem with timelines and decision calculus.
But that would be contrary to their clear intention thus far: to sweep this under the rug. /s
I had previously liked Proton. I started seeing bits and pieces of info about their security being lackluster over the past year or so, causing doubt about their credibility. I'm definitely done with them after this.
This is honestly sad to see. I use Proton and advocate it to others. This does make me rethink my position somewhat - although I’d argue it’s still better than Google / Microsoft-owned email services.
To be honest, I've found Proton's public customer service representatives to be very duplicitous, so it's hard to take their word at face value. It's pretty ridiculous to see their response to legitimate concerns start with: "That doesn't sound right..." 80-90% of the time.
The whole "we have only received two emails" is a classic move of every company caught with their pants down. Considering Proton's history, they don't get the benefit of the doubt on this one.
As for the "company size excuse" sorry but considering the business you claim to be in (the private and secure email), having an on-call skeleton crew legal team available over the weekend for urgent requests is a bare minimum (and I'm pretty sure they have people available to hand over everything the cops request if "the proper process is followed").
Remember that they have turned over information in less than 24 hours before (for what they call an extreme case of course). So the "size" excuse doesn't hold. Doesn't matter how urgent it is, if they are the small bean they claim they are, there is no chance they can have a turnaround of less than 24 hours.
Again, it's not what they did that's the biggest issue, it's the coverup. Just like last time they got in hot water. Because the coverup raises a lot more questions.
If you don't have enough people to run your business you're doing it wrong. If you don't have enough money to hire people for your business, it's not a viable business.
> having an on-call skeleton crew legal team available over the weekend for urgent requests is a bare minimum
I don't know about Switzerland, but in Germany, no company will be available "over the weekend". Almost everything on the internet in DE is Mo-Fr 9-17.
I'm having the same exact issue. I don't understand the argument that many here seem to be implicitly making, that "businesses should be allowed to ban as many people as they can without using automated technology." First of all: why? Why should a business be allowed to ban a person so long as they're not using technology to do so? And second, this would still allow a big business like MSG to ban significantly more people than a smaller business, just by virtue of being able to hire more security staff.
First just a disclaimer that I am not a proponent of facial recognition technology (I think it has a number of significant faults).
That being said, it does not seem like facial recognition technology is the issue here, at all.
The crux of the matter appears to be this:
> Businesses generally have the right to decide whom they want to do business with, as long as they are not discriminating by ethnicity, sex, religion, disability or another protected class.
This seems to be the issue that people are fundamentally disagreeing with. Facial recognition technology is one way of achieving that. If the venue just had all security guards carry a list of the lawyers' photographs, instead of using automated technology, would people be fine with the venue barring access?
Essentially it seems to me like people are disagreeing with the fact that businesses can choose not to do business with unprotected classes. If so, that's fine, but I don't understand why people are couching this as an attack on facial recognition technology, which in this instance seems to actually be doing its job well.
The article ends with:
> Lawyers may not be the most sympathetic victims and their need to be entertained may not be the most compelling of causes. But their plight, Mr. Greenberg said, should raise alarms about how the use of this technology could spread. Businesses, for instance, might turn people away based on their political ideology, comments they’d made online or whom they work for.
But again...."Businesses, for instance, might turn people away based on their political ideology, comments they’d made online or whom they work for" with or without the use of facial recognition technology, so the fundamental issue appears to be that businesses _shouldn't_ be allowed to turn away unprotected classes?
I think people would have a problem with an army of security guards with pictures. It's just never come up before because it was cost prohibitive.
I think you're right in that people aren't upset with facial recognition per se, but are upset with its efficiency.
However the best way to counteract that at this time is to push for a ban on the technology instead of a ban on the consequences, because enumerating all possible consequences now and in the future is hard.
I agree with your position. The issue is whether unprotected classes are allowed to be turned away for reasons other than visibly disturbing the peace.
However, I think the difference between automated image recognition and security guards with clipboards is one of scale. It would simply be impossible to have images of lawyers from across 90 firms on clipboards and would take long enough to be infeasible for every single patron coming through the doors. Automation makes this possible and easy and thus it’s worth having the discussion.
It’s like Google with self driving cars. While they were technically on public roads, the breadth and scale of PII being captured and displayed changed what was acceptable. Hence Google choosing to instead deploy automated algorithms to blur faces.
Some people tend to change their minds about laws/rules when enforcement becomes automated and scalable. Speeding: Person A can say "Yea, speeding should be against the law" when one cop struggles to pull over one speeder out of 500 on the road every hour. Then, when you put up speeding cameras, Person A's chances of getting caught go from 1/500 to 1/1, and he suddenly changes his mind.
I agree with OP that this is the only reason we're even talking about facial recognition here. Most people are fine with a business kicking someone out that they don't want to do business with. Trust me, we would not want to live in a world where you can't kick someone out of your business. But when the kicking becomes scalable and automated, some people have second thoughts about being fine with this. It's interesting to see people double-check and backtrack their world views when the world becomes automated.
> Trust me, we would not want to live in a world where you can't kick someone out of your business
Can you elaborate on this please? It's not immediately obvious to me that's a bad thing (within reason). I'm not talking about removing all ability nor about all businesses. I'm specifically talking about a severely restricted ability - you can kick people out of a public venue if they're being disruptive, causing a public nuisance of some kind, or not following the publicly posted rules for the venue that you are clearly enforcing evenly. But other than that, you can't just evict people. You also obviously need to make it a crime to hire people to harass the people you dislike to provide cover under the pretense of kicking both parties out which is what will happen with such a policy.
If you narrow the criteria to things like "disruptive" and "nuisance" then you end up constantly defining and clarifying exactly what these things mean. Rules-lawyer-type people will come in and walk right up to wherever the line was established. The whole "nyaa nyaa, I'm not technically touching you!" type. You also might forget things. If you establish a list of attributes/behaviors you can ban over, then people will do the other things that you didn't list. "Your sign says no fighting, but I'm just yelling insanely." "Your sign says no being disruptive but I'm just quietly wearing my pointy white hood." Better to have a list of behaviors you can't ban over (the special, enumerated things in the law) and reserve the right to ban over anything else.
Then you have people who are not disruptive themselves, but you know their presence will invite others who are. The whole "How to not become a nazi bar" example[1].
EDIT: Rules-lawyer example: Do you really want this guy[2] in your store? "Ackshually, it's 8:59 and you close at 9:00!"
> If you narrow the criteria to things like "disruptive" and "nuisance" then you end up constantly defining and clarifying exactly what these things mean. Rules-lawyer-type people will come in and walk right up to wherever the line was established. The whole "nyaa nyaa, I'm not technically touching you!" type
I feel like you're really strawmanning here. I wasn't trying to lay out the exact parameters of the law. See a sibling response to you:
> To the best of my knowledge, this is the case in a big chunk of Europe, with a "for no reason" added on the end. You can't kick someone out of a business unless they are actively malicious. And no, working for a company that is suing an owner is not considered malicious.
Believe it or not, that's the whole point of the judge in the legal system: to act as the arbiter of whether you are acting in accordance with the spirit of the law or strictly trying to get as close to the line as possible. And if you show a pattern of acting in bad faith, you get slapped with extra penalties. The American legal system seems to generally prefer "nyaa nyaa, I'm not technically touching you!" interpretations of the law, but in reality this kind of interpretation only makes sense in a ridiculously small (if any) set of circumstances. In most others it's better to just let the judge interpret the evidence, figure out who's acting in bad faith (whether through summary judgment or a jury if it's really required), and rule accordingly.
As for the closing hours things, that's again a strawman. The business is free to close whenever as long as it's applying this uniformly (i.e. preventing all new public access, trying to get existing public to exit etc). That has nothing to do with ejecting specific individuals selectively.
> Trust me, we would not want to live in a world where you can't kick someone out of your business
To the best of my knowledge, this is the case in a big chunk of Europe, with a "for no reason" added on the end. You can't kick someone out of a business unless they are actively malicious. And no, working for a company that is suing an owner is not considered malicious.
> However, I think the difference between automated image recognition and security guards with clipboards is one of scale.
This is exactly what it is. There's a non-trivial cost associated with banning someone the "old fashioned" way and it creates a barrier to abuse. It's hard to maintain lists forever and it's hard to share lists because the more people on the list, the more it costs.
There are so many abuses that become viable once the process is low cost and highly scalable that I can't imagine it being left unchecked.
What counts as "abuse"? Besides those enumerated special reasons like ethnicity, sex, religion, disability, you can ban someone from your business for ANY reason or even no reason. If the business owner doesn't like green shirts, he can ban you for wearing a green shirt. Is this abuse? If the business owner doesn't like how you smell, he can ban you. Abuse? They can even make a mistake and ban the wrong person. Is that abuse?
I don't think it can be the same definition for every business. If it's something like a restaurant where the person can go to hundreds of other competitors, I think banning for any reason is ok. If it's large venue where there's only one, or if those restaurants band together to ban a person from the whole class of business, I don't 100% agree with it being unrestricted.
What if all the hospitals are private like in the US and they all share the same ban list? Should they be able to ban you for wearing a green shirt?
Many US hospitals are owned by government entities. Patient access to all hospitals including privately owned ones is controlled by the EMTALA law. That law doesn't apply to other types of businesses.
as another comment points out, that is not the case everywhere, and people citing such "dilemmas" often fail to consider whether that is a desirable thing at all.
why should you be able to kick out all patrons who wear green shirts? is that not against the spirit of offering a public accommodation?
even if your goal is to enforce a dress code, surely that can be done in a "content-neutral" manner similar to the "time, manner, place" regulations.
similarly to the "constitutional fetishism" in the US, where people dig in their heels defending really awful features and design goals of the system, people often really fail to stop and consider the difference between the way the system is and the way it ought to be. Being banned from public society en-masse by a privately-operated "social credit" system is pretty horrible even if it's perfectly legal under the current law!
but it wasn't feasible to do that en masse back in the 1700s, so nobody wrote a law preventing it, just like the fact that one cop could watch traffic in 1780 supposedly means it's legally fine to build a massive database recording everyone's movements in the 2020s...
> I think the difference between automated image recognition and security guards with clipboards is one of scale. It would simply be impossible to have images of lawyers from across 90 firms on clipboards and would take long enough to be infeasible for every single patron coming through the doors. Automation makes this possible and easy and thus it’s worth having the discussion.
Sure, I definitely see that. What I am having a hard time understanding is what, exactly, the people who are against the use of facial recognition in this case, but are not against businesses barring access to unprotected classes in general, are arguing here. That businesses should be allowed to bar as many people as they can manually?
Basically the position that "businesses shouldn't use tech to help them bar people, but should still be able to bar people" seems confusing to me, because then obviously bigger businesses like MSG can still do even manual barring 'at scale' (e.g. hire more security guards and distribute lists of barred individuals across them) that small businesses wouldn't be able to do, etc...
> Basically the position that "businesses shouldn't use tech to help them bar people, but should still be able to bar people" seems confusing to me.
It's a combination of things. The high scalability and lower cost make it much easier to ban people. Big companies are already demonstrating they're willing to ban people for unreasonable things and the continual reduction in competition between businesses means people that get banned have fewer alternatives, so being banned is a much bigger deal than it used to be.
Take it to an absurd extreme and imagine if you walk into a store where they have facial recognition identify you, look up your bank account balance, and decide you don't have enough money to be worth letting in. The potential profit is less than the average cost of serving someone from your demographic, so it makes business sense to refuse service.
The venue matters too. There probably wouldn't be a lot of complaining if a Rolex store discriminated against poor people, but what if grocery stores did it?
I don't want some algorithm giving me a pass/fail score that determines where I can go and what I can do, and it sure feels like that's the direction we're heading.
> then obviously bigger businesses like MSG can still do even manual barring 'at scale' (e.g. hire more security guards and distribute lists of barred individuals across them)
Are you sure about that? Image recognition isn't a horizontally scalable task if you're talking about guys with clipboards, and the larger that clipboard, the lower the accuracy gets. Think of it as having to match each incoming patron against a database. If each security guard has the entire database, they're doing a costly, time-intensive, and inaccurate search through the book (security guards are not only paid too little to care, but humans in general perform poorly at the task of "does a picture of this person exist among these other 200 images of banned people, taken in different lighting conditions, clothing, facial hair, etc."). And remember, as a business your goal is to let in everyone who shows up in a timely fashion; otherwise your business dies, people complain, and laws get changed. So to do this, you'd basically need close to a 1:1 ratio between people entering and security guards screening. That doesn't scale, even for larger businesses.
Now you could invert the problem: have one security guard take an image and distribute it to the other guards, each of whom combs through their own portion of the database and matches against that subset. Still, that's a heck of a lot of extra manpower that scales with the number of bans you hand out, so there's a natural backpressure for your business.
With automation, though, you can scale this for pennies, so you don't have to be very discriminating about who you're banning, which is a meaningful distinction. This isn't a bad PR approach. You probably can't muster enough political will to amend the laws to force venue owners to only ban people from public venues who are themselves being a public nuisance of some kind (which I think solves the problem a heck of a lot better than protected classes). But you can make it financially infeasible for them to do this by banning the new tech that lets them go after you.
What I meant was that each security guard has a certain subset of the database, so they're only responsible for scanning for those people. Say, instead of having 10 security guards each with a list of all 100 banned people, each security guard has a list of 10 people.
> Now you could invert the problem. You have the security guard take an image and distribute it to other guards who have their portion of their database that they're combing through and match against a subset of that. Still, that's a heck of a lot of extra man power that scales with the amount of bans you hand out and there's a natural backpressure for your business.
I think that's then an open problem of 1) how many people can one security guard reliably track, 2) how many people are on the ban list, 3) is it cheaper to hire that necessary number of security guards for each event than it is to invest in facial recognition tech? 4) How much cheaper or more expensive is it, exactly?
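To make the cost question concrete, here is a back-of-envelope sketch; every number in it (wages, list size per guard, per-event camera cost) is made up purely for illustration:

```python
# All figures are hypothetical placeholders, not real data.
ban_list_size = 100        # people on the ban list
faces_per_guard = 10       # faces one guard can reliably watch for
hourly_wage = 25           # dollars
shift_hours = 8

guards_needed = ban_list_size / faces_per_guard
manual_cost_per_event = guards_needed * hourly_wage * shift_hours   # $2,000

camera_cost_per_event = 500  # amortized hardware + software (a guess)

print(f"Manual screening: ~${manual_cost_per_event:,.0f} per event")
print(f"Facial recognition: ~${camera_cost_per_event:,.0f} per event")
# The point isn't the exact numbers: manual cost grows with the size of the
# ban list, while the automated cost barely moves.
```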
But my main question wasn't about the technical implementation, it was once again about what the actual argument is. Is it that businesses should be allowed to bar people as long as they don't use technology to do so? How would that account for different businesses having wildly different enforcement capabilities?
> Is it that businesses should be allowed to bar people as long as they don't use technology to do so? How would that account for different businesses having wildly different enforcement capabilities?
The issue is that the larger/cheaper the enforcement capabilities, the more likely frivolous bans are to be handed out.
Being able to implement small-scale bans is reasonable, including a "we reserve the right to refuse service for any reason" type clause (which is obviously overridden by any protected-groups legislation), but once implementing a permanent ban for large groups is a few clicks in a dashboard, there's an issue.
What happens when businesses create their own shared, oisd-style image recognition list of unwelcome individuals and people start getting blacklisted from everywhere?
Quantity has a quality all its own. When technology makes things possible that were impractical before it changes the underlying social bargain behind the rule.
For example anyone can sue anyone else, but mass filing of lawsuits by computer presents a novel problem.
Someone else mentioned speed limit enforcement (or really a lot of traffic laws generally). There's essentially a contract between people and the state whereby most people accept that there are traffic laws that they'll loosely follow and the state can enforce the laws but will almost universally ignore or miss minor infractions.
The situation would be quite a bit different in the US if automated tickets were sent out (or money were just deducted from a bank account) every time someone went 5 miles per hour over the speed limit.
> There's essentially a contract between people and the state whereby most people accept that there are traffic laws that they'll loosely follow and the state can enforce the laws but will almost universally ignore or miss minor infractions.
Someone should let small-town cops know about this contract. I've been ticketed for speeding 7, 5, and even 3 miles over the limit. I also once had my car searched for running a stop sign (at 2 in the morning, when, other than the cop car lurking in the parking lot, there were absolutely no cars anywhere). I was a teenager back then and didn't know my rights or have any resources to pursue legal recourse. But the point is that traffic laws are enforced extremely erratically, depending on where in the country you happen to live.
This was a paper in a journal titled Advances in Materials Science and Engineering, which doesn't sound like a social science journal, but it's hard to tell nowadays.
The question the editors should be asking their peer reviewers is: why were neither this nor any of the other glaring errors spotted during peer review, a process literally designed to catch things like this?
This is why I think that peer reviewers should be deanonymized post-review, to allow for accountability and open conflict-of-interest investigations.
It's obvious the bar plot didn't have error bars when it was submitted to the journal. One of the reviewers must have complained about the lack of error bars. So the authors thought "wtf are error bars?", googled, and added some T's...
Deanonymized peer review has no chance of working since identified reviewers would fear retaliation for rejecting someone's manuscript.