On page 13 you'll see _why_ the judges don't apply the letter of the law - they're seeking to do justice to the victims _in spite of_ the law.
"there is another possible explanation: the human judges seek to do justice. The materials include a gruesome description of the injuries the plaintiff sustained in the automobile accident. The court in the earlier proceeding found that she was entitled to [details] a total of $750,000.10. It then noted that she would be entitled to that full amount under Nebraska law but only $250,000 under Kansas law." So the judge's decision "reflects a moral view that victims should be fully compensated ... This bias is reflected in Klerman and Spamann’s data: only 31% of judges applied the cap (i.e., chose Kansas law), compared to the expected 46% if judges were purely following the law." "By contrast, GPT applied the cap precisely"
Far from making the case for AI as a judge, this paper highlights what happens when AI systematically applies (often harsh) laws vs the empathy of experienced human judgement.
So many “AI is going to replace expert ______” assertions come from computer scientists not realizing how little they understand the real world requirements of those roles. Judges are at the intersection of humanity and policy: they are there to use their judgement, not merely parse the words and do the math. A judge probably wouldn’t have even done that part — their clerk would have. Is it cool and likely useful? Sure. Is it going to ‘outperform judges’ at their core competencies? Hell no.
As damning as these comments are, this comment kinda scared me because it reminds me of the times when judges decide against applying empathy toward society's most marginalized.
Hopefully as these models get better, we get to a place where judges are pressured to apply empathy more justly.
In addition to what the other person who replied said, ignoring that iOS/Android/iPadOS is far more secure than macOS, laptops have significantly fewer hardware-based protections than Pixel/Samsung/Apple mobile devices do. So really the only way a laptop in this situation would be truly secure from LEO is if it's fully powered off when it's seized.
My assumption is that the key in the desktop version is not always stored in the secure enclave (it definitely supports plaintext storage). Theoretically this makes it possible to extract the key for the message database, and a different malicious program could also read it. But this is moot anyway if the FBI can browse through the chats - that isn't what failed here.
Also last time I looked (less than 1 year ago) files sent over Signal are stored in plaintext, just with obfuscated filenames. So even without access to Signal it's easy to see what message attachments a person has received, and copy any interesting ones.
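To make the key-extraction point concrete: Signal Desktop has historically kept the SQLCipher key for its message database in a plaintext JSON config file readable by any process running as the same user. A minimal sketch of how trivial that makes extraction - the config path and the `"key"` field name are as historically observed and may differ in newer versions, which can wrap the key with OS keychain encryption; the demo uses a mock config file, not a real Signal install:

```python
import json
import pathlib
import tempfile

def read_db_key(config_path: pathlib.Path) -> str:
    """Return the database key stored in a Signal-Desktop-style config file.

    Historically this lived at e.g. ~/.config/Signal/config.json on Linux,
    under the top-level "key" field (a hex-encoded SQLCipher key).
    """
    with open(config_path) as f:
        return json.load(f)["key"]

# Demo with a mock config file standing in for the real one:
with tempfile.TemporaryDirectory() as d:
    cfg = pathlib.Path(d) / "config.json"
    cfg.write_text(json.dumps({"key": "deadbeef" * 8}))  # 64 hex chars
    key = read_db_key(cfg)
    print(len(key))  # 64 -> a 32-byte SQLCipher key
```

With that key, anyone with file access can open the `db.sqlite` message database via SQLCipher's `PRAGMA key` - no exploit needed, which is the point the comment is making.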
I live in MA and wish that this were true, but do you have data / evidence to support that it rarely happens?
Also, I don't know if you have tried to get your $10, but the sign isn't always obvious, and every time I've tried, the cashier doesn't just say "oops" and give you the thing for free - they call a manager, the manager argues with you, other customers complain about the checkout delay you've created... there's social pressure there, so I can understand why customers would not do this even when they can.
I've done this many times, and it usually takes about 5min (which sounds quick, but standing at a register it isn't). There is social pressure, but it's even stronger on the store than it is on you as a customer.
You effectively need or greatly benefit from gas, water, electricity and an ISP.
What do you really get out of social media? I mean other than most of you getting crippling anxieties about things that aren’t even real, of course.
Sure sure, I know, everyone wants it because they need to share photos of the kiddos with grandma out of country. No one needs it because they enjoy the shallow bullshit and dopamine and snarky retorts that enforce their ideology.
Social media is relied on by a lot of people for official notifications. When I was in high school, my only use for Twitter was checking if my school was closed or not on snow days. I'm sure there are lots of valid reasons for schools, hospitals, emergency services, garbage collection, official media networks etc. to have social media accounts, and for regular people to follow them.
I've always thought it would be a good idea for governments to run their own mastodon servers for this, but something else with accounts (not publicly) tied to real identities could be interesting.
> my only use for Twitter was checking if my school was closed or not on snow days
Believe it or not, they post that on their own websites.
We had to turn on the TV and watch the marquee they would add to all shows. If you missed your school you had to go to another channel to see where in the alphabet they were.
You have not made a convincing argument. Social media has specifically moved away from synchronous time-prioritized posting in favor of algorithm engagement. So I can’t accept “notifications” as a legitimate use.
Social media is a pretty wide term, ranging from networks where you mostly talk to your friends and relatives to networks where you mostly consume content from strangers.
The former has a clear benefit (especially where it challenges legacy industries with exploitative pricing like mobile phone networks) and even the latter can benefit you by exposing you to new ideas and information.
That social media is incentivized to push meaningless but addictive fluff over genuine communication due to monetary incentives is the point of TFA. This is a reason for making social media a public utility, not against it.
The friction of changing bank accounts is high, and few people choose their bank accounts based on how easy the online authentication is. Unless a bank does this meaningfully worse than their competitors (a low bar), they have little incentive to fix it.
If you think TD is bad, try some European countries where there's only a handful of banks...
According to https://2fa.directory/us/#banking there are 3 banks in the US that support hardware 2FA (without limitations like requiring a Symantec token or only being available to "high risk" clients): BofA, Morgan Stanley, and Mercury.
Of these three, Mercury isn't really a bank, it's a non-bank financial institution (and as the bankruptcy of Synapse shows, putting your money into these services can be risky), Morgan Stanley has zero locations within a 1 hour drive (important for when I need cashiers checks or need to deposit checks that mobile apps can't handle), and BofA's interest rates are laughable.
There's no FDIC-insured bank which has decent savings accounts, physical branches near me, and supports proper hardware 2FA. The best I can get is savings, location, and (the bank's app-based) software 2FA.
There truly is no incentive for the banks to improve, and I don't think anything will change unless Congress forces their hands (which seems unlikely, given that the average person has never suffered an SMS 2FA-based attack on their finances and thus has no reason to write to Congress about it).
This is so informative, thank you. I always got my kids baby food in glass, thinking it would reduce their microplastics exposure as well as reducing plastic waste. Turns out only one of those was true :(