I would think you should admire her for taking a strong moral stand at the expense of the business. She added a number of grievance-oriented features to reduce "bigotry", "xenophobia", etc; thereby making the product less able to render reality; thereby hurting its value proposition; thereby losing users; thereby cratering the share price.
Criminality is congenital. Social interventions will not fix the kid. Neither for that matter will prison, but at least it will protect the rest of us from his increasingly violent depredations.
This is a categorically disproven view. Thankfully, it's no longer widely held, but unfortunately not before it was used to justify millions of cruel acts from eugenics to genocide.
To win at active trading you need an edge nobody else has. There have been cases of traders having insights not known to others. Pairs trading is the only example I know of. In that case the people who kept it secret made out like bandits for a few years; then the secret leaked, and it is no longer profitable.
The most common legal edge players have is scale: They are huge and nimble and can take advantage of opportunities at scale. It goes really well, until it does not, and another trading house collapses.
The most common edge I believe, after studying it for a decade, is crime. Big trading house, big crime
High frequency trading (measuring latency, inferring event ordering, etc). Since most crypto derivatives trading takes place at ap-northeast-1, it feels like AWS is orienting this release towards financial markets customers.
Question for any IQ skeptics here (e.g. "it just measures your ability to take tests" or "it just tells you how rich your parents are"): what's your response to studies like this? Is there anything that can be said about the effect of lead on cognitive function? Why might IQ be a good measure of lead-induced stupidification, but unreliable for literally anything else?
I think it's a pretty straightforward thing: intelligence somewhat correlates with life/career outcomes overall, and it's not linear. Separately, IQ tests are reasonably good, though imperfect measures of general intelligence. Also separately, if you look at careers where high intelligence is needed, then IQ correlates much better.
IQ does not principally measure test-taking abilities or SES. Yes, those correlations exist, but their effect sizes are not nearly as large as a certain political ideology would have you believe. And simultaneously, it's not as ironclad as the other political ideology would have you believe. It's very reliable as these things go, but noisy at the margin.
EDIT: a sibling comment correctly points out that aggregate effects do not always apply to individuals.
It's possible for IQ to be a good population measure while being a poor individual measure. (FWIW, I think proponents and opponents of IQ testing all overstate their cases.)
I assume the study authors controlled for such things. For example, they could bin the study participants by socioeconomic group, race, location, etc, etc, etc, and then show the average IQ loss per bin.
I haven't read the study, but statisticians do this stuff for a living, and there are definitely ways to control for sample biases that let you distinguish between "poorer people have lower IQs and are exposed to more lead, and the root cause of both is being poor", and "lead leads to lower IQs within all socioeconomic groups we could think of and measure"
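The distinction above can be shown with a toy simulation (all effect sizes here are made up for illustration, not taken from the study): when SES drives both lead exposure and IQ, a naive regression of IQ on lead overstates the lead effect, while adding SES as a control recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ses = rng.normal(size=n)                    # socioeconomic status (the confounder)
lead = -0.8 * ses + rng.normal(size=n)      # poorer kids see more lead exposure
iq = 100 + 5 * ses - 2.0 * lead + rng.normal(scale=5, size=n)  # true lead slope: -2.0

# Naive regression of IQ on lead alone: biased, because SES drives both.
naive = np.polyfit(lead, iq, 1)[0]

# Multiple regression controlling for SES recovers the true -2.0 slope.
X = np.column_stack([lead, ses, np.ones(n)])
controlled = np.linalg.lstsq(X, iq, rcond=None)[0][0]

print(naive, controlled)   # naive is noticeably more negative than -2.0
```

The same logic extends to binning: running the lead regression separately within each SES bin is a coarser version of adding the confounder to the model.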
But these controls could be done for any IQ-related study. Is IQ-skepticism based on the belief that IQ researchers generally don't use controls? That lead-IQ researchers alone do this?
Most IQ skeptics think IQ correlates with intelligence. The argument is generally over how good a proxy it is for intelligence and what conclusions can and can't be drawn from statements about IQ
You have to read the study to find out what it meant by IQ. People freely confuse IQ (a score on a test that's calling itself an IQ test) with IQ (a platonic ideal statistic people assume exists for the purpose of publishing research about it). This could be either.
Sorry, is your criticism that IQ proponents conflate point estimates with unknown population parameters? Is that what other IQ critics see as the core of the disagreement in this debate?
It's more of a secondary criticism though, that people aren't very careful about reading research papers. And of course that in many fields the researchers aren't careful either (https://en.wikipedia.org/wiki/Credibility_revolution).
The primary criticism is that people want there to be something called IQ (or g, or intelligence) that is 1. a real physical variable that causes things 2. an unchangeable attribute of a person that 3. makes you better and more virtuous than people with less of it. This recently causes 4. the belief that if we invented an AI it'd have a lot more of it than us and would take over the world[0].
Whereas I think that:
1. the best reason to know it is to find working interventions to improve it, which they can't find because it isn't real, so they should find some real physical processes.
2. the other reason to know it is to predict someone's ability on a task, and in any such situation there is better evidence you could use for that. Although this one's kinda illegal anyway (https://en.wikipedia.org/wiki/Griggs_v._Duke_Power_Co.).
3. a superintelligent computer would not take over the world.
[0] The Lesswrong guy, the main advocate of this one, didn't graduate high school but would like to be seen as intelligent, which means it's convenient to believe that intelligence is so inherent you don't have to prove it by performing well at school.
You're claiming this study used a single test that could yield "the test result", but it didn't. It's a meta-analysis built on another meta-analysis, whose constituent studies all used different tests:
> We contacted investigators for all eight prospective lead cohorts that were initiated before 1995, and we were able to retrieve data sets and collaboration from seven. The participating sites were Boston (Bellinger et al. 1992); Cincinnati (Dietrich et al. 1993) and Cleveland, Ohio (Ernhart et al. 1989); Mexico City, Mexico (Schnaas et al. 2000); Port Pirie, Australia (Baghurst et al. 1992); Rochester, New York (Canfield et al. 2003); and Yugoslavia (Wasserman et al. 1997).
> We measured blood lead concentrations in 172 children at 6, 12, 18, 24, 36, 48, and 60 months of age and administered the Stanford–Binet Intelligence Scale at the ages of 3 and 5 years.
So they didn't all take the same test[0]. This supports my claim, which is that people who believe IQ research also believe that all test results produce the same valid statistic called IQ as long as the test calls itself an IQ test.
I actually can't think of a time I've seen an online IQ arguer claim that any given test result isn't an accurate representation of an IQ. They certainly also think it about things like SAT scores and old national-IQ studies where they gave the tests in second/third languages or just made up the numbers.
[0] if it was the same test, wouldn't the norming process used to convert raw scores to normally distributed ones be different in different years? Not sure how that part is done.
This is largely wrong. One could be a master at test-taking and not come close to a high score.
That said, familiarity with the test/item structure almost certainly helps, especially for folks with the potential to score high (see below).
> or "it just tells you how rich your parents are")
Hmm… family wealth and IQ may be correlated, but not perfectly so. There are plenty of low-IQ rich people and also plenty of high-IQ poor people.
> what's your response to studies like this?
Probably too many confounding variables. That said, this study is a publishable unit that can push one or more funded agendas, so here we are.
> Is there anything that can be said about the effect of lead on cognitive function?
While I know a bit about IQ, I don’t know much about the details of the relationship of IQ and lead.
> Why might IQ be a good measure of lead-induced stupidification
Maybe it’s not. See “funded agendas” comment above.
> but unreliable for literally anything else?
(the main reason I replied is below)
People really need to let go of this idea.
1. IQ measures reasoning ability, and it does so in a reasonably reliable way.
2. People put a lot of weight onto how IQ correlates with a bunch of other things, but these are not things that IQ tests are designed to measure. As such, these correlations may not be meaningful in some cases. So the “literally anything else” that IQ is allegedly not good for is almost entirely things that IQ tests are not designed to measure. I don’t think it’s prudent to disregard the test/measure because of misuse by some folks (typically within agendas).
3. People get very self-conscious about IQ scores. Let me help with that. IQ scores are a measure on a particular day that can vary from day to day for any one person. For any given test taker, they are trying to optimize what they score out of a theoretical max (i.e., their “true IQ”). Many, many things cause people to score lower than their potential max — lack of sleep, lack of food, external distractions, distress (physical, mental, emotional), anxiety, ambivalence, lack of test familiarity, etc. Very few things cause them to score higher than their max (it will almost certainly be within the confidence interval). It’s ok. Retake the test if it matters (it usually doesn’t).
4. IQ matters most in three areas, imho. The first is at the extremes. Gifted/genius folks and learning disabled folks need additional resources. How and whether this is implemented is highly debated. The second is in leadership positions. You want your leaders (e.g., in the military) to be within about 20 IQ points of those they lead. The idea is that > 20 IQ delta folks see the world in fundamentally different ways, so leading someone who views the world so differently is difficult and largely inefficient. The third is with one’s significant other. Same as above, it will be hard to be understood (if that’s your goal) by someone who is +/-20 IQ points away from you.
Dude, you are spewing out random things as if they are fact. Yet you lack an understanding of what IQ is.
IQ is an attempt to measure a general intelligence factor (g-factor). What happened is that researchers noticed that people who are good at some tests tend to also be good at other tests, even when they're from very different domains. E.g. if you are good at math, you also tend to be good with language. This led to the assumption that there is a general factor shared across all skills (the g-factor). So how good you are at math is a combination of your math-specific skills plus the g-factor. Same with other domains.
How do you extract the g-factor? You measure a large set of people across a cognitively challenging set of tests, and do a factor analysis (a statistical technique) to extract a linear g-factor. Each test has a "g-loading", which essentially captures what portion of performance on it is due to the general g-factor. For example, one of the tests with the highest g-loading is simply hearing a sequence of numbers and repeating them in reverse. That test has nothing to do with "reasoning skills". Yet for some reason you claim that IQ is designed to measure reasoning skills but not designed to measure "a bunch of other things".
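The extraction step described above can be sketched on synthetic data (the four tests and their loadings are invented for illustration; the first principal component of the correlation matrix stands in for a full factor analysis):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
g = rng.normal(size=n)                       # latent general factor
loadings = np.array([0.8, 0.7, 0.6, 0.5])    # assumed g-loadings of four tests

# Each test score = g-loading * g + test-specific noise (unit total variance).
scores = g[:, None] * loadings + rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)

corr = np.corrcoef(scores, rowvar=False)     # the "positive manifold":
                                             # every test correlates positively
eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
first = eigvecs[:, -1]                       # leading component ~ the g-factor
first = first * np.sign(first.sum())         # fix arbitrary sign

est = first * np.sqrt(eigvals[-1])           # rough estimate of the g-loadings
print(est)                                   # ordered like the true loadings
```

The recovered values track the assumed loadings' ordering, which is the sense in which one linear factor "explains" performance across very different tests.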
You also claim that IQ varies significantly day to day, but that has not been shown in studies. In fact, IQ measurements tend to be remarkably stable across the person's entire adult life.
Then you spewed out a bunch of unsubstantiated claims about the IQ difference between a leader and his team.
> Dude, you are spewing out random things as if they are fact. Yet you lack an understanding of what IQ is.
In my previous career, I did quite a bit of research on IQ. I’m pretty sure I have a decent understanding of what it is.
If you take out your straw men and overstatements of what I said, then I think you will be able to find research that supports everything I said above about IQ, approximately to the degree of confidence that I stated it.
> Let’s make it easy - please cite the research that shows that an IQ gap of 20+ leads to worse leadership results.
Iirc, Greatness: Who Makes History and Why cites some research on this very topic.
There is more to be found — I’m sure you can find it if you try yourself or ask a librarian at a good academic library.
I will also add that you have conveniently ignored the fact that I prefaced that specific section with “imho”. It’s my opinion, and I stated all of those comments as such because I don’t think that there is any unassailable research in this area. There probably won’t be due to the difficulty of structuring a good and replicable study regarding IQ and IQ deltas specifically.
While the overall research is not air tight, there is research that I have done (unfortunately proprietary) that indicates that the “20 IQ point difference” concept is directionally correct (“directionally” because we had to use IQ proxies). Implementing this in organizational restructuring led to consistent measurable improvements at the extremes (which was our focus).
Given your challenging tone and style of engagement, I’m guessing that you’re hellbent on flaming. I’m not interested. As such, I will leave you to your library and librarian to find research that supports the ideas I have stated (assuming you bother to look).
“Leadership and IQ delta” is a super interesting topic, but the current trends in psych research and psych funding unfortunately don’t really focus on these areas despite demand from outside of academia (it’s very political in an uninteresting way).
The reality is that it’s very difficult to come by any research showing that higher IQ leads to worse outcomes (which your delta hypothesis claims).
We also know that IQ correlates at over 0.95 for the same person taking the test on different days, so any claim of daily fluctuation is exaggerated except in outlier cases. Your claims paint a different picture.
Does it? Most of what I’ve seen amounts to assuming races have equal inherent IQ distributions; given that test results are unequal, the conclusion is then that tests must be inherently biased (examples of recent-immigrant non-English-speaking Jews improving their IQ scores as they learned English) or that differences are caused by socioeconomic factors. That logic would fall apart if the original assumption weren’t made, but anything starting with a prior that races can have different IQ distributions is thrown out as racist. The progressive book ‘The Genetic Lottery’ kind of makes the case for polygenic factors being evenly distributed amongst the races as a basis for that assumption, but in my view its logic has a number of holes. If there is a better treatment of the topic I’m genuinely interested in reading it.
Nit: CRT certainly offers an explanation, but I don't think it's a particularly good one. Because it always appeals to "systemic injustice", it can't account for things like
1. the persistence of between-group IQ differences amongst children of people who have relocated to other countries/cultures;
2. the disproportionate success of certain historically-marginalized groups;
3. regression to the mean; and,
4. other factors that marginally influence IQ scores (e.g. single-parent household vs dual-parent household)
Or people just haven't tried hard enough to find a politically correct explanation.
I think slavery itself could be one cause of it. Restricting the freedom to pick a partner of one group compared with other nearby groups seems likely to have an effect.
IQ is a mediocre-at-best metric for intelligence. "Intelligence" is probably real and variable among people, but poorly defined, very hard to test, and subject to a whole lot of opinion.
IQ is bad at comparing people from different backgrounds, especially across cultures, languages, etc.
But IQ can be a valid comparison for a single non-cultural variable. i.e. lead exposure in otherwise identical cohorts.
1. Many people who use PrEP have only one partner. They're using PrEP because their partner has HIV.
2. The alternative is letting people get AIDS. This is more expensive than PrEP.
3. Insurance doesn't pay the full list price of drugs, they pay a lower, negotiated rate. CostPlus Drug Company charges about $20/month for PrEP. This is probably closer to what insurance companies actually pay.
Other methods of protection fail. People lie about their status. People don't know their status.
Know what's ridiculous is that these drugs cost so much money and companies like Gilead can engage in "revenue maximization" schemes at the cost of the health of US citizens... and then apparently win lawsuits with their Big Pharma war chests.
The options aren't "shell out $30k/year" or "let people get AIDS." There's a missing third option: don't let drug companies charge so much money, especially when the government paid for the development of the drug!
I don't think that attending "100% bareback, no loads denied" orgies regularly is a good idea, yet a bunch of my queer friends do exactly that on a fairly regular basis.
I know it's not popular, but AIDS is just what we have now. If people continue living like this, there will be another, and it might be worse. Hell, we probably already dodged a bullet with Monkeypox due to concomitant COVID precautions.
Lol, no one was observing COVID precautions at that point. It was dodged because people got vaccinated for it and responsibly observed the spacing that was requested.
Hell, in the UK heterosexual cases of AIDS overtook homosexual cases because of adherence to medication and good practice. [1]
* average 1% asset management fee (modal number if you poke around)
* 0% investment carry (hedge funds get away with charging this, but most asset managers don't)
* 9% effective tax rate (revenues disproportionately go to high earners)
gets you $1.8B in lost tax revenue. The 0% carry assumption is very conservative, so the $1.8B is a lower bound. Even so, CA and NY collect about $400B annually. As a first-order matter, this doesn't move the needle very much.
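For anyone checking the arithmetic behind those bullets: working backwards from the $1.8B figure at a 1% fee and 9% effective tax rate implies roughly $2T of assets under management, and a small fraction of the two states' collections.

```python
fee = 0.01        # 1% average asset management fee (from the bullets above)
tax_rate = 0.09   # 9% effective tax rate on the fee revenue
lost_tax = 1.8e9  # claimed lower bound on lost tax revenue

implied_aum = lost_tax / (fee * tax_rate)   # AUM the estimate implies (~$2T)
share_of_budget = lost_tax / 400e9          # vs ~$400B annual CA+NY collections
print(f"${implied_aum / 1e12:.0f}T AUM, {share_of_budget:.2%} of collections")
```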
Sure. But how many companies are arriving? I'll admit I have no clue but if you're taking negative signals, ignoring positive and then wildly extrapolating future expectations... well, don't do that.
> The United States seeks fair consideration and back pay for asylees and refugees who were deterred or denied employment at SpaceX due to the alleged discrimination.
Does anybody know how such penalties are calculated? In "role-based damages," penalties would be limited to the total wages for the hiring reqs in question. This corresponds to the counterfactual where each of the roles would have gone to an asylee. But I could also see the DOJ arguing that every "deterred" asylee is owed foregone wages. This "applicant-based damages" doesn't correspond to any serious counterfactual, but it has the virtue of being a much bigger number.
> "We believe this results from factors that include the lack of Black faces in the algorithms' training data sets..." the researchers wrote in an op-ed for Scientific American.
> The research also demonstrated that Black people are overrepresented in databases of mugshots.
The sort of clear-headed thinking that makes the AI bias field as respected as it is.
The actual quote that the mention in the article refers to:
"Using diverse training sets can help reduce bias in FRT performance. Algorithms learn to compare images by training with a set of photos. Disproportionate representation of white males in training images produces skewed algorithms because Black people are overrepresented in mugshot databases and other image repositories commonly used by law enforcement. Consequently AI is more likely to mark Black faces as criminal, leading to the targeting and arresting of innocent Black people."
So they're saying that simultaneously the training set has too few black faces and the set being compared against has too many.
> Consequently AI is more likely to mark Black faces as criminal, leading to the targeting and arresting of innocent Black people.
I don’t see how this relates to simple facial recognition. It doesn’t appear that they’re scanning for “criminal physiognomies” but for specific facial matches.
Furthermore, it seems that this whole line of argumentation implies that facial recognition software may be mistaking innocent Black people for non-Black perpetrators, which I don’t see any evidence for. How does this increase arrest rates for Black people if AI just can’t tell them apart? In all likelihood, the person who got away is also Black.
It doesn't imply that it's matching black people to white perpetrators. The claim is that A) the model itself is worse at matching for black faces and B) the database being searched against is often disproportionately made up of black faces.
Give it a photo of a black person to search on and you're probably getting a black person as a match, but the likelihood that it's actually the same person is lower than it would be if you were searching for a white person.
The quote doesn't say it's increasing arrest rates for black people, but arrest rates for innocent black people. If you use facial recognition and it's 99% accurate for white people and 75% accurate for black people (numbers chosen arbitrarily), you're going to target a lot more black people incorrectly even if you're never incorrectly matching photos of white criminals to black people.
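Running the arbitrary numbers from that comment makes the asymmetry concrete: with equal search volumes and no cross-race mismatching at all, the accuracy gap alone multiplies the count of incorrectly targeted people.

```python
# Toy numbers from the comment above (chosen arbitrarily, not measured rates).
searches = 1000                     # hypothetical searches run per group
acc_white, acc_black = 0.99, 0.75   # assumed match accuracy per group

false_matches_white = searches * (1 - acc_white)   # 10 incorrect targets
false_matches_black = searches * (1 - acc_black)   # 250 incorrect targets
print(false_matches_white, false_matches_black)
```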
> It doesn't imply that it's matching black people to white perpetrators. The claim is that A) the model itself is worse at matching for black faces and B) the database being searched against is often disproportionately made up of black faces.
Right, I understand that in the context of this specific quote, but the article implies that claim.
> Give it a photo of a black person to search on and you're probably getting a black person as a match, but the likelihood that it's actually the same person is lower than it would be if you were searching for a white person.
Lower, but by how much? The only number we have here is a sample of six cases in all. It feels very premature to use "probably" in that sentence. (Edit: misread that as "you're probably going to get a match")
> The quote doesn't say it's increasing arrest rates for black people, but arrest rates for innocent black people.
I meant this quote from the article: “facial recognition leads police departments to arrest Black people at disproportionately high rates.”
But I agree. It seems that there is a disparity in accuracy; it's very unclear how large it is, but so far it appears we're talking about a fraction of a percent. We only have a sample size of six to draw on. We don't know the demographics of the districts this has been deployed in, and it seems strange to assume that they're the same as the American population at large. I mean, the first example is from Detroit.
The article posted to HN, in the section that started this thread (the part about more/fewer Black people in the data sets), quotes/paraphrases a Scientific American piece (where I got the quote with "innocent" in it in my comment), which itself is based on a paper in Government Information Quarterly.
The paper is what the article here links to when they say that facial recognition leads to disproportionate arrests of black people, the part you're mentioning now. That's a separate finding of the paper from the statements about possible reasons "why" that are based on the training and search sets.
The main thrust of the paper is actually those numbers: they find that black-white arrest disparity is higher in jurisdictions that use facial recognition.
"FRT deployment exerts opposite effects on the underlying race-specific arrest rates – a pattern observed across all arrest outcomes. LEAs using FRT had 55% (B = 1.55) significantly higher Black arrest rates and 22% lower White arrest rates (B = 0.78) than those not implementing this technology."
They do some stuff I'm not really qualified to opine on to try to control for the fact that obviously facial recognition adoption is also correlated to department size, budget, crime rate and things like that. Of course the usual caveats still apply, particularly that they're not claiming or attempting to show causation.
This doesn't rescue their claim. If the suggested class imbalance really exists in the training/test sets, the model will preferentially identify whites as criminals.
The claim is that the model is worse at telling black faces apart from each other.
The system is trained to match images of faces, not identify criminals; it's not comparing things to its training set to give a "criminality" score. The training data is just what has taught the system how to extract features to compare. You run an image of an unknown person against your database of known images, and look for a match so you can identify the unknown person.
If the model is just "worse at" black people, it's going to make more mistakes matching to them.
When this software is being sold to these departments, it's amazing that people in the chain don't seem to be talking enough about the training set used or performance on particular populations. If you are going to arrest or build a case on facial recognition, you would think that they would be prepared to defend its accuracy across a broad range of demographics. Embarrassing failures and mistaken arrests hurt their program, not to mention the money the city loses in lawsuits.
The answer to this conundrum might be that neither the departments nor the vendors are particularly interested in avoiding bias. Paying lip service is generally sufficient.
It makes sense to me? The algorithm specialises in distinguishing between the faces in its training set. It works by dimensionality reduction. If there aren't many black faces there it can just dedicate a few of its dimensions to "distinguishing black face features".
Then if you give it a task that only contains black faces, most of the dimensions will go unused.
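That intuition can be sketched with synthetic data (not real faces; the dimensions and group sizes are invented): if the two groups' distinguishing variation lives in different directions and one group dominates the pool, the learned low-dimensional representation preserves the majority's features far better.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two groups whose discriminative variation lives in disjoint subspaces:
# majority varies strongly in dims 0-24, minority in dims 25-49.
majority = rng.normal(size=(950, 50)) * np.r_[np.full(25, 3.0), np.full(25, 0.1)]
minority = rng.normal(size=(50, 50)) * np.r_[np.full(25, 0.1), np.full(25, 3.0)]

data = np.vstack([majority, minority])
mean = data.mean(axis=0)
# Top-10 principal directions of the pooled (mostly-majority) data.
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
components = vt[:10]

def recon_error(x):
    z = (x - mean) @ components.T    # project into the learned 10-dim subspace
    x_hat = z @ components + mean    # reconstruct from those dimensions
    return float(np.mean((x - x_hat) ** 2))

# The components are dominated by majority-group variation, so the
# minority group's distinguishing features are reconstructed much worse.
print(recon_error(majority), recon_error(minority))
```

Real face-recognition pipelines are far more complicated than a linear projection, but the imbalance mechanism the comment describes is the same: representation capacity goes where the training data is.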
Are black faces overrepresented or underrepresented? According to AI researchers, we're faced with Schrodinger's Mugshot--there's simultaneously too many and too few!
It's phrased accurately if confusingly. The bigger and un-fixable problem is that people are more apt to believe that a computer has calculated the correct answer, when by its very nature running often-bad images through a facial recognition search is almost always going to produce results, even if most are spurious and the real ID may not even be among them.
Without additional leads, police are strongly incentivized to pick one of the results and run with it, and in many cases that is enough to get a plea or conviction even if the person didn't do it, especially if the person selected was in the database in the first place because they have a record.
Convictions/pleas are obtained all the time with similar levels of proof.
This is fundamentally the same problem as dragnet searches of phone GPS to see who was in a space in a range of time. It could be a valuable investigative tool but its also a great way to "solve" a crime by finding someone to pin it on.
Because models are trained and validated on real data. Given a training set of crimes and corresponding surveillance footage, arrestee info is a (not noisy) label for “who is the guy in the movie.”
With a moment's thought, even the most emotive amongst us should see that the mugshots will be part of the training set--the photographed individuals are, after all, the class of true positives.
You train a model on a bunch of photos of white people, and a few photos of black people.
You then deploy that model, and use it to match a black person detained by racist officers against a database of photos that the police already have. In that database the majority of people are black.
Shitty AI that was not properly taught what black people look like, because most of the people in the training data were white, says that it found a probable match for the detained black person.
Racist officers do not attempt to second guess the computer, so they throw innocent black person into their car and drive off to the police station.
Come on, we know that there is variation, sometimes drastic, between populations on all different facets of life. But this one? No. It would be racist to even broach the subject. That’s why we know White people are to blame for it.