It's *worth it* when you're salaried? Compared to investing the money? Do you plan to land a very-high-paying executive role years down the line? Are you already extremely highly paid? Did Claude legitimately 10x your productivity?
I'm serious - the productivity boost I'm getting from using AI models is so significant that it's absolutely worth paying even $2k/month. It saves me a lot of time and lets me deliver new features much faster (making me look better to my employer) - both of which would justify spending a small fraction of my own money. I don't have to, because my employer pays for it, but as I said, if I had to, I would pay.
I am not paying this myself, but the place I work at is definitely paying around $2k a month for my Claude Code usage. I pay 2 x $200 for my personal projects.
I think personal subs are subsidized, while corporate ones definitely are not. I have CC for my personal projects running 16h a day with multiple instances, but work CC still racks up way higher bills with less usage. If I had to guess, my work CC uses about a quarter of the volume for 5x the cost, so at least a 20x per-unit difference.
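Spelling that guess out (all four numbers below are my rough estimates, nothing measured):

```python
# Back-of-the-envelope check of the claimed price gap.
# All figures are rough guesses from my own usage, not measured data.
personal_usage = 4.0   # relative volume: personal CC runs ~4x more
personal_cost = 1.0    # relative monthly cost
work_usage = 1.0
work_cost = 5.0

personal_unit_cost = personal_cost / personal_usage  # 0.25
work_unit_cost = work_cost / work_usage              # 5.0
print(work_unit_cost / personal_unit_cost)           # 20.0 -> "at least 20x"
```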
I am not going to say it has 10x'd my productivity or whatever, but I would never, ever have built everything I have now in that timeframe.
I don't know why you keep insisting that no one is making any money off this. Claude Code has made me outrageously more productive. Time = money, right?
I'm an employee, and my boss loves me because I deliver things he wants quickly and reliably - because I use AI tools. Guess who he will keep in the next round of layoffs?
the larger the trial size, the smaller the outcome
I find this a bit surprising. Could there be something else affecting the accuracy of larger trials? Perhaps they are not as careful, or cut corners somewhere?
Maybe. That could be the case. But even ignoring all confounding factors, this phenomenon can arise from sampling variation alone. That's one of the meanings of "the Law of Small Numbers": small samples produce extreme results far more often than large ones.
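A minimal simulation makes this concrete (the sample sizes, the 0.3 SD cutoff, and the 2,000 repetitions are arbitrary illustration choices, not numbers from any real trial):

```python
import random

# With NO true effect at all, small trials still show sizeable apparent
# effects far more often than large ones -- pure sampling noise.
random.seed(0)

def sample_mean(n):
    # One simulated trial: n patients, true effect is exactly 0.
    return sum(random.gauss(0, 1) for _ in range(n)) / n

for n in (10, 100, 1000):
    means = [sample_mean(n) for _ in range(2000)]
    big = sum(abs(m) >= 0.3 for m in means) / len(means)
    print(f"n={n:4d}: fraction of trials with |effect| >= 0.3 SD: {big:.3f}")

# Typical output: n=10 -> ~0.34, n=100 -> ~0.003, n=1000 -> ~0.000
```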
Sure, could be just lucky. But if there are several successful small studies, and several unsuccessful large ones (no idea if this is the case here), we should probably look for a better explanation.
It does not require more explanation: publication bias means null results aren't in the literature; do enough small, low-quality trials and you'll find a big effect sooner or later.
Then the supposed big effect attracts attention and, eventually, properly designed studies that show no effect.
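A hedged sketch of that mechanism (the trial size, trial count, and 1.96 cutoff are illustrative choices, not from the thread): simulate many small trials of a treatment whose true effect is zero, "publish" only the significant-looking ones, and look at what the literature would report.

```python
import random
import statistics

random.seed(1)
n = 20          # patients per small trial (illustrative)
trials = 10_000

published = []
for _ in range(trials):
    # True effect is exactly 0: outcomes are pure noise.
    outcomes = [random.gauss(0, 1) for _ in range(n)]
    mean = statistics.fmean(outcomes)
    se = statistics.stdev(outcomes) / n ** 0.5
    if mean / se > 1.96:  # looks like a significant positive result
        published.append(mean)

print("true effect: 0.0 SD (by construction)")
print(f"'published' trials: {len(published)} of {trials}")
print(f"mean published effect: {statistics.fmean(published):.2f} SD")

# Typically ~2-3% of trials clear the bar, each reporting an effect of
# roughly +0.5 SD -- a "big effect" that a large trial will not replicate.
```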
Just my hypothesis, but I wonder if larger sample sizes provide a more diverse population.
A study with 1,000 individuals is likely a poor representation of a species of 8.2 billion. I understand that studies try their best to use a diverse population, but I often question how successful many studies are at this endeavor.
If that's the case, we should question whether different homogeneous population groups respond differently to the substance under test. After all, we don't want to know the "average temperature of patients in a hospital", do we?
No, the other way around. It's the combination of two well-known effects. Well, three if you're uncharitable.
1. Small studies are more likely to give anomalous results by chance. If I pick three people at random, it's not that surprising if I happen to get three women. It would be a lot different if I sampled 1,000 people. (The sketch at the end of this comment puts numbers on that.)
2. Studies that show any positive result tend to get published, and ones that don't tend to get binned.
Put those together, and you see a lot of tiny studies with small positive results. When you do a proper study, the effect goes away. Exactly as you would expect.
The less charitable effect is "they made it up". It happens.
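To put numbers on point 1 (assuming a 50/50 population, which is an assumption for illustration, not data from anywhere):

```python
from math import comb

# Three random picks: an all-women sample is unremarkable.
p_three = 0.5 ** 3
print(f"P(3 of 3 women) = {p_three:.3f}")        # 0.125

# Even a much milder skew (60% women) in a sample of 1,000
# is essentially impossible by chance.
p_big = sum(comb(1000, k) for k in range(600, 1001)) / 2 ** 1000
print(f"P(>= 600 of 1000 women) = {p_big:.1e}")  # ~1e-10
```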