I have seen this written many times and can't shake the feeling myself: I feel more productive using LLMs, but I'm not sure I really am. I even feel quite overloaded right now with all the ideas I could pursue. In the past I also had many ideas, but they were quickly set aside with the understanding that there wasn't enough time for everything. Now it usually starts with prompting, and I fall into a rabbit hole. In the end, it feels like a lot of words have been exchanged but the results are nowhere to be found.
> ChatGPT, we have been optimizing production work for six weeks. <uploads some random documents that the management team has put on SharePoint, most of them generated by LLMs>. Finalize this optimization work.
> <ChatGPT spits out another document and claims that production work is now optimal>
I think the OP’s proposal is great but impossible to implement right away. Some steps have to be taken in that direction, though. First, eliminate the practice of borrowing against stock holdings as collateral to avoid capital gains tax. That alone would immediately reduce the number of new mega-yachts.
But the biggest boon for society would be progressive taxes on inheritance. It wouldn’t be the government’s problem to figure out how to make it work; it would be on the inheritor to figure out how to pay the taxes on their newly inherited wealth.
>I have an even simpler step one: increase the IRS budget significantly so that they actually have enough resources to go after the big guys.
>It's on a downward spiral consistently, and it was further cut by 9% this year.
Has it been on a downward spiral?
The Inflation Reduction Act of 2022[0] added $80 billion over ten years to the IRS budget[1] (a good step in the right direction, IMHO), funding which has since been withdrawn. Gee, I wonder why?
Fun fact: Elon stopped fighting his Twitter purchase around the same time the Inflation Reduction Act was signed. In fact, the purchase was finally completed a mere two months later.
Hm, I wonder why those two dates happen to be so close. It's truly a mystery.
I don’t get why reasonable claims like this get downvoted. Are billionaires downvoting them? Do so many other ambitious people expect to become billionaires at some point in their lives?
This comment is repeating a political slogan with no consideration of the content of the article.
Also, the slogan is a Marxist alternative theory of wealth and power, which conflicts with some basic premises of being interested in startups and is debunked in pg's writings.
> This comment is repeating a political slogan with no consideration of the content of the article.
Bezos was able to purchase one of the nation's most important news outlets because he is a billionaire. If his wealth were capped at $100M, he would have had to pool resources with many other ultra-wealthy individuals to effect the same purchase. These people would have competing interests, and would also themselves be open to being bought out because their individual ownership stakes in the company would be small. This would be good for the country, because one person being able to turn an important news outlet into his personal propaganda machine is bad, as the article describes.
Has Bezos made any decisions for you lately? Money tends to be a pretty weak form of power.
What we are seeing is Bezos trying to translate money into media power, which is real. But it doesn’t seem to have worked out that well. His paper is not that influential or distinctive. He is unable to wield much control over the journalists, likely because they operate within a tight culture that spans outlets.
Your instinct is that media is the actual, real power. Isn’t a few people operating the media scarier than a few people being able to consume a lot?
But then why is media powerful? Public opinion is directly useful for winning elections, but also indirectly, since public officials need to cater to the public.
I downvoted it because I think that forcibly taking away people's wealth (however much or little they have) is immoral in the extreme. However, I did just now vouch for it because the post isn't breaking any rules even if I do think it's a bad take.
Let’s not discount the means by which that wealth was acquired in the first place. One man’s wealth might be poverty for thousands. That’s why we have social welfare; it’s just not doing enough. At this point I would say a random store-floor cleaner is more valuable to society than someone like Jeff Bezos.
Peter Thiel thinks that he has the upper hand and will outsmart everyone to stay on top. The problem with chaos is that it’s very difficult to control, so there’s a good chance he won’t, and different actors will come out on top.
Depends on what counts as a ‘generation’ for LLMs. It would be weird to build a model that is a generation behind. My guess is that, like all models, it will be considered the best until the novelty factor wears off, and then it will be more or less the same as every other modern LLM: better in some domains, worse in others.
Edit: and it will probably also lead in most major benchmarks, which says next to nothing about actual quality.
Hindsight is 20/20. Bitcoin being a store of value has been the talking point for a very long time, ever since other blockchains overtook it in terms of functionality. People’s memories are short, so I am sure it will be touted as such again in a couple of years.
The point of Moltbook is that the human owners of OpenClaw agents installed skills to participate in Moltbook, not that the bots decided to do that on their own. There’s no denying that it’s stupid and fake, though.
To me it looks like some of the more “interesting” posts are created by humans. It’s a pointless experiment; I don’t understand why anyone would find it interesting what statistical models randomly write in response to other random writings.
I think the level at which someone is impressed by AI chatbot conversation may be correlated with their real-world conversational experience and skills. If you don’t really talk to real people much (a sadly common occurrence), then an LLM can seem very impressive and deep.
I never considered this aspect at all. To me it feels more like some people find it really fascinating that we finally live in the future. I think so too, just with a lot of reservations, while being fully aware that the genie has been let out of the bottle. Some other people are like me. And the rest want no part of this.
However, personal views aside and looking at it purely technically, it’s just mindless token soup. That’s why I find it weird that even deeply technical people like Andrej Karpathy (there was a post made by him somewhere today) find it fascinating.
A human, not a statistical model. I can insert any random words of my own volition if I want to, not because I have been pre-programmed (pre-trained) to output tokens based on a limited (tiny) 200k context for one particular conversation, forgetting all of it by the time a new session starts.
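To make the fixed-context point concrete, here is a toy sketch of how chat systems typically have to fit a conversation into a bounded window; the word-count "tokenizer" and the drop-oldest policy are simplifying assumptions for illustration, not any vendor's actual implementation:

```python
def build_prompt(history, new_message, max_tokens=200_000):
    """Fit a conversation into a fixed context budget.

    Uses word count as a crude stand-in for real tokenization.
    """
    msgs = history + [new_message]
    # Drop the oldest messages until the budget fits; anything
    # dropped is simply gone -- the model never sees it again,
    # which is the "forgetting" described above.
    while sum(len(m.split()) for m in msgs) > max_tokens and len(msgs) > 1:
        msgs.pop(0)
    return msgs
```

With a tiny budget you can see the oldest message silently vanish: `build_prompt(["a b c", "d e"], "f", max_tokens=4)` keeps only the last two messages.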
That’s why AI models, as they currently are, won’t ever be able to come up with anything even remotely novel.
Neural networks are an extremely loose and simplified approximation of how actual biological neural pathways work, simplified to the point that there’s basically nothing left in common.
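The degree of simplification is easy to show: a single artificial "neuron" is nothing but a weighted sum passed through a nonlinearity, a few lines of code, whereas a biological neuron involves spiking dynamics, neurotransmitters, and timing. A minimal sketch (sigmoid activation chosen for illustration):

```python
import math

def artificial_neuron(inputs, weights, bias):
    # The entire "neuron": a dot product plus a bias,
    # squashed through a sigmoid activation function.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Three inputs, arbitrary fixed weights.
out = artificial_neuron([1.0, 0.5, -0.2], [0.4, -0.1, 0.7], 0.05)
```

That weighted-sum-and-squash unit, stacked in layers, is essentially all that "neural" refers to in modern networks.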