That isn't true. I signed up for a fresh account for a project I was working on. Despite following no one and not having interacted with anything, all I was pushed was content from racists, bigots, and political extremists.
While this is an interesting data point, the main thing it tells us is that when the algorithm has no information about your preferences, it skews racist.
This might be because, absent other information, the algorithm defaults to the "average" user's preferences.
Or it might be evidence of intentional bias in the algorithm.
The next piece of data we need is this: if we take a new account and interact only with non-Nazi accounts and content (e.g. EFF, Cory Doctorow, Human Rights Watch, Amnesty, AOC/Obama/Clinton, etc.), does the feed fill up with non-racist content, or is the racist content still pushed?
Or you can just leave the platform. We don’t always need to interrogate the exact reasons why something happens; we can just see it, document it, and go elsewhere.
Even if you believe that Musk and team don’t “touch the scales” of the algorithm, the inevitable consequence of deciding to prioritize replies from people willing to pay for blue checks is to discourage users outside that segment from engaging at all levels.
The resulting shift in attention data naturally propagates into the algorithm's input, weighting it away from “what does an average user pay attention to” and toward “what does a paying user pay attention to.”
Setting morality aside, this is a self-consistent, if IMO short-sighted, business goal. What it is not is a way to create a fair and impartial “mirror” as you have described.
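To make that feedback loop concrete, here's a minimal toy sketch. This has nothing to do with X's actual ranking code; the boost multiplier, view counts, and all names are made up for illustration. It just shows how a fixed boost for paid replies skews the engagement log that a ranker would later learn from:

```python
# Hypothetical toy model: a fixed boost for paid replies shifts which
# replies collect views, so the logged engagement data over-represents
# paying users. None of this is X's real code or real numbers.
from dataclasses import dataclass

@dataclass
class Reply:
    author: str
    is_paying: bool
    base_quality: float  # stand-in for organic appeal

PAID_BOOST = 2.0  # hypothetical multiplier for blue-check replies

def rank(replies):
    # Paid replies are surfaced first regardless of organic appeal.
    return sorted(
        replies,
        key=lambda r: r.base_quality * (PAID_BOOST if r.is_paying else 1.0),
        reverse=True,
    )

def simulate_attention(ranked, views_per_slot=(100, 50, 25)):
    # Higher slots get more views, so boosted replies collect more
    # engagement events in the log.
    return [(r.author, r.is_paying, views) for r, views in zip(ranked, views_per_slot)]

replies = [
    Reply("organic_fave", is_paying=False, base_quality=0.9),
    Reply("paying_user", is_paying=True, base_quality=0.5),
    Reply("casual_user", is_paying=False, base_quality=0.6),
]
for author, paying, views in simulate_attention(rank(replies)):
    print(f"{author:13} paying={paying!s:5} views={views}")
```

Run it and the paying user collects the most views despite the lowest organic appeal, which is exactly the "what does a paying user pay attention to" skew described above.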
I created an account, picked "pets" as my interest. I was suggested several pet-related accounts to follow, and followed none.
I went to the home page, and "for you" was populated roughly 80% by known right-wing accounts and angry right-flavored screeds from people I didn't recognize.
The other 20% was just a smattering of random, normal stuff. None of it about pets.
I think it's good advice; the main difference is that Bsky encourages you to do that by letting you customize your feeds (and set whatever you like as the default). You can have a combination of personal lists and custom algorithmic feeds (your own or someone else's).
Even ignoring Musk's takeover, I think it's a better model, one that reduces doomscrolling, ragebait, and generally low-quality interactions.
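For anyone who hasn't tried it, the mental model is roughly this (a hypothetical sketch, not the real atproto API): a feed is just a pluggable ranking function over the post pool, and the user, not the platform, picks which one the app opens on.

```python
# Toy sketch of the Bluesky feed model (not the real atproto API):
# a "feed" is a function from a post pool to an ordered list, and
# the client lets you choose which feed is your default.
from typing import Callable, NamedTuple

class Post(NamedTuple):
    author: str
    text: str
    likes: int

Feed = Callable[[list[Post]], list[Post]]

def personal_list(allowed_authors: set[str]) -> Feed:
    # A list feed: only posts from accounts you chose.
    return lambda posts: [p for p in posts if p.author in allowed_authors]

def popular() -> Feed:
    # A simple algorithmic feed: rank everything by likes.
    return lambda posts: sorted(posts, key=lambda p: p.likes, reverse=True)

feeds: dict[str, Feed] = {
    "Following": personal_list({"eff.org", "pluralistic.net"}),
    "Popular": popular(),
}
default = "Following"  # the user picks the default, not the platform

pool = [
    Post("eff.org", "New privacy report", likes=40),
    Post("rage-bait", "You won't BELIEVE this", likes=900),
    Post("pluralistic.net", "Enshittification, part 12", likes=120),
]
for post in feeds[default](pool):
    print(post.author, ":", post.text)
```

With "Following" as the default, the rage-bait post never appears no matter how many likes it has; switching the default to "Popular" is an explicit choice rather than something done to you.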
If I visit a buffet looking for a healthy snack, but 90% of the dishes are fast food, then I'll probably spend a lot of time looking through the fast food, and may even eat some as the best worst option.
Similarly, I have found the overall content pool to have significantly worsened since Musk's takeover. The algorithm keeps serving me trash. It doesn't mean I want trash.
You can take your analogy further. The buffet notices you pausing on the unhealthy food and begins replacing all the healthy options with unhealthy ones. People shame you for your criticisms and note that you could easily put on blinders and intentionally look longer at the healthy options any time you accidentally glance at an unhealthy one; the alternative would be an absolute repression of free speech, after all.
A whole lot of machine learning practitioners use X. Makes it difficult to avoid if you're interested in the news. It's definitely a network effect issue.
But you can also follow people and read only what they write, reply to them, and write yourself.