HumanOstrich's comments

I used this for a while. What I don't like is that it updates its database by creating an entirely new copy and then deleting/renaming. For me that meant a several-hundred-MB file was being unnecessarily rewritten on a regular basis. It's a rather excessive waste of resources and not a polite thing to do when a lot of people have SSDs now.

I uninstalled it for that reason.


If you set `no_db=1` in `Everything.ini` you can configure it to be memory-only: https://www.voidtools.com/forum/viewtopic.php?t=9994#:~:text...

I think re-indexing all my drives every time it runs is even worse.

I think you could uncheck “Indexes” → “NTFS” (or ReFS, or FAT, or whatever else) → “Monitor changes” to disable that, leaving you to press the “Force Rebuild” button at whatever cadence you like.

Or, in `Everything.ini` terms:

  allow_force_rebuild=1
  home_update_indexes=1 -- ‘use the monitor_pause and monitor_stop states’
  monitor_stop=1
  home_update_indexed_properties=1 -- ‘use the indexed_property_pause state’
  indexed_property_pause=1
  read_directory_changes=0


Also I just realized you can get a better middle ground between the default daily DB update and RAM-only mode:

   db_auto_save_type=1  -- (From daily to interval mode)
   db_auto_save_interval=<milliseconds>
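
(For example, `db_auto_save_interval=3600000` would save roughly once an hour, assuming the interval really is in milliseconds as indicated above.)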

and btw sorry I'm not trying to convince you to like Everything; was just curious to figure out if/how it could be done :)

That's a different problem and not really relevant to OpenClaw. Also, your issue is primarily a skills issue (your skills) if you're using one of the latest models on Claude Code or Codex.

You have to be joking. I tried Codex for several hours and it has to be one of the worst models I’ve seen. It was extremely fast at spitting out the worst broken code possible. Claude is fine, but what they said is completely correct. At a certain point, no matter what model you use, LLMs cannot write good working code. This usually occurs after they’ve written thousands of lines of relatively decent code. Then the project gets large enough that if they touch one thing they break ten others.

I beg to differ, and so do a lot of other people. But if you're locked into this mindset, I can't help you.

Also, Codex isn't a model, so you don't even understand the basics.

And you spent "several hours" on it? I wish I could pick up useful skills by flailing around for a few hours. You'll need to put more effort into learning how to use CLI agents effectively.

Start with understanding what Codex is, what models it has available, and which one is the most recent and most capable for your usage.


Well, I will not be berated by an ostrich!

Huge models? First you have to spend $5k-$10k or more on hardware. Maybe $3k for something extremely slow (<1 tok/sec) that is disk-bound. So that's not a great deal over batch API pricing for a long, long time.

Also you still wouldn't be able to run "huge" models at a decent quantization and token speed. Kimi K2.5 (1T params) with a very aggressive quantization level might run on one Mac Studio with 512GB RAM at a few tokens per second.

To run Kimi K2.5 at an acceptable quantization and speed, you'd need to spend $15k+ on 2 Mac Studios with 512GB RAM and cluster them. Then you'll maybe get 10-15 tok/sec.
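
For a rough sanity check on the memory side, here's a back-of-envelope sketch (weights only, assuming ~1T parameters and ignoring KV cache and runtime overhead; the quantization levels are just illustrative):

  # Approximate weight memory for a ~1T-parameter model at different
  # quantization levels (weights only; KV cache/overhead not included).
  params = 1_000_000_000_000
  for name, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5), ("~3-bit", 0.375)]:
      gb = params * bytes_per_param / 1e9
      print(f"{name}: ~{gb:,.0f} GB")
  # fp16: ~2,000 GB; 8-bit: ~1,000 GB; 4-bit: ~500 GB; ~3-bit: ~375 GB
  # i.e. a single 512GB box only fits a fairly aggressive quant.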


How much extra power do you think you would need to run an LLM on a CPU (that will fit in RAM and be useful still)? I have a beefy CPU and if I ran it 24/7 for a month it would only cost about $30 in electricity.
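
For context on that number: it pencils out if you assume something like a steady ~300 W draw and ~$0.14/kWh, both of which are assumptions for illustration rather than anything stated above:

  # Rough electricity cost of running a box flat-out for a month.
  # 300 W and $0.14/kWh are illustrative assumptions.
  watts = 300
  hours = 24 * 30
  kwh = watts * hours / 1000              # 216 kWh
  print(f"~${kwh * 0.14:.0f}/month")      # ~$30/month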


Edit ~/.claude/settings.json and add "effortLevel": "medium". Alternatively, you can put it in .claude/settings.json in a project if you want to try it out first.
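
For example, a minimal settings.json would look like the following, with the key at the top level as described above (merge it into your existing file rather than replacing it):

  {
    "effortLevel": "medium"
  }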

They recommend this in the announcement[1], but the way they suggest doing it is via a bogus /effort command that doesn't exist. See [2] for full details about thinking effort. It also recommends a bogus way to change effort by using the arrow keys when selecting a model, so don't use that either.

[1]: https://www.anthropic.com/news/claude-opus-4-6

[2]: https://code.claude.com/docs/en/model-config#adjust-effort-l...


Pathetic how they have no support for modifying sampling settings, or even a "logit_bias" so I can ban my Claude from using the em dash (and regular dash), semicolons, or "not". I'd also upweight things like exclamation points.
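
For what it's worth, here's roughly what that knob looks like on an API that does expose it; a sketch against OpenAI's chat completions endpoint (model and prompt are placeholders), since Anthropic's API has no equivalent:

  # Token banning/upweighting via logit_bias on an API that supports it.
  import tiktoken
  from openai import OpenAI

  enc = tiktoken.encoding_for_model("gpt-4o")

  bias = {}
  for text in ["\u2014", "-", ";", " not"]:   # em dash, hyphen, semicolon, " not"
      for tok in enc.encode(text):
          bias[str(tok)] = -100               # -100 effectively bans the token
  bias[str(enc.encode("!")[0])] = 5           # mild upweight for "!"

  client = OpenAI()
  resp = client.chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "user", "content": "Rewrite this paragraph plainly."}],
      logit_bias=bias,
  )
  print(resp.choices[0].message.content)
  # Caveat: this only hits those exact token IDs; merged tokens that
  # contain the same characters can still slip through.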

Clearly those whose job it is to "monitor" folks use this as their "tell" for whether someone AI-generated something. That's why every major LLM has this particular slop profile. It's infuriating.

I wrote a long-winded rant about this bullshit:

https://gist.github.com/Hellisotherpeople/71ba712f9f899adcb0...


You can do it via /model and pressing left and right though

That's not a thing, at least not in my installation of Claude Code.

It works for me! (Edited link since original had laptop's serial number in it: https://screen.studio/share/3CEvdyji)

Claude Code v2.1.37

EU region, Claude Max 20x plan

Mac -- Tahoe 26.2


Good to know it works for some people! I think it's another issue where they focus too much on MacOS and neglect Windows and Linux releases. I use WSL for Claude Code since the Windows release is far worse and currently unusable due to several neglected issues.

Hoping to see several missing features land in the Linux release soon.

I'm also feeling weak and the pull of getting a Mac is stronger. But I also really don't like the neglect around being cross-platform. It's "cross-platform" except a bunch of crap doesn't work outside MacOS. This applies to Claude Code, Claude Desktop (MacOS and Windows only - no Linux or WSL support), Claude Cowork (MacOS only). OpenAI does the same crap - the new Codex desktop app is MacOS only. And now I'm ranting.


What version are you on? Did you run a Claude update?

I'm on v2.1.37 and I have it set to auto-update, which it does. I also tend to run `claude update` when I see a new release thread on Twitter, and usually it has already updated itself.

What? Their documentation is hallucinated?

Yep, and their documentation AI assistant will egregiously hallucinate whatever it thinks you want to hear, then repeat itself in a loop when you tell it that it's wrong.

Yesterday I asked a question about a Claude Code setting inside Claude Code, don't recall which, and their builtin documentation skill—something like that—ended up doing a web search and found a wrong answer on a third party site. Later I went to their documentation site and it was right there in the docs. Wonder why they can't bundle an AI-friendly version of their own docs (can't be more than a few hundred KBs compressed?) inside their 174MB executable.

It's insane that they concluded the builtin introspection skill for Claude documentation should do a web search instead of simply packing the correct documentation in local files. I had the same experience as you, wasting tokens and my time because their architecture decision doesn't work in practice.

I have to google the correct Anthropic documentation and pass the link to Claude Code myself, because Claude can't reliably find it on its own to learn how to use its own features.
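
A minimal sketch of the alternative being described here: ship a snapshot of the docs alongside the CLI and have the documentation skill search that instead of the web. The bundled_docs/ path and layout are hypothetical:

  # Hypothetical local-docs lookup: grep a bundled markdown snapshot
  # instead of doing a web search. The bundled_docs/ layout is made up.
  from pathlib import Path

  DOCS_DIR = Path(__file__).parent / "bundled_docs"

  def search_docs(query: str, context: int = 2) -> list[str]:
      """Return matching lines (plus a little context) from the bundled docs."""
      hits = []
      for md in DOCS_DIR.rglob("*.md"):
          lines = md.read_text(encoding="utf-8", errors="ignore").splitlines()
          for i, line in enumerate(lines):
              if query.lower() in line.lower():
                  lo, hi = max(0, i - context), i + context + 1
                  hits.append(f"{md.name}:{i + 1}\n" + "\n".join(lines[lo:hi]))
      return hits

  for hit in search_docs("effortLevel"):
      print(hit, end="\n\n")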


Also if they bundled the documentation for the version you're running it would have fewer problems due to version differences (like stable vs latest).

They used to? I have a distinct memory of it doing exactly that a few months ago. Maybe it got dropped in the mad dash that passes for CC sprint cycles.

If we want to make it extremely complex, wasteful, and unusable for 99% of people, then sure, put it on the blockchain. Then we can write tooling and agents in Rust with sandboxes created via Nix to have LLMs maintain the web of trust by writing Haskell and OCaml.

Well done, you managed to tie Rust, Nix, Haskell and OCaml to "extremely complex, wasteful, and unusable"

Boring Java dev here. Do I just sit this one out?

Zig can fix this, I'm sure.

zig can fix everything

This is irrelevant to the article and discussions here. Weird copypasta bullet points too.

Bluesky also randomly bans new accounts saying they violated the ToS. Like right after signup before you do anything. It says you'll receive an email with details (never happens) and offers a form to appeal. The form goes nowhere and you never hear anything again. This happened to me a couple months ago so it's probably still an issue. It seems more like sloppy, careless engineering than malice, however.


Happened to me a few weeks ago. I replied/filled out the form, and after a day it was unlocked. Seems to be very hit and miss, maybe depending on who is seeing your replies? Regardless, definitely a sucky issue...


This happened to me and I made a new account, which isn't banned yet but it could be any day now if they detect "ban evasion". Why I don't trust centralised systems.


You can tell it not to do that and it will show inline diffs.

