> In connection with using or accessing our Services you agree to comply with this User Agreement, our policies, our terms, and all applicable laws, rules, and regulations, and you will not...
> use any robot, spider, scraper, data mining tools, data gathering and extraction tools, or other automated means (including, without limitation, buy-for-me agents, LLM-driven bots, or any end-to-end flow that *attempts to place orders without human review*) to access our Services for any purpose, except with the prior express permission of eBay;
I'll then get Claude Code for the Web to go to that repo, unzip the zip, and read the documents. It will make a first pass at the entire codebase.
I'll merge that into main and create another Claude Code for the Web Opus session with any ideas I've had in the meantime - which will usually be a few.
Then I clone it to a local machine and get Claude Code Opus to try to get it to work, and I'll prompt it from there until it does. If it's a Linux program, that'll be in a terminal window. If it's Windows, I'll use VS Code, because VS Code's integrated terminal is better than a native terminal window on Windows.
That's a general workflow. Sometimes I won't use GitHub at all. Sometimes I'll PXE-boot an entire Linux machine and give it that with admin privs.
And sometimes I just tell it to use sudo as my own account - on my router, for instance, if we want to do things with the firewall.
Why do you go through all the trouble of uploading to Claude Code for the Web only to download it back and run it in the Claude Code terminal? Are there different rate limits for each endpoint, and is that why you do it? Why not just work entirely in the Claude Code terminal locally?
I do most of my initial work in Claude Chat, on my phone. I have my best ideas when I'm away from my desk. Originally, Claude Code for the Web was only on the iPhone, not Android - but it is now.
My iPhone 11 won't let you download certain files, and file handling on the iPhone is awful.
I guess I've just got used to the flow. I don't always do it like that, it was just one example.
Well, honestly, that's already what I do every day. Claude on my phone is rather handy, especially with voice questions.
I get a lot of recipes from it; I've built some tools where I give it a list of ingredients I already have and it suggests recipes.
And it's just helped me fix my washing machine.
These are things I couldn't easily do before by just asking a search engine.
I get it to make product recommendations.
But that does carry some risk, of course, on the questions where pushing me in one direction or another is favourable to the people answering the question - Where should I get my mortgage? Which products are best? That kind of thing.
Right now it is quite neutral: it scrapes existing reviews and gives pros and cons for various decisions.
Running it remotely on a VM seems like a very sensible option. Just don't give it permission to nuke the remote repository, hah (e.g. don't allow force-pushes, use protected branches, and only allow it write access to branches it created).
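For the branch-protection part, here's a rough sketch of one way to set that up via the GitHub REST API (the owner/repo/branch names are placeholders, and I'm assuming a token with admin rights on the repo sitting in `GITHUB_TOKEN`):

```python
import os
import requests  # assumption: calling the GitHub REST API directly

# Hypothetical repo details - substitute your own.
OWNER, REPO, BRANCH = "you", "agent-sandbox", "main"
TOKEN = os.environ["GITHUB_TOKEN"]  # needs admin rights on the repo

# Lock down the branch the agent must not touch: no force-pushes,
# no deletions, and changes land only via a reviewed pull request.
resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "required_status_checks": None,
        "enforce_admins": True,
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "restrictions": None,
        "allow_force_pushes": False,
        "allow_deletions": False,
    },
)
resp.raise_for_status()
print(f"{BRANCH} is now protected")
```

With something like that in place, the agent can push freely to its own feature branches, but anything landing on main has to come through a pull request you've reviewed.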
DuckDB (like most database systems and many applications using memory allocators like jemalloc or mimalloc) doesn't immediately release memory back to the OS after freeing it internally.
Memory allocator strategy - DuckDB uses an allocator that keeps freed memory in a pool for reuse. Returning memory to the OS is expensive (system calls, page table updates), so allocators hold onto it anticipating future allocations.
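As an illustration, here's a minimal Python sketch of that effect (assuming the duckdb and psutil packages are installed; the table size is arbitrary). RSS typically stays well above the baseline after the DROP, even though DuckDB has freed the memory internally:

```python
import os
import duckdb
import psutil  # assumption: psutil installed, used only to read RSS

def rss_mb() -> float:
    """Resident set size of this process in MiB, as the OS sees it."""
    return psutil.Process(os.getpid()).memory_info().rss / 2**20

con = duckdb.connect()  # in-memory database
print(f"baseline RSS:       {rss_mb():.0f} MiB")

# Materialise a large table to force big allocations.
con.execute("CREATE TABLE t AS SELECT range AS i, random() AS x FROM range(20000000)")
print(f"after CREATE TABLE: {rss_mb():.0f} MiB")

con.execute("DROP TABLE t")
print(f"after DROP TABLE:   {rss_mb():.0f} MiB")
# DuckDB has released the table's memory internally, but the allocator
# keeps the freed pages pooled for reuse, so RSS usually stays elevated
# rather than dropping back to the baseline.
```

So a high RSS on its own isn't proof of a leak; the question is whether the pooled memory gets reused by subsequent queries instead of growing without bound.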
Thanks for explaining this! I suspected there was some additional context and have been digging into it. The problem is that the memory never seems to be freed; I could update my issue to show this.
Maintainers have acknowledged problems like this on other issues too.