LLM auto-complete is good — it suggests more of what I was going to type, and correctly (or close enough) often enough that it’s useful. Especially in the boilerplate-y languages/code I have to use for $dayjob.
Search has been neutral. For finding little facts it’s been about the same as regular search. When digging in, I want comprehensive, dense, reasonably well-written reference documentation. That’s not exactly widespread, but LLMs don’t provide it either.
Chat-driven coding generates too much buggy/incomplete code to be useful, and the chat interface is seriously clunky.