> But at the same time technofeudalism, dystopia, etc.
You could run LLMs locally to mitigate this. Of course, running large models like GLM-4.6 is not feasible for most people, but smaller models can run even on MacBooks and sometimes punch well above their weight.