Hacker News | maronato's comments

The problem is that it seems they didn't license it: https://pbs.twimg.com/media/HD2Ky9jW4AAAe0Y?format=jpg&name=...

If you use Claude through an interface that’s not Claude Code, you’ll only stick with it for as long as it proves itself the best. With other interfaces, you can experiment with multiple models and switch from one to another for different tasks or different periods of time.

Those tokens going to other providers are tokens not going to Anthropic, so they want to lock you in with Claude Code. And it clearly works, since a lot of people swear by it.


Because other OSes do not, and the Notepad++ team wants all users to have a similar experience.

If you don’t need auto updates, just disable them.

More importantly, Notepad++ being able to update itself is not the exploit here. Your OS's package manager would have downloaded the same compromised binary as Notepad++'s built-in updater.


What OS doesn't have a package manager now? Windows, Linux, and macOS all have their own systems.

On Windows, the package manager downloads the Notepad++ release directly from GitHub, so it would not have been compromised. As I understand it, the hijack was done on the Notepad++ website at the webhost level, and the built-in updater pulled from there.
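For context, winget installer manifests pin the download location and a checksum, which is why the package-manager path would have stayed clean. A simplified, illustrative fragment (version and hash are placeholders, not the real manifest values):

```yaml
# winget installer manifest (simplified, illustrative values)
PackageIdentifier: Notepad++.Notepad++
PackageVersion: "8.0.0"
InstallerType: exe
Installers:
  - Architecture: x64
    # Pulled from GitHub releases, not from notepad-plus-plus.org,
    # so a webhost-level hijack of the website doesn't affect it.
    InstallerUrl: https://github.com/notepad-plus-plus/notepad-plus-plus/releases/download/v8.0.0/npp.8.0.0.Installer.x64.exe
    # The download is also verified against a pinned hash.
    InstallerSha256: 0000000000000000000000000000000000000000000000000000000000000000
```

Even if the URL were redirected, a tampered binary would fail the SHA-256 check.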


Is the 1Password extension still not working on it?

I really want to switch, but no 1P support makes it really hard, unfortunately.


Are you talking about on macOS or iOS? On macOS I think the 1Password extension has always worked for me? At least it definitely does now. What issues did you have with it? On iOS I don’t use the extension - I’ve got 1Password set up as my default password store there.


If the goal is to support them, they do offer a subscription: https://tailwindcss.com/sponsor#insiders

While the content is different, it's much cheaper than Tailwind Plus. If you use AI, it may even be more useful than Plus because of the great agent rules and Discord community.


People and companies can host it for personal/internal use.

People and companies cannot host it and offer it as a service to other people or companies.

https://www.elastic.co/licensing/elastic-license/faq


The license doesn’t mention anything about “personal” or “internal” use.

Again, IANAL, but I can see why a company might be cautious about using Bear as a self-hosted blog engine, since companies technically have “users.”

For comparison, the Elastic License v2 - which this license is apparently modeled on - explicitly restricts use by "third parties":

> "You may not provide the software to third parties as a hosted or managed service"

----

The Bear license doesn’t include similar language, which could create uncertainty.

It might help to explicitly clarify that self-hosting for one’s own use is allowed, or to add “third party” wording to the limitations.

I only raise this because (a) licensing is tricky, and (b) if this feedback helps the author clarify their intended license terms, that’s a win for everyone.

https://www.elastic.co/licensing/elastic-license


But he chose not to use the exact wording from the Elastic License, which clearly says "third parties." Instead, he wrote his own license, and now it is not clear whether I am allowed to self-host. In my opinion that is a bad decision.


Most websites don’t let users sign up with passkeys. You need to create an account using email/password and then go to their settings page and create a passkey. Now you can sign in with the passkey.
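The account flow described above can be sketched as a tiny data model. This is purely illustrative (the class and method names are made up, not any real site's API); it just shows that the passkey is an add-on to an account that must already exist with a password:

```python
# Illustrative sketch of the common sign-up flow: email/password first,
# passkey enrolled later from the settings page. All names are hypothetical.

class Account:
    def __init__(self, email: str, password_hash: str):
        # Sign-up requires an email and password up front.
        self.email = email
        self.password_hash = password_hash
        self.passkey_credentials: list[str] = []  # WebAuthn credential IDs

    def register_passkey(self, credential_id: str) -> None:
        # Done from the settings page, after the account already exists.
        self.passkey_credentials.append(credential_id)

    def can_sign_in_with_passkey(self) -> bool:
        return bool(self.passkey_credentials)


# 1. Create the account with email/password; no passkey sign-in yet.
acct = Account("user@example.com", password_hash="<hashed>")
assert not acct.can_sign_in_with_passkey()

# 2. Only afterwards can a passkey be enrolled and used to sign in.
acct.register_passkey("cred-abc123")
assert acct.can_sign_in_with_passkey()
```

A site that supported passkey-first sign-up would instead enroll the credential during registration, with no password step at all.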


Claude trying to cheat its way through tests has been my experience as well. Often it’ll delete or skip them and proudly claim all issues have been fixed. This behavior seems to be intrinsic to it since it happens with both Claude Code and Cursor.

Interestingly, it’s the only LLM I’ve seen behave that way. Others simply acknowledge the failure and, after a few hints, eventually get everything working.

Claude just hopes I won’t notice its tricks. It makes me wonder what else it might try to hide when misalignment has more serious consequences.


Or it was trained to be aligned with Musk by receiving higher rewards for Musk-aligned reasoning during its reinforcement learning steps.


This isn’t dangerous in the sense that these models are smart or produce realistic art. The danger is that they’re misaligned with both their company’s values and human values.

The model doesn’t have to be powerful to snitch on you to the FBI or to have a distorted sense of morality and life.


