Ideally this should be something search engines handle - but they do a poor job in specialised areas like code repos.
It's helpful to have a github mirror of your "real" repo (or even just a stub pointing to the real repo if you object to github strongly enough that mirroring there is objectionable to you).
One day maybe there will be an aggregator that indexes repos hosted anywhere. But in many ways that will be back to the square one - a single point of failure.
The Fediverse seems to dislike global search. Or is that just a mastodon thing?
IMHO - disagree but it depends on point of view so this is not ”you are wrong” but ”in my view it’s not like that”.
I think it’s the role of the software vendor to offer a package for a modern platform.
Not the role of OS vendor to support infinite legacy tail.
I don’t personally ever need generational program binary compatibility. What I generally want is data compatibility.
I don’t want to operate on my data with decades old packages.
My point of view is either you innovate or offer backward compatibility. I much prefer forward thinking innovation with clear data migration path rather than having binary compatibility.
If I want 100% reproducible computing I think viable options are open source or super stable vendors - and in the latter case one can license the latest build. Or using Windows which mostly _does_ support backward binaries and I agree it is not a useless feature.
Software shouldn't rot. If you ignore the cancer of everything as a subscription service, algorithms don't need to be tweaked every 6 months. A tool for accounting or image editing or viewing text files or organizing notes can be written well once and doesn't need to change.
Most software that was ever written was done so by companies that no longer exist, or by people (not working for a software company) no longer associated with those company they wrote the tool for. In many of these cases the source is not available, so there is no way to recompile it or update it for a new platform, but the tool works as good as ever.
It makes honest people feel rewarded, valued and acknowledge. It teaches people who wish to follow the rules and conform to social norms what those norms are and where we actually draw the line in practice.
Looked at slightly differently, given a split between high trust and low trust preventing conversions from high to low is similarly important to inducing conversions from low to high.
Yes, my understanding (and I suspect the reason why the airflow experiment worked) is that a large part of the reason this happens is because of a mismatch between the output from the vestibular and visual systems. So, the automated defenses of your body freak out and go into a defensive mode.
I think that ~30% of the population just has more sensitivity to the mismatch.
There is always going to be some movement. It’s impossible for there not to be. Whether it is rendered in the VR environment or happening in real-life through small little motions, there’s a lot of little things that help to establish the mismatch.
It’s probably most like getting car sick. You are obviously moving, but you are also stationary at the same time. This doesn’t happen to folks suffering from motion sickness when they are driving, though, because there is now a physical action tying the motion to your inputs.
This may lead you to ask why people watching a movie in a theater don’t get motion sick and the reason is the same, multiple inputs tell you otherwise. You can see the edges of the screen, you can see the audience, there’s a lot of input telling your body there’s nothing weird going on here. The more immersive, the more some people’s bodies do not handle the illusion well.
Have you considered that it's unsolvable? Or - at least - there is an irreconcilable tension between capability and safety. And people will always choose the former if given the choice.
in a pure sense no, it's probably not solvable completely. But in a practical sense, yes, I think it's solvable enough to support broad use cases of significant value.
The most unsolvable part is prompt injection. For that you need full tracking of the trust level of content the agent is exposed to and a method of linking that to what actions it has accessible to it. I actually think this needs to be fully integrated to the sandboxing solution. Once an agent is "tainted" its sandbox should inherently shrink down to the radius where risk is balanced with value. For example, my fully trusted agent might have a balance of $1000 in my AWS account, while a tainted one might have that reduced to $50.
So another aspect of sanboxing is to make the security model dynamic.
> The issue title was interpolated directly into Claude's prompt via ${{ github.event.issue.title }} without sanitisation.
How would sanitation have helped here? From my understanding Claude will "generously" attempt to understand requests in the prompt and subvert most effects of sanitisation.
I would not have helped. People are losing their mind over agents "security" when it's always the same story: You have a black box whose behavior you cannot predict (prompt injection _or not_). You need to assume worst-case behavior and guardrail around it.
And yet people keep not learning same lesson. It's like giving extremely gullible intern that signed no NDA admin rights to your everything and yet people keep doing it
What was the injected title? Why was Claude acting on these messages anyway? This seems to be the key part of the attack and isn’t discussed in the first article.
Because that's how LLMs work. The prompt template for the triage bot contained the issue title. If your issue title looks like an instruction for the bot, it cheerfully obeys that instruction because it's not possible to sanitize LLM input.
Because the standard for configure scripts says that the current directory at invocation is the build directory and the location of the configure script is the source directory. You are expected to be able to have multiple build directories if you want them. If you have written your configure script correctly, than in-tree builds (srcdir == builddir) also work, but most people don't want that anyway.
You can. But this makes intent clear. If you clone a git repo and see build/ with only a gitkeep, you are safe to bet your life savings on that being the compiled assets dir.
It's helpful to have a github mirror of your "real" repo (or even just a stub pointing to the real repo if you object to github strongly enough that mirroring there is objectionable to you).
One day maybe there will be an aggregator that indexes repos hosted anywhere. But in many ways that will be back to the square one - a single point of failure.
The Fediverse seems to dislike global search. Or is that just a mastodon thing?
reply