
Even in a product as technically wonderful as Temporal, relatively simple oversights like this can lead to cross-tenant leakage.

For anyone more familiar with Temporal: is there a way clients could have built in their own defense in depth to guard against tenant leakage at the provider (Temporal) level?


Don't use namespaces. Wire up multi-tenancy at the RBAC level. Need stronger isolation? Run another cluster.

Things like this are inevitable, especially these days.

Encrypting tenant data with per-tenant keys is a good defense against this kind of thing.
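A minimal sketch of what that can look like, assuming a simple per-tenant symmetric-key setup (the tenant_keys lookup, the Fernet choice, and the function names are just my illustration, nothing Temporal-specific):

    # Sketch only: one symmetric key per tenant, so a payload that leaks to the
    # wrong tenant is still unreadable ciphertext. Names are illustrative.
    from cryptography.fernet import Fernet

    tenant_keys = {
        "tenant-a": Fernet.generate_key(),  # in practice, fetched from a KMS, not generated inline
        "tenant-b": Fernet.generate_key(),
    }

    def encrypt_for_tenant(tenant_id: str, payload: bytes) -> bytes:
        return Fernet(tenant_keys[tenant_id]).encrypt(payload)

    def decrypt_for_tenant(tenant_id: str, token: bytes) -> bytes:
        # Raises InvalidToken if the ciphertext was produced under another tenant's key.
        return Fernet(tenant_keys[tenant_id]).decrypt(token)

Even if the provider hands back another tenant's payload by mistake, decrypting it under your own key fails loudly instead of silently exposing their data.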

My Codex CLI didn't notice that a version bump was available, but I manually ran pnpm add -g @openai/codex and 5.3 was there afterward.

As someone who doesn't keep track of the influencer scene at the moment because I am way addicted to building...

You should totally give Claude Code a try. The biggest problem is that it is glaze-optimized, so you have to work at getting it to not treat you like the biggest genius of all time. But when you manage to get into a good flow with it, and your project is very predictably searchable, the results start to be quite helpful, even if just to get yourself unstuck when you're in a rut.


This. Claude Code was the only one able to grok my 20-year-old C++ codebase so that I could update things deep in its bowels to make it compile, because I had neglected it on a thumb drive for 15 years. I had no mental model of what was going on. Claude built one in a few minutes.


I will try it. I did use Cursor agents beforehand (using Sonnet/Opus 4), and my problems were that it was slower than I was (meaning me prompting the AI) and not good enough to leave unattended.


Hah that's pretty fun. I got tossed about by the animated hands for a few, but grabbed a 194 after that.

Dunno about the trigrams though, mostly it's on the "token group" level for me - either the upcoming lookahead feels familiar or it doesn't, and I don't much get bothered by the specific letters as much as "oh I don't have muscle memory on that word, and it's sadly nestled between two easy words, so it's going to be a patchy bit of alternating speed".


Thank you - glad you liked it, and thanks for sharing your impressions and feedback; it helps me understand what users like.

> Dunno about the trigrams though, mostly it's on the "token group" level for me - either the upcoming lookahead feels familiar or it doesn't, and I don't much get bothered by the specific letters as much as "oh I don't have muscle memory on that word, and it's sadly nestled between two easy words, so it's going to be a patchy bit of alternating speed".

Could you elaborate a bit on this part - not sure I fully follow.

The trigrams/bigrams are mostly there to help the user discover whether there are patterns that really slow them down or cause a lot of mistakes. This is something I wanted that I didn't see in any other apps.

This is also what we use under the hood for SmartPractice weak-point identification. We look at which character sequences are most relevant (for example, the ta sequence is far more common than za) and which ones the user struggles with the most. This is just one of the weak-point signals we use in the user weakness profile.
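Purely as an illustration of that weighting idea (my own sketch, not the app's actual code; the frequency table and scoring formula are assumptions):

    # Sketch: weight each bigram's error rate by how common it is, so a
    # frequent-but-fumbled sequence like "ta" ranks above a rare one like "za".
    from collections import Counter

    def weak_bigrams(typed_samples, corpus_freq):
        # typed_samples: list of (target_text, set_of_error_positions)
        # corpus_freq: bigram -> relative frequency in the practice corpus
        attempts, errors = Counter(), Counter()
        for target, error_positions in typed_samples:
            for i in range(len(target) - 1):
                bg = target[i:i + 2]
                attempts[bg] += 1
                if i in error_positions or i + 1 in error_positions:
                    errors[bg] += 1
        scores = {
            bg: (errors[bg] / attempts[bg]) * corpus_freq.get(bg, 0.0)
            for bg in attempts
        }
        return sorted(scores, key=scores.get, reverse=True)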


Love this news! Amazing work by Bereket!


I wonder if a Lexus/Toyota-, Acura/Honda-, Lamborghini/Audi-style OpenAI/Microsoft marketing split isn't in the best interests of tech giants going forward, since LLMs are nondeterministic, unlike the deterministic nation-states they've built up till now...


Don’t dangle the man - enrich him with your advice!


Well, the details in the article are sparse, but given what we are told, it seems highly likely that instead of using their ML model directly, they could use their ML model to fit a regression or a piecewise polynomial (e.g., a linear interpolation or spline) over the result. So the user input is not driving the ML model; it is simply an input into a polynomial, giving a calculation that is trivial for a modern computer.

Then they wouldn't even need to cache anything, and the result would be instantaneous with no real loss of accuracy.
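Roughly the shape of it, assuming a single scalar input for simplicity (expensive_model and the grid bounds are placeholders, not anything from the article):

    # Sketch: sample the expensive model once on a grid offline, then serve
    # user queries from a cheap cubic spline fitted to those samples.
    import numpy as np
    from scipy.interpolate import CubicSpline

    def expensive_model(x):                      # stand-in for the real ML model
        return np.sin(x) * np.exp(-0.1 * x)

    grid = np.linspace(0.0, 10.0, 200)           # offline, once
    spline = CubicSpline(grid, expensive_model(grid))

    def predict(x):                              # online, per user input
        return float(spline(x))                  # no ML model in the request path

Higher-dimensional inputs would need a tensor-product grid or scattered-data interpolation, but the trick is the same: the model's cost is paid once up front, not per request.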


Yeah, the data market from Discord bots is quite a thing. Really concerning, imo.


> tin foil can disrupt mind control

You're not weaponizing Gell-Mann amnesia against us, are you?


Not at all. Just doing my part to point out, whenever it's topical, that tin foil hats work and aluminum foil hats don't. There's a reason they want you to call aluminum foil by the wrong name.


Committed to the bit.

Kudos


Mind control waves are pure magnetic fields as opposed to traditional EM waves. So although aluminum can act as a Faraday cage, it's not a magnetic shield and hence not capable of stopping mind control.


> In plain language:

> No matter how sophisticated, the system MUST fail on some inputs.

Well, no person is immune to propaganda and stupidity, so I don't see it as a huge issue.


I have no idea how you believe this relates to the comment you replied to.


If I'm understanding correctly, they are arguing that the paper only requires that an intelligent system will fail for some inputs, and suggesting that things like propaganda are inputs for which the human intelligent system fails. Therefore, they are suggesting that the human intelligent system does not necessarily refute the paper's argument.


If so, then the paper's argument isn't actually trying to prove that AGI is impossible, despite the title, and the entire discussion is pointless.


But what then is the relevance of the study?


I suppose it disproves an embodied, fully meat-space god, if sound?


I'm looking at the title again and it seems wrong, because AGI ~ human intelligence. Unless human intelligence has non-physical components to it.

