You can run it on-prem, where you can actually technologically enforce data custody.
Custody enforcement with the cloud-hosted product is mostly contractual, although they do offer some technical features, like encrypting all data with an AWS KMS key in the customer's AWS account.
Still, this relies on trusting that they won't make their own separate copies of the data.
I don't quite understand the 5 second overlap.
I assume it's so that events that occur over a chunk boundary don't get missed, but are there any examples or benchmarks to examine how useful this is?
Yeah, it's so events on a chunk boundary still get captured in at least one chunk. I haven't had the chance to do formal benchmarks on overlap vs. no-overlap yet. The 5s default is a pragmatic choice: long enough to catch most events that would otherwise be split, short enough not to add much cost (120 chunks/hr to ~138). It's also configurable via the --overlap flag.
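A minimal sketch of one common overlap scheme (not the tool's actual code; the 30s chunk length is an assumption for illustration): each chunk keeps its full length, but the next chunk starts `overlap_s` seconds before the previous one ends, so an event straddling a stride boundary still lands whole in some chunk.

```python
def chunk_windows(total_s, chunk_s=30, overlap_s=5):
    """Yield (start, end) second offsets for overlapping chunks.

    Chunks are chunk_s long but advance by (chunk_s - overlap_s),
    so consecutive chunks share overlap_s seconds.
    """
    stride = chunk_s - overlap_s
    windows = []
    start = 0
    while start < total_s:
        windows.append((start, min(start + chunk_s, total_s)))
        start += stride
    return windows

# 70s of input with 30s chunks and 5s overlap:
# [(0, 30), (25, 55), (50, 70)]
# An event at t=28..33 crosses the 30s stride boundary,
# but is fully contained in the (25, 55) chunk.
print(chunk_windows(70))
```

The cost of the overlap is just the extra chunks from the shorter stride, which matches the "slightly more chunks per hour" trade-off described above.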
^ This is a common security misconception in crypto. "We're using an HSM, they can't steal our private key." OK genius, now you still have to secure the HSM.
There's no shortcut to MPC/multisig with 3+ keyholders.
> There's no shortcut to MPC/multisig with 3+ keyholders.
The whole concept of a stablecoin seems to be based on centralised trust.
Ultimately there is some org that has the fiat bank account, that mints and redeems the coins.
Nope, that's the foundation of a bad stablecoin. Trustless, decentralized stablecoins like DAI exist. People just largely don't do their homework and prefer scams that lure them in with promises of "yield".
DAI and SKY are backed in large part by USDC, so they are not truly decentralized. It is possible in theory, but nobody has successfully done it so far.
It's possible in practice: that's how DAI worked originally. It's just not very competitive when the main customers -- traders -- want a lot of liquidity and razor-thin spreads.
DAI made some dumb decisions for market reasons recently, but it was an actual stablecoin for a long time. It worked fine; they just decided to make it worse for some reason.
I really liked these benchmarks, and would check in with them from time to time.
No benchmark is perfect, but these cover such a wide variety of languages and frameworks that they're a good resource for getting a rough idea of the performance a given stack is capable of.
I don't know much about TechEmpower the company; it seems to be a small consultancy, and maintaining this project probably takes not-insignificant resources from them.
The end of the project seems kind of unceremonious, but they don't owe anything to anyone.
It's cool in a 'how much can you tune it' kind of way, but has little practical value. Most sites would be tickled with a 4-digit requests-per-second number, so does it matter whether your chosen framework does 50k/sec or 3 million/sec? Not really.
I think the biggest problem was it just had too many entries, most of which seem tuned to cheating the benchmarks. It would probably be more valuable to just choose the top 3 by popularity from the top 15 or so languages.
> too many entries, most of which seem tuned to cheating the benchmarks
Even for entries that didn't cheat, the code was sometimes unidiomatic in the sense that "real programmers can write Fortran in any language".
This[0] article articulates the issue by highlighting an ASP.NET implementation that was faster than more 'honest' Java/Go implementations primarily by not using ASP.NET features, skirting some philosophical line of what it means to 'use' something.
For me, the more interesting discussion of whether a language/library is faster/leaner than another exists in actual idiomatic use. In some languages you are actively sweating over individual allocations; in some you're encouraged to allocate collections and immediately throw them away. Being highly concerned with memory and performance in the latter type of language happens, but is seldom the dominant approach in the larger ecosystem.
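To make that contrast concrete, here's an illustrative pair (hypothetical, not taken from any benchmark entry): one version in the "allocate a collection and throw it away" style that many high-level languages encourage, and one in the allocation-conscious single-pass style.

```python
def mean_positive(xs):
    # Idiomatic "allocate and discard" style: builds an
    # intermediate list on every call, then throws it away.
    pos = [x for x in xs if x > 0]
    return sum(pos) / len(pos)

def mean_positive_noalloc(xs):
    # Allocation-sweating style: a single pass with two scalars,
    # no intermediate collection.
    total = 0.0
    count = 0
    for x in xs:
        if x > 0:
            total += x
            count += 1
    return total / count

print(mean_positive([1, -2, 3]))         # 2.0
print(mean_positive_noalloc([1, -2, 3])) # 2.0
```

Both are correct; the point is that a benchmark entry written entirely in the second style may say little about how the language performs as its ecosystem actually uses it.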
For anyone wondering, the ASP.NET Core benchmark applications appear to be largely the same.
However, it also appears that as of the last benchmark (round 23), "aspnetcore" has fallen to 35th on the fortunes leaderboard. The code for that result really just uses Kestrel. It doesn't even import any of the usual ASP.NET Core NuGet packages, just what's provided by the web SDK. [0]
I tried this in ChatGPT, asking "geschniegelt" on a 5.2 Instant temporary chat, and got some interesting results.
Sometimes it would reply with the correct definition of geschniegelt; the description was sometimes in German, sometimes in English.
Most of the time it would give me a definition for a different German word "Geil".
For whatever reason, the most interesting results I got were via my work's m365 copilot interface, where it would give me random word descriptions in Hebrew[0] and Arabic[1].