Hacker News | otterley's comments

It depends on how you've configured the router. It's quite common to reject or drop ingress traffic received on an egress interface destined to a NATed network address. In fact, I would flag any configuration that didn't have that.

Yes, but we've just successfully rewritten the article in the comment section: "it's not NAT itself that provides the security, but other configuration any sane person would expect on a device doing NAT to prevent unexpected inbound connections" is exactly the distinction the article set out to draw.

Fair point!

Yes, of course. If NAT denied connections in the way people think it does, then it wouldn't be necessary to separately configure the router to reject inbound connections. It's possible to have configurations that don't do that precisely because NAT doesn't do that itself.
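To make that concrete, here's a hedged sketch (assuming a Linux router using iptables; the interface name is illustrative). The NAT rule and the filter rule are independent pieces of configuration, and only the latter actually drops unsolicited inbound traffic:

```shell
# NAT: rewrite source addresses of LAN traffic leaving the egress
# interface (eth0 here is an assumption, not a given).
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# The MASQUERADE rule above does NOT, by itself, block inbound
# connections. The protection people attribute to "NAT" lives in
# separately configured filter rules like these:
iptables -A FORWARD -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -j DROP   # drop unsolicited ingress toward the LAN
```

Remove the two FORWARD rules and the NAT rule keeps working, but nothing is left to reject inbound connections, which is the gap the article is pointing at.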

IAAL (not legal advice) and find your comment confusing: first, because standing is a question of whether you can even have your complaint heard by a court; and second, because “brazenness” doesn’t necessarily make a case stronger.

(2022)


I don’t disagree. However, this is a “news” site, and so, we should be posting stories about recent events related to the project, as opposed to a homepage that hasn’t been updated in years.

It’s the difference between posting a story about a recent Tesla lawsuit vs. linking to Tesla’s homepage.


You are assuming everyone knows everything about things that were released in the past. At least some of us are learning about this today.

Well, when they see news related to that project, they can go look up the project if they like. It's no different than news related to any other technology you might not know about.

not helpful

You can find the Docker documentation at https://docs.docker.com.

In case you don't get the reference: https://www.youtube.com/watch?v=cISYzA36-ZY

And how are you solving the problem? The article does not say.

> I'm answering the question your observability vendor won't

There was no question answered here at all. It's basically a teaser designed to attract attention and stir debate. Respectfully, it's marketing, not problem solving. At least, not yet.


There's more information here: https://docs.usetero.com/introduction/how-tero-works (the link in the article is broken).

They determine which events/fields are unused and then add filters to your observability provider so you don't pay to ingest them.


What’s the differentiation vs., say, Cribl? Telemetry pipeline providers abound.

The question is answered in the post: ~40% on average, sometimes higher. That's a real number from real customer data.

But I'm an engineer at heart. I wanted this post to shed light on a real problem I've seen over a decade in this space that is causing a lot of pain; not write a product walkthrough. But the solution is very much real. There's deep, hard engineering going on: building semantic understanding of telemetry, classifying waste into verifiable categories, processing it at the edge. It's not simple, and I hope that comes through in the docs.

The docs get concrete if you want to peruse: https://docs.usetero.com/introduction/how-tero-works


I would contend that it is impossible to know a priori what is wasted telemetry and what isn’t, especially over long time horizons. And especially if you treat your logs as the foundational source of truth for answering critical business questions as well as operational ones.

And besides, the value isn’t knowing that the waste rate is 40% (and your methodology isn’t sufficiently disclosed for anyone to evaluate its accuracy). The value is in knowing what is or will be wasted. It’s reminiscent of that old marketing complaint: “I know that half my advertising budget is wasted; I just don’t know which half.”

Storage is actually dirt cheap. The real problem, in my view, is not that customers are wasting storage, but that storage is being used inefficiently: the storage formats aren’t always mechanically sympathetic and cloud-spend-efficient with respect to the ways the data is read and analyzed, and there’s still a culturally grounded, disparate (and artificial) treatment of application and infrastructure logs vs. business records.


> Redis is fundamentally the wrong storage system for a job queue when you have an RDBMS handy

One could go one step further and say an RDBMS is fundamentally the wrong storage system for a job queue when you have a persistent, purpose-built message queue handy.

Honestly, for most people, I'd recommend they just use their cloud provider's native message queue offering. On AWS, SQS is cheap, reliable, easy to start with, and gives you plenty of room to grow. GCP PubSub and Azure Storage Queues are probably similar in these regards.

Unless managing queues is your business, I wouldn't make it your problem. Hand that undifferentiated heavy lifting off.
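The semantics you're paying SQS for can be sketched in a toy, stdlib-only model (the method names here are mine, not the SQS API; real SQS adds durability, scaling, and DLQs on top). The key idea is the visibility timeout, which gives you at-least-once delivery without losing a job when a worker crashes:

```python
import time
from dataclasses import dataclass

@dataclass
class Message:
    body: str
    invisible_until: float = 0.0  # 0 means immediately visible

class ToyQueue:
    """Stdlib-only sketch of SQS-style at-least-once delivery."""
    def __init__(self, visibility_timeout=30.0):
        self.visibility_timeout = visibility_timeout
        self.messages = []

    def send(self, body):
        self.messages.append(Message(body))

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for msg in self.messages:
            if msg.invisible_until <= now:
                # Hide the message instead of removing it: if the consumer
                # crashes before deleting, it becomes visible again.
                msg.invisible_until = now + self.visibility_timeout
                return msg
        return None

    def delete(self, msg):
        self.messages.remove(msg)  # the "ack": only now is it really gone

q = ToyQueue(visibility_timeout=30.0)
q.send("resize-image-42")
m = q.receive(now=0.0)
print(m.body)                    # resize-image-42
print(q.receive(now=1.0))        # None: message is in flight (invisible)
print(q.receive(now=31.0).body)  # resize-image-42: redelivered after timeout
q.delete(m)
print(q.receive(now=62.0))       # None: acked for good
```

That redelivery-after-timeout behavior is exactly the failure handling you'd otherwise have to build yourself on Redis or an RDBMS.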


Rails shops seem not to like using SQS/PubSub/Kafka/RabbitMQ for some reason. They seem to really like worker frameworks like Sidekiq or SolidQueue. Compare this with Java, C#, or Python shops, which all seem much more likely to use a separate message queue and have that handle the job queue.

Rails shops running on normal CRuby have difficulty effectively scaling out multithreading due to the GVL. It's much easier to "scale" Ruby by forking with Sidekiq or multiple processes, and to have those consume data from a Redis list. It is possible to get around the GVL using JRuby, but that poses a different set of constraints and issues.

There is some definite blending of async messaging in the Ruby world though. I've seen connectors which take protobufs on a kafka topic and use sidekiq to fan out the work. With Redis (looking at sidekiq specifically) it becomes trivial to maintain the "current" working set with items popped out of the queue, with atomic commands like BLMOVE (formerly BRPOPLPUSH).
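The BLMOVE reliable-queue pattern described above can be sketched roughly as follows. Plain Python lists stand in for the Redis lists here, so the atomicity that Redis's BLMOVE actually guarantees is only simulated:

```python
# Toy model of Redis's BLMOVE reliable-queue pattern. In real Redis,
# BLMOVE pops from the pending list and pushes onto a per-worker
# working list in a single atomic command, so a crashed worker's
# in-flight jobs remain discoverable in its working list.
pending = ["job-1", "job-2", "job-3"]   # jobs LPUSH'd by producers
working = []                            # this worker's in-flight set

def fetch_job():
    """Analogue of: BLMOVE pending working RIGHT LEFT."""
    if not pending:
        return None
    job = pending.pop(0)   # pop + push is one atomic step in real Redis
    working.append(job)
    return job

def ack(job):
    """Analogue of: LREM working 1 job -- drop the job once processed."""
    working.remove(job)

job = fetch_job()
print(job, working)   # job-1 ['job-1']
ack(job)
print(working)        # []
# If the worker died before ack(), "job-1" would still sit in its
# working list, where a recovery pass could requeue it.
```

The "current working set" the parent mentions is exactly that `working` list: crash recovery becomes a scan of working lists rather than guesswork about lost pops.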

Kafka is taking an interesting turn however with the KIP-932 "Queues for Kafka" initiative. I personally believe it could eat RabbitMQ's lunch if done effectively. Allowing for multiple consumers, a "working set" of unack'ed data, without having to worry as much about the topic partition count.


> Rails shops running on normal CRuby, have difficult in effectively scaling out multithreading due to the GVL lock. It's much easier to "scale" ruby using forking with sidekiq or multi process, and to have it consume data from a Redis list.

This isn't cloud-native at all. In a cloud-native world, these workers would be running in hosted functions (e.g. Lambda) and be consuming from a work queue. I assume this is possible in Rails, but the startup overhead might be considerable.


I've also noticed that they conflate the notion of workers, queues, and message busses. A worker handles asynchronous tasks, but the means by which they communicate might be best served by either a queue or a message bus, depending on the specific needs. Tight coupling might be good for knocking out PoCs quickly, but once you have production-grade needs, the model begins to show its weaknesses.
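The distinction can be shown in a few lines (a minimal stdlib sketch; the `Bus` class and its names are mine, purely for illustration). A queue delivers each task to exactly one worker; a bus delivers each event to every subscriber:

```python
import queue

# Point-to-point queue: each task is consumed by exactly one worker.
work_q = queue.Queue()
work_q.put("send-email-7")
print(work_q.get())   # only one consumer ever sees this item

# Message bus (pub/sub): every subscriber sees every event.
class Bus:
    def __init__(self):
        self.subscribers = []
    def subscribe(self, fn):
        self.subscribers.append(fn)
    def publish(self, event):
        for fn in self.subscribers:
            fn(event)

seen_by_audit, seen_by_billing = [], []
bus = Bus()
bus.subscribe(seen_by_audit.append)
bus.subscribe(seen_by_billing.append)
bus.publish("user-signed-up")
print(seen_by_audit, seen_by_billing)   # both subscribers got the event
```

A background-worker framework that hard-wires one of these shapes forces the other use case into an awkward fit, which is the coupling problem above.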

It took seven years to address this concern following the initial bug report (2018). That seems like a long time, considering that reading CPU time can be in the hot path for profiled code.

400x slower than 70ns is still only 28us. How often is the JVM calling this function?

It depends. If you’re doing continuous profiling, it’d make a call to get the current time at every method entry and exit, each of which could then add a context switch. In an absolute sense it appears to be small, but it could really add up.
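A back-of-envelope calculation shows how it adds up (the per-read cost is the worst case discussed above; the call rate is an assumption for illustration):

```python
# Rough arithmetic: clock reads at every method entry and exit.
clock_cost_s = 28e-6            # ~28 us per clock read (400x-slowed worst case)
reads_per_method = 2            # one read at entry, one at exit
method_calls_per_sec = 100_000  # an assumed, fairly modest instrumented workload

overhead_s_per_s = clock_cost_s * reads_per_method * method_calls_per_sec
print(round(overhead_s_per_s, 3))  # 5.6
```

That's 5.6 CPU-seconds of clock overhead per wall-clock second, i.e. more than five cores doing nothing but reading the clock, which is why a "small" absolute cost can still dominate.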

This is what flame graphs are super helpful for, to see whether it’s really a problem or not.

Also, remember that every extra moment running instructions is a lost opportunity to put the CPU to sleep, so this has energy efficiency impact as well.


If you are doing continuous profiling, you are probably using a low overhead stack sampling profiler rather than recording every method entry and exit.

That's a fair point. It really depends. For example, if you're recording method run times via an observability SDK at full fidelity, this could be an issue.

If it's calling it twice per function, that's enormously expensive and this is a major win.

28us is still a solid amount of time.

If it's called once an hour, who cares?

Even called every frame 60 times per second, it's only 0.2% of a 60 fps time budget.

It's not a huge amount of time in absolute terms; only if it's relatively "hot."


It is possible - and it even happens - that from time to time, a person with whom you vehemently disagree about most things is right about something. As they say, even a stopped clock is right twice a day.

You're about to find out just how unethical an App Store monopoly can get.
