
I don't have a lot of experience with it yet, but Loki looks promising for small projects. You'd still use it as a centralized logging server, but it's not as resource-expensive as something like self-hosting ELK.

I've only been using it for my homelab, and haven't even moved everything to it yet - but I like it so far. I already use Grafana + InfluxDB for metrics, so having logs in the same interface is nice.

https://grafana.com/oss/loki/
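
If you're curious what feeding it looks like, this is roughly the shape of its HTTP push API - a minimal sketch, assuming a default Loki listening on localhost:3100; the job/host labels are just placeholders:

    # Minimal sketch: push one log line to Loki's HTTP push API.
    # Assumes a default Loki on localhost:3100; labels are placeholders.
    import time
    import requests

    LOKI_URL = "http://localhost:3100/loki/api/v1/push"

    payload = {
        "streams": [{
            "stream": {"job": "homelab", "host": "nas"},  # stream labels
            "values": [
                # each entry is [unix timestamp in nanoseconds, log line]
                [str(time.time_ns()), "disk check finished without errors"],
            ],
        }]
    }

    resp = requests.post(LOKI_URL, json=payload, timeout=5)
    resp.raise_for_status()  # Loki replies 204 No Content on success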



I've been benchmarking it in production.

On a 1-core, 2 GB VPS it can do around 1,200 logs/sec.

Compared to ELK, we're saving several hundred dollars per month.
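
For anyone who wants to reproduce that kind of number, the measurement is basically just batching lines into the push endpoint and timing it; a rough sketch (the endpoint, batch sizes, and labels here are assumptions, not our actual setup):

    # Rough throughput sketch: push batches of synthetic lines to Loki and
    # report logs/sec. Endpoint, batch size, and labels are illustrative,
    # not a production config.
    import time
    import requests

    LOKI_URL = "http://localhost:3100/loki/api/v1/push"
    BATCH_SIZE = 500
    BATCHES = 100

    def make_batch(n):
        now = time.time_ns()
        return {
            "streams": [{
                "stream": {"job": "bench"},
                # timestamps increase within the stream so entries stay in order
                "values": [[str(now + i), f"synthetic log line {i}"] for i in range(n)],
            }]
        }

    start = time.time()
    for _ in range(BATCHES):
        requests.post(LOKI_URL, json=make_batch(BATCH_SIZE), timeout=10).raise_for_status()
    elapsed = time.time() - start
    print(f"{BATCH_SIZE * BATCHES / elapsed:.0f} logs/sec")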


If you can't do 1,200 logs/sec with that hardware + Elasticsearch, you've done something horrendously wrong.

You can do 1,200 logs/sec running Elasticsearch in a container limited to 1 core and 256 MB of memory.
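
Assuming you batch writes through the _bulk API rather than indexing one document per request, that is. A rough sketch against a local single-node ES (the "logs" index and fields are placeholders):

    # Sketch: bulk-index log lines into Elasticsearch via the _bulk API.
    # Assumes a local node on localhost:9200; index name and fields are
    # placeholders for illustration.
    import json
    from datetime import datetime, timezone
    import requests

    ES_BULK_URL = "http://localhost:9200/_bulk"

    def bulk_body(lines):
        # The bulk API takes newline-delimited JSON: an action line, then a source line.
        parts = []
        for line in lines:
            parts.append(json.dumps({"index": {"_index": "logs"}}))
            parts.append(json.dumps({"@timestamp": datetime.now(timezone.utc).isoformat(),
                                     "message": line}))
        return "\n".join(parts) + "\n"

    lines = [f"request handled in {i} ms" for i in range(1000)]
    resp = requests.post(ES_BULK_URL, data=bulk_body(lines),
                         headers={"Content-Type": "application/x-ndjson"})
    resp.raise_for_status()
    print(resp.json()["took"], "ms to index", len(lines), "docs")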


The savings are in RAM costs, since the ELK stack basically won't run under 4 GB. And larger log entries with stack traces ingest much more slowly than in Loki. There's a reason there are no cheap hosted ELK services.


It really depends on the size of each log entry and the complexity of the tokenizers. With 1 core you have a time budget of less than a millisecond per log statement (1 s / 1,200 ≈ 0.83 ms), and that doesn't include the relevant ES/Lucene operational overhead.

This is extremely doable for some workloads, but not others. It really depends on what you're stuffing in.


We use Loki in production. It's pretty good! There are still some issues with memory usage (particularly in the logcli tool) and query performance, but it's a great start.


Loki is really hard to scale up




