
I have used this tool in the past, though free tier only. It was easy to get up and running and easy to plug into a CICD pipeline. The problem we had with it in practice was that we largely preferred serverless technologies in AWS, where the cost depended mostly or even completely on actual usage - things like Lambda invocations, SQS operations, or autoscaling ECS services. In that case the estimates we got from Infracost were not very useful. Providing a meaningful cost estimate requires projecting usage, which is something our development teams were very bad at, if they could be bothered to care at all.

I like the idea of implementing tagging enforcement in the pipeline. In a perfect world you would use cloud policies to do this, but in practice this is a big loser in AWS, where a staggering number of resources are created by one API call and then tagged in a follow-up API call, meaning an SCP to prevent launching untagged resources won't ever fully work.
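For anyone unfamiliar, the SCP pattern being described looks roughly like this (the `CostCenter` tag key is just an example). It works for actions like `ec2:RunInstances` that accept tags in the launch call itself, but it's useless for services that tag via a second `CreateTags`-style call, because `aws:RequestTag` is null on the launch request either way:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUntaggedInstanceLaunch",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": { "aws:RequestTag/CostCenter": "true" }
      }
    }
  ]
}
```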



Great point about the multiple API calls. One of the big problems we’ve heard about SCPs is that they act too late: if a deployment fails because of one, the developer has to go through another pull request/code review cycle.

Estimating costs for serverless technologies upfront is definitely challenging. We're thinking of bringing in the last 30 days of usage for these resources to give engineers some visibility.


I've not used the product, so it may already do this, but does it ask you for the data it needs in the Pull Request?

I have experience with a logging system where any diff to the logged data needed a tag like `log_size_increase=3 bytes` – the CICD system would then combine this with data it already had to produce an estimate of the overall extra storage needed.
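The CICD-side calculation is simple once the per-event delta is declared in the PR. A rough sketch of the idea (the tag format, function names, and the example numbers are all my assumptions, not the real system):

```python
import re


def parse_log_size_increase(tag: str) -> int:
    """Parse a PR tag like 'log_size_increase=3 bytes' into bytes per event."""
    m = re.fullmatch(r"log_size_increase=(\d+)\s*bytes", tag.strip())
    if not m:
        raise ValueError(f"unrecognized tag: {tag!r}")
    return int(m.group(1))


def extra_storage_bytes(tag: str, events_per_day: int, retention_days: int) -> int:
    """Combine the declared per-event delta with data the pipeline
    already has (event rate, retention) into a total storage estimate."""
    return parse_log_size_increase(tag) * events_per_day * retention_days


# e.g. 3 extra bytes/event, 10M events/day, 90-day retention
estimate = extra_storage_bytes("log_size_increase=3 bytes", 10_000_000, 90)
# → 2_700_000_000 bytes (~2.7 GB)
```

The point is that the developer only answers the narrow question they can actually answer (bytes per event); the pipeline supplies the rates.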

Perhaps the same could be done here. Rather than asking about "usage" of some serverless system, which is a vague question and therefore hard to answer, it could ask something more specific. For example: how many requests per second is it expected to receive? Which other serverless functions call it (and therefore whose scale it will necessarily inherit)? What increase in usage is expected from this change?


It's been a while since I used this tool, but as best I can recall there was a way to provide usage estimates to feed the variable cost calculations. The biggest problem we had was getting development teams to know and care enough to provide accurate numbers. The suggestion elsewhere in the thread to use the last 30 days of historical data as a starting point could be a great way to establish a meaningful baseline. If someone had better projections they could provide them, but at least it wouldn't be a total crapshoot.
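For reference, Infracost accepts these estimates through a usage file passed via `--usage-file`. Something along these lines (resource addresses here are made up, and the exact parameter names vary by resource type and version, so check the current docs):

```yaml
version: 0.1
resource_usage:
  aws_lambda_function.app:
    monthly_requests: 5000000
    request_duration_ms: 350
  aws_sqs_queue.jobs:
    monthly_requests: 2000000
```

Pre-populating a file like this from the last 30 days of billing/CloudWatch data, then letting teams override individual numbers, would give exactly the baseline described above.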


> but in practice this is a big loser in AWS where a staggering number of resources are created by one API call and then tagged as a followup API call

We have a bot at work that sends you (or a DL with a bunch of people) a nastygram if you forget to tag your resources, but it doesn't account for this. So if CloudFormation isn't done yet, you get the email and then have to reply to everyone with a screenshot showing that you didn't, in fact, goof it up. I wonder if you can make EventBridge (or however it's implemented, I'm not sure) delay the event for 30 seconds so the check doesn't run until CloudFormation is done tagging.



