Just migrated off of PG to DDB as the main DB for my application (still copying data to SQL for analytics). With distributed functions and code hosted on Lambdas, connection management to SQL became a nightmare, with dropped requests all over the place.
Yeah I have been using Supabase recently and I really like it. You still get the “serverless” benefits but at the end of the day it is just a Postgres database with some plugins. It is super easy to figure out where the data is coming from/going to.
Meanwhile at work I have a coworker who loves to create AWS soup: an assortment of Lambdas/API Gateways/SQS queues/SNS topics to accomplish tasks such as taking files from one S3 bucket and putting them in another S3 bucket owned by a different team. Their justification was that it's generic so other teams could use it, but it's a pain to maintain and make changes to.
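For a sense of scale: the whole job is more or less one SDK call. A minimal sketch with AWS SDK v3 (bucket and key names are made up):

    import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({});

    // Copy one object from our bucket into the other team's bucket.
    // Bucket/key names here are hypothetical.
    await s3.send(new CopyObjectCommand({
      CopySource: "our-team-bucket/reports/latest.csv",
      Bucket: "other-team-bucket",
      Key: "incoming/latest.csv",
    }));

Cross-account permissions are the only genuinely fiddly part, and the Lambda/SQS/SNS soup doesn't make those go away.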
Not to be that guy, but why Lambdas? I'm genuinely curious. I've never found the "cost savings" (big air quotes) worth the increased configuration/permissions complexity. Especially when Fargate exists, where you can just throw a Docker container at AWS, what do Lambdas add? The scale-to-zero?
With CDK, I can get an ECS service up and running in the same amount of time it'd take to create a Lambda function behind API Gateway or triggered by SQS/cron. Deploys are easier with Lambda, the cost savings are real, and permissions/configuration are the same level of complexity unless you're cutting corners. I'd only use ECS for things I know will have high sustained throughput, long-duration (>15 min) tasks, or anything that absolutely needs more persistence between executions.
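For anyone who hasn't tried it, the ECS side really is only a few lines of CDK. A sketch using the load-balanced Fargate pattern (stack and image names are placeholders):

    import * as cdk from "aws-cdk-lib";
    import * as ecs from "aws-cdk-lib/aws-ecs";
    import * as ecsPatterns from "aws-cdk-lib/aws-ecs-patterns";

    const app = new cdk.App();
    const stack = new cdk.Stack(app, "ApiStack");

    // One construct synthesizes the cluster, task definition, service, and ALB.
    new ecsPatterns.ApplicationLoadBalancedFargateService(stack, "ApiService", {
      cpu: 256,
      memoryLimitMiB: 512,
      desiredCount: 1,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry("my-org/my-api"), // placeholder
        containerPort: 8080,
      },
    });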
Serverless is great if you recognize that it's just somebody else's container runtime. I wish there were better tooling for Docker-based Lambdas, though. I hate the whole S3 deployment dance for zip-based Lambdas (yes, SAM does it for you now, but it's still there).
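The container path at least skips the zip upload. A sketch of what I mean in CDK (it builds a local Dockerfile and pushes the image to ECR during deploy; the path is made up):

    import * as cdk from "aws-cdk-lib";
    import * as lambda from "aws-cdk-lib/aws-lambda";

    const app = new cdk.App();
    const stack = new cdk.Stack(app, "WorkerStack");

    // Build ./worker/Dockerfile, push the image to ECR, and point the
    // function at it -- no zip/S3 step involved.
    new lambda.DockerImageFunction(stack, "Worker", {
      code: lambda.DockerImageCode.fromImageAsset("./worker"), // hypothetical path
      memorySize: 512,
      timeout: cdk.Duration.minutes(5),
    });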
EC2-backed ECS is a great fit for things you can run ephemerally in a container but that need a persistent data store.
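Something like this shape, in CDK terms (names and paths are hypothetical); the host volume lives on the instance, so it outlives any one container:

    import * as cdk from "aws-cdk-lib";
    import * as ecs from "aws-cdk-lib/aws-ecs";

    const app = new cdk.App();
    const stack = new cdk.Stack(app, "BatchStack");

    const taskDef = new ecs.Ec2TaskDefinition(stack, "Task");

    // Instance-local storage that survives container restarts.
    taskDef.addVolume({
      name: "data",
      host: { sourcePath: "/mnt/data" }, // hypothetical path on the EC2 host
    });

    const container = taskDef.addContainer("app", {
      image: ecs.ContainerImage.fromRegistry("my-org/worker"), // placeholder
      memoryLimitMiB: 512,
    });
    container.addMountPoints({
      containerPath: "/var/lib/app",
      sourceVolume: "data",
      readOnly: false,
    });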
Why not? The setup I'm experimenting with for an API right now is basically a single Lambda that's accessible through a function URL (so no ELB/ALB) plus an RDS instance. Spinning up additional environments is a single CloudFormation call, and the deployment artifact works as either a Docker container or an S3 zip (depending on the Lambda execution environment).
Seems like a leaner setup than using ECS/Fargate + LBs to me. Have I overlooked something?
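In CDK the function URL part looks like this, for the curious (handler and asset path are hypothetical):

    import * as cdk from "aws-cdk-lib";
    import * as lambda from "aws-cdk-lib/aws-lambda";

    const app = new cdk.App();
    const stack = new cdk.Stack(app, "ApiStack");

    const fn = new lambda.Function(stack, "Api", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("./dist"), // hypothetical build output
    });

    // Public HTTPS endpoint straight off the function -- no ALB/API Gateway.
    const url = fn.addFunctionUrl({
      authType: lambda.FunctionUrlAuthType.NONE,
    });
    new cdk.CfnOutput(stack, "ApiUrl", { value: url.url });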
One of Lambda's ideal use cases is personal projects. They usually serve very few requests, so Lambda's ability to scale to zero translates into real cost savings.
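Back of the envelope, with us-east-1 list prices as of my last look (they drift, so check the pricing pages):

    Hobby API, ~10k requests/month, 100 ms each at 128 MB:
      10,000 x 0.1 s x 0.125 GB = 125 GB-s/month
      -> far under the 400,000 GB-s + 1M requests free tier, so ~$0

    Smallest always-on Fargate task (0.25 vCPU / 0.5 GB):
      0.25 x $0.04048/hr + 0.5 x $0.004445/hr ≈ $0.0124/hr ≈ $9/month

$9/month isn't much, but across a handful of idle side projects it adds up, and scale-to-zero makes it $0.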
I totally believe you, I just can't see how it becomes easier than chucking a container on Fargate or something. Maybe I've just been scarred by lambda rat's nests in the past.
Yeah, the "proper" way to do Lambdas, shown in so many fancy architecture diagrams, is a rat's nest. I don't like APIs on Lambda unless you can shove them into one container with a catchall proxy on API Gateway. They really shine if you're processing SQS messages or EventBridge events. If you aren't using other AWS services and aren't cost engineering, then Lambdas probably aren't worth the headache.
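The catchall shape, roughly, in CDK (handler and asset path are placeholders):

    import * as cdk from "aws-cdk-lib";
    import * as lambda from "aws-cdk-lib/aws-lambda";
    import * as apigw from "aws-cdk-lib/aws-apigateway";

    const app = new cdk.App();
    const stack = new cdk.Stack(app, "ProxyStack");

    const handler = new lambda.Function(stack, "App", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("./dist"), // hypothetical build output
    });

    // proxy: true routes ANY method on any path ({proxy+}) to the one
    // function, so the router inside your app does the real work.
    new apigw.LambdaRestApi(stack, "Api", { handler, proxy: true });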
Lambda is the most expensive way to buy compute once you're above roughly 25% utilization. Fargate is extremely close to modern on-demand EC2 pricing (the m7a family).
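Rough numbers, us-east-1 list prices last I checked (ballpark only):

    Lambda at 1,769 MB (one vCPU's worth of CPU):
      1.769 GB x $0.0000166667/GB-s x 3,600 s ≈ $0.106 per busy hour
    Fargate, 1 vCPU / 2 GB, always on:
      $0.04048 + 2 x $0.004445 ≈ $0.049/hr
    m7a.medium (1 vCPU / 4 GB) on-demand: ~$0.058/hr

So Lambda's compute rate is roughly double the always-on options before you even count per-request charges, and reserved/savings-plan discounts push the breakeven utilization down further.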
Right, running ECS on EC2, not Fargate on EC2. When ECS launched, it only had the EC2 launch type (where, as you said, you must manage your own machines). Fargate then came along for both ECS and EKS, where Amazon manages the machines for you.