Why do you need a connection to a database during the meeting? Doesn't it make more sense to record the meeting data to some local state first, and then serialize it to database at the end of the meeting or when a database connection is available? Or better yet, have a lightweight API service that can be scaled horizontally that is responsible for talking to the database and maintains its own pool of connections.
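The buffer-then-flush idea can be sketched in a few lines. This is a minimal illustration, not any particular product's design; `MeetingBuffer` and its method names are hypothetical:

```python
import json
import time

class MeetingBuffer:
    """Hypothetical sketch: accumulate meeting events in local memory,
    then serialize once at the end instead of holding a DB connection
    open for the whole meeting."""

    def __init__(self, meeting_id):
        self.meeting_id = meeting_id
        self.events = []

    def record(self, kind, payload):
        # No database round-trip here -- just an append to local state.
        self.events.append({"t": time.time(), "kind": kind, "data": payload})

    def serialize(self):
        # One document written at meeting end (or when the DB/API is
        # reachable again), instead of a write per event.
        return json.dumps({"meeting": self.meeting_id, "events": self.events})

buf = MeetingBuffer("m-42")
buf.record("join", {"user": "alice"})
buf.record("leave", {"user": "alice"})
doc = buf.serialize()
```

The flush at the end could go straight to the database, or to the lightweight API service mentioned above, which owns the connection pool and scales independently of the meeting servers.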
They probably don't even need a database anyway for data that is likely write once, read many. You could store the JSON of the meeting in S3. It's not like people are going back in time and updating meeting records. It's more like a log file and logging systems and data structures should be enough here. You can then take that data and ingest it into a database later, or some kind of search system, vector database etc.
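Treating meetings as log-like objects mostly comes down to a key layout that a later batch job can scan. A sketch, with an illustrative bucket name and key scheme (the actual upload call is commented out since it needs boto3 and credentials):

```python
import json
from datetime import datetime, timezone

def meeting_key(meeting_id, started_at):
    # Log-style layout: partition by date so later ingestion jobs
    # (database, search index, vector store) can scan one day at a time.
    return f"meetings/{started_at:%Y/%m/%d}/{meeting_id}.json"

record = {"meeting_id": "m-42", "transcript": ["hello"]}
key = meeting_key("m-42", datetime(2024, 5, 1, tzinfo=timezone.utc))
body = json.dumps(record)

# Hypothetical upload -- requires boto3 and AWS credentials:
# import boto3
# boto3.client("s3").put_object(Bucket="meeting-logs", Key=key,
#                               Body=body.encode(),
#                               ContentType="application/json")
```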
Database connections are designed this way on purpose; it's why connection pools exist. This design is suboptimal.
It took me a long time to realize this but yes asking people to just open and write to files (or S3) is in fact asking a lot.
What you describe makes sense, of course, but few can build it without it being drastically worse than abusing a database like postgres. It's a sad state of affairs.
The Copilot they have integrated into Azure is absolutely useless. Every now and then I'll get frustrated trying to find which of the thousands of menus some switch is under, and I'll ask their chatbot. It will spend a lot of time "Identifying the problem..." and "Gathering information..." only to give me links to generic help articles, hit some sort of error, or give me flat-out wrong information.
These days I try to interact with Azure through the command line and asking Claude, which works pretty well most of the time but there are some things their API cannot do and you are forced to use their crazy Azure UI. It's not as bad as the AWS console UI, but still bad.
It's amazing to me a company that spent so much and invested so much in OpenAI has such a terrible product and got almost nothing out of it. Even standard ChatGPT is way better at giving you directions on what to do than their useless Copilot.
Yup, I asked it how long an azure subscription had existed and it could not even tell me that. Literally now() minus the object’s creation date and it had no idea what to do.
Agree. In general, the whole Microsoft "Admin" panel is utter garbage. Messy, slow, with ten different interfaces. Finding something without Googling it first is impossible.
The problem is all these SaaS companies have cut costs so much that all their support has been reduced to useless offshore teams at best and a chatbot at worst. Their services do go down and don't work, and often there's simply nothing you can do. The worst offenders will seize upon the moment and force you to upgrade a support plan before they will even talk to you, even if the issue is of their own making.
Unless you're a huge customer and already paying them tons of money, expect to receive no support. Your only line of defense if something happens and you're not a whale is that some whale is upset and they actually have their people working on the problem. If you're a small company, startup, or even mid-size, good luck on getting them to care. You'll probably be sent a survey when you don't renew and may eventually be a quotient in their risk calculus at some point in the distant future, but only if you represent a meaningful mass of customers they lost.
> The problem is all these SaaS companies have cut costs so much that all their support has been reduced to useless offshore at best and at worst a chatbot.
Tremendous opportunity announcement!
If you are building a dev-focused SaaS, treat your support team exactly as they are: a key part of the product. Just like docs or developer experience, the support experience is critical.
Trouble is, it's hard to quantify the negative experience, though tracking word-of-mouth referrals or NPS scores can help.
Not to mention the fact that you trade one source of pollution for another. You think giant rockets to lift tons of equipment into space is good for the environment?
Meanwhile, in the real world, as a software developer who uses every possible AI coding agent I can get my hands on, I still have to watch it like a hawk. The problem is one of trust. There are some things it does well, but it's often impossible to tell when it will make a mistake. So you have to treat every piece of code produced as suspect and approach it with skepticism. If I could have automated my job by now and been on a beach, I would have done it. Instead of writing code by hand, I now largely converse with LLMs, but I still have to be present, watching them and verifying their outputs.
Yeah, but just look at what happened within the last 2 years. I was not convinced about the AI revolution, but I bet in another 2 years we won't be looking at the output.
Not so sure. There are idiosyncrasies now within the various models, which I suspect are the result of RLHF, and they cause side effects. I'm not sure that more attention-is-all-you-need is necessarily going to give us another step change; maybe more general intelligence, but not more focus. Possibly we also soon end up with grokked AIs on all sides, pushing their agenda whatever you asked. Gemini: "No, this won't work with Cloudflare. I created your GCP account, there you go." OpenAI: "I am certain you really wanted me to do all these other tasks and I have done them; you should upgrade your token plan." Etc. (You know how to fill in for DeepSeek and Grok already, right?)
I've been coming around to the view that the time spent code-reviewing LLM output is better spent creating evaluation/testing rigs for the product you are building. If you're able to highlight errors in tests (unit, e2e, etc.) and send the detailed error back to the LLM, it will generally do a pretty good job of correcting itself. It's a hill-climbing system; you just have to build the hill.
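The feedback loop described above can be sketched in a few lines. This is a hedged illustration, not any agent framework's API: `run_tests` and `llm_fix` are hypothetical callables you would wire to your test runner and model of choice.

```python
def fix_until_green(run_tests, llm_fix, max_iters=5):
    """Hill-climbing sketch: run the test suite, feed failures back to
    the model, repeat until the suite passes or we give up.

    run_tests: () -> (passed: bool, error_output: str)
    llm_fix:   (error_output: str) -> None  # asks the model for a patch
    """
    for _ in range(max_iters):
        passed, errors = run_tests()
        if passed:
            return True          # the hill has been climbed
        # Detailed failure output is the gradient the model climbs on.
        llm_fix(errors)
    return False                 # budget exhausted; a human should look
```

The cap on iterations matters: without it, a model stuck in a local minimum will burn tokens rewriting the same broken code forever.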
At the end of the day, you're still trusting a misogynistic man to get you from point A to point B. One drives the car as a gig worker and wears a flannel shirt; the other sits in an office at Waymo HQ and wears a Patagonia vest. Both are still part of the patriarchy and have very little interest in making sure you're safe, unless there's money to be made.
As much as I want to assume this is a trolling response, I'll pretend it is in good faith. The person you replied to is not speaking about nebulous dangers of "the patriarchy". They are talking about the risk of being verbally harassed, or physically/sexually assaulted by the driver during or directly after the ride.
There is another solution I use all the time: move deleted records to their own table. You probably don't need to do this for all tables. It allows you to not pepper your codebase with where clauses or statuses, everything works as intended, and you can easily restore records deleted by mistake, which is the original intent anyways. You can easily set this up by using a trigger at the database level in almost every database, that just works.
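The trigger approach is easy to demonstrate. Here is a minimal sketch using SQLite (via Python's built-in `sqlite3`) with illustrative table names; the same pattern works in Postgres and most other databases, with their own trigger syntax:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE users_deleted (
    id INTEGER,
    name TEXT,
    deleted_at TEXT DEFAULT CURRENT_TIMESTAMP
);

-- The trigger does the bookkeeping: every DELETE also archives the
-- row, so application queries need no extra WHERE clauses or status
-- columns -- a plain DELETE "just works".
CREATE TRIGGER archive_users BEFORE DELETE ON users
BEGIN
    INSERT INTO users_deleted (id, name) VALUES (OLD.id, OLD.name);
END;
""")

db.execute("INSERT INTO users (name) VALUES ('alice')")
db.execute("DELETE FROM users WHERE name = 'alice'")
# The live table is now empty; the archive holds the deleted row,
# ready to be copied back if the delete was a mistake.
```

Restoring is then just an `INSERT ... SELECT` from the archive table back into the live one.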
The texture of Gaussian Splatting always looks off to me. It looks like the entire scene has been textured or run through a bad, uniform film-grain filter. Everything looks a little off in an unpleasing way -- things that should be sharp aren't, and things that should be blurry are not. It's uncanny valley, and not in a good way. I don't get what all the rage is about; it always looks like really poor B-roll to me.
I use Claude daily and I 100% disagree with the author. The article reeks of someone who doesn't understand how to manage context appropriately, describe their requirements, or build up a task iteratively with a coding agent. If you have certain requirements or want things done in a certain way, you need to be explicit, and the order of operations matters a lot for how efficiently it completes the task and the quality of the final output. By default it's very good at doing the least amount of work to just make something work, but that's not always what you want. Sometimes it is. I'd much prefer that as the default mode of operation over something that makes a project out of every little change.
The developers who aren't figuring out how to leverage AI tools and make them work for them are going to get left behind very quickly. Unless you're in the top tier of engineers, I'm not sure how one can blame the tools at this point.
DTMF was designed to interoperate with human voice, and the tones were chosen on purpose to be unlikely or impossible for human voice to trigger. If there is no human voice, you don't need DTMF; you could use any number of tones. I wonder if you could use base64 or base58 with 64 or 58 unique tones and be able to send text at a reasonable rate?
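A quick back-of-the-envelope check on the rate. Each tone from an alphabet of N distinct symbols carries log2(N) bits, so at a DTMF-like pace of roughly 10 symbols per second (an illustrative figure, not a spec value), a larger alphabet buys a proportionally higher bitrate:

```python
import math

def bitrate(alphabet_size, symbols_per_sec):
    # Bits per symbol is log2 of the alphabet size;
    # multiply by the symbol rate to get bits per second.
    return math.log2(alphabet_size) * symbols_per_sec

dtmf_like = bitrate(16, 10)   # 16 DTMF symbols: 4 bits/symbol -> 40 bit/s
base64ish = bitrate(64, 10)   # 64 tones: 6 bits/symbol -> 60 bit/s
base58ish = bitrate(58, 10)   # 58 tones: ~5.86 bits/symbol -> ~58.6 bit/s
```

So going from 16 to 64 tones only buys a 1.5x improvement at the same symbol rate; packing distinguishable tones closer together (and keeping them apart in noise) is the real constraint, which is what purpose-built modem modulations solved long ago.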