The process described in the article is just blindly checking the boxes of what passes for a design process these days. The gurus say to interview customers, so they've done just that without really understanding why. Given it's AI, it's also possible the whole thing is made up and someone just tweaked the design over an afternoon and shipped it.
99% of normies aren't paying for ChatGPT; there's a reason they're pushing hard for corporate welfare + government contracts. They're unable to sell to consumers, so now they're selling to governments while trying to lock in contracts that successors can't easily dismantle.
> Model T to 2026 Camry was an amazing shift without really changing the combustion engine.
A lot of the time, the big jumps in internal combustion engine development have come down to improvements in materials science or manufacturing capability.
The underlying thermodynamics and theoretical limits have not changed, but the individual parts and the materials we make them from have steadily improved over time.
The other factor is the need for emissions reduction strategies as an overriding design factor.
The analogues to these two in LLMs are:
1. Harnesses and agentic-systems-focused training have gotten better over time, so performance has increased without a step change in the foundation models.
2. The requirements for guardrails, anti-prompt-injection measures, and other safeguards to make LLMs palatable for use by consumers and businesses.
MacBook Pro. And it’s only an M1 - not an M1 Pro or M1 Max, but a base M1, as you were abundantly clear about. And a maximum memory of 16 GB, straight from the MacTracker app.
It pays to carefully proofread what you write before you submit a post. You never mentioned your M1 being a Pro or Max, only an M1. It was your MacBook that was a MacBook Pro, not the M1 chip itself.
I went looking at the latest line of Apple computers after reading this thread, and I noticed they force you onto the higher-end CPUs in order to get the higher amounts of unified memory.
So not only are they content charging +$400 or +$600 for RAM, which is in itself ludicrously overpriced, they force you into +$1,000-2,000 upgrades for the top CPUs.
It's impossible to spec a MacBook Pro or a Mac mini with a base CPU and a decent amount of RAM. Total scam, since they know people want the RAM for local LLMs.
This was not always the case - when I specced out my MacBook Pro M1 16 GB, it was entirely possible to get 32 or 64 GB without any tie-in to CPU upgrades.
I was ready to drop a few grand on a new MacBook Pro with an M5 or M4 Pro and a decent amount of RAM, but it's currently set up to be an insane price gouge.
To get 32 GB of RAM you need the M5 chip, priced at $1,999.
To get 64 GB of RAM you are forced to grab the M4 Max CPU, and it's $3,899 on Apple's site right now. What a scam.
They're just limiting the range of SKUs they have to manufacture. For all we know, the base M-series die might not even support that larger amount of in-package memory to begin with.
Both were better in terms of user ergonomics and also much faster than what we used before: Black, mypy, pylint, etc.
IDK if TypeScript is too generic, but TS made me think of JavaScript as a proper language and not just something for silly little animations on websites. To me, types unlock proper data modeling and application code.
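For a concrete flavor of the data modeling I mean (a made-up sketch, not from any particular codebase): a discriminated union lets the compiler know exactly which fields exist in which state, so impossible combinations like "error with data" simply don't type-check.

```typescript
// Each state carries only the fields that make sense for it.
type RequestState =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: string[] }
  | { status: "error"; message: string };

function describe(state: RequestState): string {
  // Switching on the discriminant narrows the type in each branch,
  // so state.data is only accessible in the "success" case.
  switch (state.status) {
    case "idle":
      return "Nothing requested yet";
    case "loading":
      return "Fetching...";
    case "success":
      return `Got ${state.data.length} items`;
    case "error":
      return `Failed: ${state.message}`;
  }
}
```

You can't pull that off with "silly animation" JavaScript; the compiler even warns you if you forget a branch.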
Figma and FigJam are really great tools for design and planning. I didn't like Miro's pricing model, and tools we used before, like Sketch, weren't as good.