Hacker News | davidsgk's comments

Going from GWT at my first internship to regular HTML/CSS/JavaScript at the next (before even TypeScript) was such a fresh experience.


Out of curiosity, do you use any specific version control features when writing like this, outside of history, which is just kind of there by default? It would be hilarious but amazing if you said your editors (if you have any) give you feedback in the form of pull requests.


Yeah that would be awesome but sadly they'd just be confused and annoyed by it all. I mean in my writing communities I'm the only one that uses Git, Linux, and MD. Everyone else just sticks with Google Docs, Word, or some more specialized writing apps.

I sometimes use Git branches to explore different story ideas but I mostly just use the system for basic versioning and history. I also use Git for written roleplays to store my logs and thoughts, and since MD is basically text I can use Unix tools like grep and wc.
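Something like this rough Python stand-in for running `wc -w` over a drafts folder (the folder layout and file names are just illustrative assumptions):

```python
from pathlib import Path

def word_count(folder: str, pattern: str = "*.md") -> int:
    """Total word count across Markdown files, a rough stand-in for `wc -w`."""
    total = 0
    for path in Path(folder).rglob(pattern):
        # Markdown is plain text, so a whitespace split is close enough
        total += len(path.read_text(encoding="utf-8").split())
    return total
```

On the command line, `wc -w drafts/*.md` gets you the same numbers with less ceremony.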


> I mean, it's basically inevitable, isn't it? Cultural heterogeneity is born out of isolation.

When thinking about this cynically, I feel like this has some depressing ramifications. Cultural differences (and isolation) are often born from financial disparities as well. Does this mean that the path to global cultural homogeneity is basically just gentrification on a massive scale where economic -> cultural minorities get more and more ostracized?


Trickle down globalism. I think you're onto something.


Bandwagoning onto this aside, I had really similar thoughts on both fronts. Are things really "retrograde" in Japan because they hang onto older technologies longer, or do those technologies stick around because they make sense for their circumstances and needs? We don't say walkie-talkies on security guards are retrograde (not a perfect analogy, I know).

And yep, I like to think of Seveneves as 3 books in a big trench coat, where the first 2 are fantastic and I wholeheartedly recommend them to anyone who likes some solid sci-fi.


Agree with your question...

"Older" is a relative measurement of time, but doesn't say anything about the marginal benefit or cost.

An example of your question I see from Japan relates to payment systems. People often critique Japan for its cash-oriented systems and have also observed the difficulty digital payment systems have had in gaining traction there... one reason for the poor adoption is the existence of earlier, well-adopted solutions that provide many of the same benefits in the majority of practical scenarios (e.g. stored-value IC cards: Suica/Pasmo/etc.). In cases like this, the "newer" and supposedly better technology doesn't provide as large a marginal benefit. The older one already solved much of the problem.


This discussion was posted in May 2012, more than 12 years ago. It's interesting to think about the landscape of platforms over time. Note that Jamie Zawinski has stated that this was less a statement on copycats and more on platformization: https://en.wikipedia.org/wiki/Jamie_Zawinski#Zawinski's_Law


> Dokku is great, but historically it didn't really handle resilience.

Would you mind elaborating a bit on this? I'm exploring some serverless options right now and this would be useful info. Do you mean it's not really designed out of the box for resilience, or that it fails certain assumptions?


I'm not the person you're responding to, but I believe I can answer that question as well.

Dokku essentially just started a container. If your server went down, so did that container, because it was basically a single process.

Other PaaS providers usually combine it with some sort of clustering like k3s or Docker Swarm, which gives them failover and scaling capabilities (which Dokku historically lacked). I haven't tried the k3s integration myself, so I can't speak to how it is nowadays.


Yea, this. Dokku was basically a single-server thing. If that box dies, your site goes down until you launch it on a new box. That might not be a huge deal for smaller sites. If my blog is down for a day, it's not a big deal.

With a cluster, if a server goes down, it can reschedule your apps on one of the other servers in the cluster (assuming that there's RAM/CPU available on another server). If you have a cluster of 3 or 5 boxes, maybe you lose one and your capacity is slightly diminished, but your apps still run. If your database is replicated between servers, another box in the cluster can be promoted to the primary and another box can spin up a new replica instance.

Dokku without a cluster makes deploys easy, but it doesn't help you handle the failure of a box.


Yeah the k3s scheduler is basically "we integrate with k3s or BYO kubernetes and then deploy to that". It was sponsored by a user that was migrating away from Heroku actually. If you've used k3s/k8s, you basically get the same workflow as Dokku has always provided but now with added resilience.

Note: I am the Dokku maintainer.


Ah gotcha, thanks for the insight!


One might say it's also pretty petty to call out a casual usage of a notation being used in a way that people in the thread are understanding just fine...


I think it’s helpful because it is a common mistake.


An amazing (if a bit flowery) read that highlights more arcane stuff you can do with the TS type system: https://www.richard-towers.com/2023/03/11/typescripting-the-...


That was fantastic! The end got me good.


I agree. If it's available, I always appreciate a toast + notification tray combo where you get non-blocking feedback on successes but you can also keep track of any past messages.


I have a question for folks working heavily with AI black boxes related to this: what methods do companies use to test the quality of outputs? Testing the integration itself can be treated pretty much the same as testing around any third-party service, but what I've seen is some teams using models to test the output quality of other models... which doesn't seem great instinctively.


Take this with a grain of salt because I haven't done it myself, but I would treat this the same as testing anything that has some element of randomness.

Say you're writing a random number generator that generates numbers between 0 and 100. How would you test it? Throw your hands up in the air and say nope, can't test it, it's not deterministic? Or maybe you can just run it 1000 times and make sure all the numbers are indeed between 0 and 100. Maybe count up the number frequencies and verify they're roughly uniform. There's lots of things you can check for.
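That sanity check might look something like this minimal sketch (the sample size and tolerance are just illustrative):

```python
import random
from collections import Counter

def rng() -> int:
    # stand-in for the generator under test
    return random.randint(0, 100)

def check_rng(samples: int = 10_000) -> None:
    draws = [rng() for _ in range(samples)]
    # every draw is in range
    assert all(0 <= n <= 100 for n in draws)
    # roughly uniform: each of the 101 values should appear near samples/101 times
    counts = Counter(draws)
    expected = samples / 101
    assert all(abs(c - expected) < expected * 0.5 for c in counts.values())

check_rng()
```

You're not proving the generator is correct, just pinning down properties that should never change between runs.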

So do the same with your LLMs. Test it on your specific use-cases. Do some basic smoke tests. Are you asking it yes or no questions? Is it responding with yes or no? Try some of your prompts on it, get a feel for what it outputs, write some regexes to verify the outputs stay sane when there's a model upgrade.
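A minimal sketch of what those regex smoke tests could look like. `call_model` is a hypothetical stand-in for whatever client you actually use, stubbed here with a canned reply so the example runs:

```python
import re

def call_model(prompt: str) -> str:
    # hypothetical stand-in for your real LLM client call; stubbed for illustration
    return "Yes, the order has shipped."

# a shape check that should survive a model upgrade:
# yes/no prompts must get answers that start with yes or no
YES_NO = re.compile(r"^(yes|no)\b", re.IGNORECASE)

def smoke_test_yes_no(prompts: list[str]) -> list[str]:
    """Return a list of prompt -> answer pairs that failed the shape check."""
    failures = []
    for prompt in prompts:
        answer = call_model(prompt).strip()
        if not YES_NO.match(answer):
            failures.append(f"{prompt!r} -> {answer!r}")
    return failures

failures = smoke_test_yes_no(["Has order 123 shipped? Answer yes or no."])
assert not failures, failures
```

Run the same prompts after every model swap; the regexes catch format drift even when you can't assert on exact wording.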

For "quality" I don't think there's a substitute for humans. Just try it. If the outputs feel good, add your unit tests. If you want to get scientific, do blind tests with different models and have humans rate them.

