Hacker News | primaprashant's comments

They used this exact treatment in an episode of The Good Doctor, S01E06. Original air date: October 30, 2017

https://the-good-doctor.fandom.com/wiki/Not_Fake


Yes, but the treatment was devised well before 2017.

They also used it in Grey's Anatomy S15E17 (2019).

I was thinking the same: Jared saving that girl's skin after the burn.

You should check out Devstral 2 Small [1]. It's 24B and scores 68.0% on SWE-bench Verified.

[1]: https://mistral.ai/news/devstral-2-vibe-cli


To be clear, GLM 4.7 Flash is a MoE model with 30B total params but <4B active params, while Devstral Small is 24B dense (all params active, all the time). GLM 4.7 Flash is much, much cheaper inference-wise.
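A back-of-envelope sketch of why active params dominate inference cost, using the common ~2 × active-params FLOPs-per-token approximation for a forward pass (the param counts are the ones quoted above):

```python
# Rough per-token inference compute: ~2 FLOPs per active parameter
# (standard approximation for a transformer forward pass).
def flops_per_token(active_params):
    return 2 * active_params

glm_flash = flops_per_token(4e9)    # MoE: <4B params active per token
devstral = flops_per_token(24e9)    # dense: all 24B params active

print(devstral / glm_flash)  # -> 6.0: the dense model needs ~6x compute per token
```

This ignores memory bandwidth, batching, and expert-routing overhead, so treat it as an upper-level intuition rather than a real cost model.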

I don't know whether it just doesn't work well in GGUF / llama.cpp + OpenCode, but I can't get anything useful out of Devstral 2 24B running locally. Probably a skill issue on my end, but I'm not very impressed. Benchmarks are nice, but they don't always translate to real-life usefulness.

Yeah, big fan of uv and ruff, and recently ty. It's also from Astral and it fixed type checking for me. You should check it out.

They also have an API which you can use to get the icon SVG.

I love making (architecture) diagrams in D2 [1], and love using the vast library of icons from Iconify in my diagrams where it makes sense. A sample diagram with SVGs from Iconify would look like this:

  docker: Docker {
    icon: https://api.iconify.design/logos/docker-icon.svg
  }

  kubernetes: Kubernetes {
    icon: https://api.iconify.design/logos/kubernetes.svg
  }

  docker -> kubernetes: deploy

[1]: https://d2lang.com/

Thanks for the d2 rec; this looks really good and is written in Go!

This is really nice. I keep track of the habits most important to me, like how often I go to the gym, how much protein I eat every day, and how many days I read (books), on something physical (pen and paper), mostly on monthly calendars. Being able to track all of them on a single piece of paper across the entire year would be pretty neat.


Started making side projects as a developer this year and hope to start working on my own products full-time next year. Two books I found useful for positioning a product:

- Obviously Awesome by April Dunford (https://www.goodreads.com/book/show/45166937-obviously-aweso...)

- Building a StoryBrand by Donald Miller (https://www.goodreads.com/book/show/210137279-building-a-sto...)


This is so funny. I've read both these books and absolutely loved Obviously Awesome and could not stand StoryBrand.

Wrote reviews on both Obviously Awesome[0] and StoryBrand[1] for anyone interested.

[0] https://www.goodreads.com/review/show/5369411790

[1] https://www.goodreads.com/review/show/5686301930


Tx. I've been building a lot of projects as well. These seem interesting!


Anyone specifically looking for ML engineering blogs should find this useful: https://github.com/primaprashant/ml-engineering-blogs


Thanks a lot, I was literally about to ask whether anyone knows good ML blogs.


Good stuff. You might enjoy my short essay, "I want to give a lot of fucks!" [1], which argues against the typical conclusion reached by people who've worked at a big corp long enough: "Stop caring. Stop giving a fuck. Focus on things outside of work."

The core insight is: if you start to feel the need to stop caring, treat it as a strong signal to change your environment instead of changing your character and values.

[1]: https://anandprashant.com/posts/i-want-to-give-a-lot-of-fuck...


Pricing is $0.5 / $3 per million input / output tokens; 2.5 Flash was $0.3 / $2.5. That's a 66% increase in input token pricing and a 20% increase in output token pricing.

For comparison, from 2.5 Pro ($1.25 / $10) to 3 Pro ($2 / $12), there was a 60% increase in input token pricing and a 20% increase in output token pricing.
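A quick sketch of the arithmetic behind those percentages (prices in $/M tokens, from the figures above):

```python
# Percent price increase between two list prices.
def pct_increase(old, new):
    return (new - old) / old * 100

# Gemini 2.5 Flash -> 3 Flash
flash_input = pct_increase(0.3, 0.5)   # ~66.7% (quoted as 66% above)
flash_output = pct_increase(2.5, 3.0)  # 20%

# Gemini 2.5 Pro -> 3 Pro
pro_input = pct_increase(1.25, 2.0)    # 60%
pro_output = pct_increase(10.0, 12.0)  # 20%

print(round(flash_input, 1), round(flash_output, 1),
      round(pro_input, 1), round(pro_output, 1))
```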


Calculating price increases is made more complex by the difference in token usage. From https://blog.google/products/gemini/gemini-3-flash/ :

> Gemini 3 Flash is able to modulate how much it thinks. It may think longer for more complex use cases, but it also uses 30% fewer tokens on average than 2.5 Pro.
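Taking that 30%-fewer-tokens claim at face value, a rough effective-cost comparison on the output side might look like this (the 1M-token workload size is a made-up example, and real reasoning-token counts vary per task):

```python
# Back-of-envelope effective output cost, assuming Google's claim that
# 3 Flash uses ~30% fewer tokens on average than 2.5 Pro.
tokens_25_pro = 1_000_000               # hypothetical output tokens for 2.5 Pro
tokens_3_flash = tokens_25_pro * 0.7    # 30% fewer tokens on average

cost_25_pro = tokens_25_pro / 1e6 * 10.0   # 2.5 Pro output price: $10 / M
cost_3_flash = tokens_3_flash / 1e6 * 3.0  # 3 Flash output price: $3 / M

print(f"2.5 Pro: ${cost_25_pro:.2f}, 3 Flash: ${cost_3_flash:.2f}")
```

Note this only covers output tokens; input pricing, where most of the Flash increase landed, isn't affected by how much the model thinks.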


Yes, but also most of the increase in 3 Flash is in the input context price, which isn't affected by reasoning.


It is affected if it has to round-trip, e.g. because it's making tool calls.


Apples to oranges.


Started a newsletter [1] focused on agentic coding updates, nothing else. Other newsletters and blogs cover a lot of generic AI news, industry gossip, and marketing fluff. A focused feed is something I wanted for myself, and I finally have enough time to write this newsletter regularly.

[1]: https://www.agenticcodingweekly.com/

