
The other question is how it compares to human doctors, who also sometimes recommend unsafe or wrong treatments.


One way I tend to view ML is that, when it's wrong, it's catastrophically wrong. Because the model doesn't actually understand anything it's doing, and is simply picking an output from a probability model, you end up with the issue that a few carefully selected pixel changes in a picture of a cat take the model from "cat" to "ostrich".
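To make that concrete, here's a rough sketch of one standard way such perturbations are found, the fast gradient sign method (FGSM). It assumes PyTorch and a pretrained torchvision classifier; the image tensor and label here are placeholders, not a real cat photo:

    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a cat photo
    true_label = torch.tensor([281])                    # ImageNet class "tabby cat"

    # Take the loss gradient with respect to the *input pixels*, not the weights.
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()

    # Nudge every pixel a tiny amount (epsilon) in the direction that most
    # increases the loss; the result is visually indistinguishable from x.
    epsilon = 0.01
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    print(model(x).argmax(), model(x_adv).argmax())  # labels often differ

The point is that the attack needs no understanding of cats or ostriches, only the gradient of the model's own loss.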

The model does not see a significant difference between cat/ostrich, or cancer/cold, whereas we do; this implies that, when the model is wrong, it is likely to provide not just an incorrect treatment, but a catastrophically incorrect one.

Where a human sees a cat-like creature and, if not guessing "cat", guesses something similar to a cat (four-legged, furry, etc.), the ML model is willing to jump anywhere in the worst case.

So it's not just the rate of misdiagnosis that matters, but how far off the misdiagnoses are.


Some other questions: how much does it cost compared to training a human? Does it actually save any time or resources if its outputs must always be scrutinized?


That may be a question someone outside of an FDA approval process might ask. But if you have been through one, you understand the importance of "safety and effectiveness", and you will note there is nothing in there about "cost effective" or "marketable".



