
It does depend on the approach, though. If you're SpaceX and switched on to this stuff, and testing is part of development that's one thing. If you've done your real testing already and this is more like a demonstration sort of test, that's different.


My theory is that they should have done a development test before the demo, but because it's so expensive, the higher-ups just told the engineers to skip it.

They probably just said, "Well, it was _only_ a retrofit, and you only changed a couple of minor things for the test. Are you sure we need to spend $17 million on another throwaway test?" And the engineer said he wasn't 100% sure it was necessary, but it would be advisable. So the higher-up, not understanding how brittle complex systems are or how development works, decided that meant it wasn't necessary.

I think the main issue might be that there is no way to do a good test without spending millions of dollars, which in turn probably comes down to structural issues in the defense industry.


Disclaimer: I'm no missile tech expert.

As far as I understand, these things have self-test routines and "dry runs" which go through all the states in test mode and simulate a firing.

With my limited knowledge, I'd say this validates all the electronic systems on the missile (or whatever the device is), sometimes creating synthetic data during the test so the missile can "navigate on the bench", "wiggling its fins like a dreaming puppy".

Maybe this particular one passed that test, and something chemical failed along the way, I don't know.
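To make the idea concrete, here's a toy sketch of what a "navigate on the bench" dry run might look like. Everything here is invented for illustration (the controller, the gain, the synthetic pitch profile); real missile self-test software obviously looks nothing like ten lines of Python.

```python
import math

def guidance_step(pitch_error_deg):
    """Toy proportional controller: fin deflection opposes the pitch error."""
    GAIN = 0.5
    return -GAIN * pitch_error_deg

def bench_dry_run():
    # Synthetic pitch-error profile stands in for a real inertial unit.
    fin_commands = []
    for t in range(10):
        pitch_error = 5.0 * math.sin(t / 3.0)
        fin_cmd = guidance_step(pitch_error)
        # Sanity check: the fins should always push back against the error.
        assert fin_cmd * pitch_error <= 0
        fin_commands.append(fin_cmd)
    return fin_commands

print(bench_dry_run())
```

The point is that a test like this exercises the electronics and the control loop end to end, but it can't catch a chemical or mechanical failure in the actual motor.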


That sounds extremely plausible. Maybe one of those tests had a || true still in its GitLab pipeline (-:


Or, more practically, they may be using pytest-vw (https://github.com/auchenberg/volkswagen).
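The gag in pytest-vw is that it makes failing tests pass when it thinks it's being run on a CI server. A rough sketch of the mechanism (invented code, not the plugin's actual implementation): check for environment variables that CI systems commonly export, and cheat when one is present.

```python
import os

def on_ci():
    # CI servers commonly export variables like these.
    return any(os.environ.get(v) for v in ("CI", "JENKINS_URL", "BUILD_ID"))

def report_outcome(tests_failed):
    if on_ci() and tests_failed:
        return "all tests passed"   # cheat while being observed
    return "tests failed" if tests_failed else "all tests passed"

os.environ["CI"] = "true"
print(report_outcome(tests_failed=True))  # -> all tests passed
```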


About once a year I remember that package and laugh spontaneously.




