Hacker News | mstratman's comments

What worked best in my experience is having a roundtable discussion among all the interviewers about their experience and any concerns or objections they have, i.e. a soft veto.

The hiring manager had the ultimate say, but he or she had to acknowledge, address, and own any objection, and take personal responsibility for it if the candidate was hired.

When you have a good team who is earnestly focused on preserving a good team, it works really well. Red flag objections were pretty much never ignored, and were effectively vetoes. Minor objections or slight concerns about experience or aptitude, if they were overridden, prompted hiring managers to take extra time and care to help ensure the new hire's success.


While it's more about the flaws in how scientific consensus is used with respect to areas that are (or should be) debatable, the video does a good job of introducing some of the problems with the overlap of pharmaceutical and public health agendas.

I'd love to see a followup that takes this idea and looks at a different angle, such as the ways scientific "consensus" was invented and propagated to benefit political agendas, or was shaped by a lack of respect for the public's ability to deal with information and nuance (e.g. simplifying the situation and distilling complex factors into a one-size-fits-all prescription for behavior).


Reason TV did an interesting video recently looking at several angles of homelessness: https://www.youtube.com/watch?v=gcZhmUfDePE

There are a lot of things you can take from it, but one overarching opinion is that "housing first" gets in the way of helping those who are down on their luck and find themselves hopefully-temporarily without a home (as opposed to those who cannot or will not work to change their situation).

It makes the case that you need multiple approaches to deal with vastly different homeless situations.

Check it out.


Those are usually straw bales from what I've seen.


As a non-farmer, I honestly don't know the difference between hay and straw.


Also not a farmer, but hay has seeds. I think straw is technically any dead plant stems.

You can cover your new grass seed with straw to protect it from birds. If you try that with hay you'll plant hay.


Son of a farmer, living in a farmer community. Alfalfa hay doesn't have seeds unless you harvested it waaay too late, and then it's probably worthless.

I've only ever heard of straw referring to wheat straw - the wheat stems (and grainless heads) left in the field after harvesting.

Hay, on the other hand, is the whole plant (well, not the roots) of alfalfa (a legume), or brome (a grass) or prairie hay (mix of native grasses).

This may be slightly different in other areas, but the above is how it is in Kansas.


Hay is for eating (it's stored grass), straw is for other uses.


Most small businesses require the owner to take on significant financial risk, or to work for free for years.

Each growth stage presents tough choices of additional risk, as well.

An employee doesn't shoulder any of this. There is a world of difference!


Limits on property ownership are a very different topic than taxation or crime.

They are treated fundamentally differently in law, economics, and philosophy.


Taxation is basically a limit on property ownership. The inheritance tax comes to mind (as does the estate tax, although those are not well established everywhere in the world), but income taxation also limits what you can earn as a person and therefore what you can own.

And taxation is not the only parallel one could draw. For example, the usage of your property is also highly regulated in various ways. Usage and ownership are very closely related.


You need a large amount of capital for it to do enough good for others (e.g. via investments into companies) that the returns you receive are enough to live on.


Most wealth already is in the economy, providing capital to companies and jobs to their growing workforces.

It's not sitting around collecting dust.


Any time the first line of a project description reads "This is NewTool, written in <language>" it's an immediate red flag that makes it hard for me to continue reading.

Obviously it doesn't necessarily mean the project has no merits, but language choice is one of the less relevant details and it signals that the author wanted to make a toy in a language new to them. Good for them, but most of the time the rest of the world doesn't care.

If the project has merits, and this one might, it should talk about them in the intro.


> most of the time the rest of the world doesn't care

Much of the time, however, HN readers will. There's the odd software developer hanging about here, some of whom are interested in programming languages, or so I believe.


Rust isn't new to Turner (the author); he was on the Rust language team at Mozilla.


Mostly agreed, but there are certain languages I'll give a pass, given the context. A good shell written in Rust is intriguing.

I don't think I want the fad to go away, because it can even help me know what to steer clear of (e.g. "Introducing a new shell written in JS!").


> Sure for some things, like elective surgeries (lasik for example) or routine dental care, people can shop around, and prices will reach a sane level. But the majority of healthcare does not work like that.

This is precisely because there are no health "insurance" middlemen purchasing those services. The market drives those costs down.

If - like in pre-WW2 America - instead people saved up for the inevitable doctors visits and paid out of pocket directly to the doctors and hospitals, costs would be FAR lower both due to competition and price sensitivity. This is the fundamental problem with using health "insurance" for expected costs, rather than just unpredictable emergencies.


"If - like in pre-ww2 America - instead people saved up for the inevitable doctors visits and paid out of pocket directly to the doctors and hospitals, costs would be FAR lower both due to competition and price sensitivity."

So, like, real question here... How would that work if you're poor, chronically ill, have cancer, need an organ replacement, have HIV, etc.? How do I know if I'm getting my money's worth, given that I don't have a medical education? What happens if I can't choose which services I consent to because I've been rendered incapacitated by a medical emergency?


Yes but it is also because those specific types of treatments respond well to market forces: they are common enough for there to be a lot of competing providers, and if the price is too high, people can elect not to receive those services.

For a counterexample, look at dental surgery. This is also often not covered by insurance, but it's much more expensive, and low-income individuals often take on debt and risk their financial security to undergo these procedures because the alternative is often a severely degraded quality of life. The difference is that the demand is much less flexible, so providers can get away with higher prices. Many healthcare procedures fall into this category.


And what if you happened to not be able to save? Or if something was too expensive for your savings? Or if you chose the wrong savings vehicle and the stock market tanked just when you needed that new hip? How much should you save? How do you know?

Pre-WW2 America isn't a place most people would want to return to in terms of healthcare. It was far far less advanced, people died much more easily and you may recall a certain Great Depression which made it so those savings weren't quite as reliable as people hoped.

Oh and when you go to the ER after getting hit by a bus, are you going to call around for the best deal?

You might argue that's what insurance is for, but now you end up back where we are today, with a middle-man paying for things. Things that are essentially guaranteed to happen but unpredictable in their magnitude.

I'm not saying we shouldn't have more market information, but it's just incorrect to say the market will solve this. Serious medical problems are a "your money or your life" situation; pure unregulated markets are going to really struggle in this area.


>Pre-WW2 America isn't a place most people would want to return to in terms of healthcare.

I dunno, the differences weren't huge, and at least people didn't go bankrupt (data not from the US, but close).

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2625386/

"Life expectancy of mature women, taken from Hollingsworth [8] and OPCS data for England and Wales:

  Date        Life expectancy at 15 (years)
  1480–1679   48.2
  1680–1779   56.6
  1780–1879   64.6
  1891        61.6
  1901        62.6
  1911        66.4
  1921        68.1
  1951        73.4
  1961        75.7
  1971        76.8
  1981        78.0
  1989        79.2"


That seems to indicate a 16% increase in life expectancy between the last pre-WW2 data point and the most recent data point. That's pretty huge, especially when you consider that those are averages, meaning that the disparity is made up not so much of old people dying older, but more of young people not dying from illnesses that are now preventable.

As a specific and personal data point, I have chronic kidney disease, which developed rapidly when I was 25. With pre-WW2 medicine (no transplants, no dialysis), I might have lived to 26, maaaaybe, and even then only if I'd been able to control my blood pressure long enough for renal failure to set in instead of a stroke or heart attack.

Thanks to modern medicine, I'm now nearing 40 with a good prognosis; there's no reason to believe it will impact my life expectancy, and the impact on my quality of life is relatively minimal. Compared to dying in my mid-20s, that's a vast world of difference. I will not willingly go back to pre-WW2 medicine, thank you very much.


Not to derail whatever point you're trying to make, but it's important to note that if you use WW2 as the midpoint, you're splitting the data between a pre-penicillin world and a post-penicillin world. If anything, I'm amazed that it's only a 16% increase after such a massive change.


Yup, that's because it's about averages.

To take a contrived and simplified example, imagine a hypothetical world in which everybody dies on their 90th birthday by default, but a handful of diseases cause 25% of the population to die on their 25th birthday instead. That brings the average life expectancy to 73.75y.

Now imagine a bunch of medical breakthroughs bring that early mortality rate down from 25% to 7%. That enormous difference brings the average life expectancy to 85.45y.

That's an increase in life expectancy just shy of 16%, despite the fact that it comes from a 72% reduction in early mortality rates.

That's obviously a massive oversimplification, and real distributions look different, but it illustrates the problem with just looking at average life expectancies; it makes it look like everyone's life span has just been scaled up by 16%, which is not the case. If you look at it from the perspective of, "what are the chances my life will be cut short by some preventable disease?", the magnitude of the change is much greater. Some of that change is due to very cheap innovations like bread-mold-as-antibiotic, and some of it is due to very expensive innovations like organ transplants. Either way, I for one am glad those days are over, despite the appealing economic simplicity.
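As a quick sanity check on the hypothetical above, the averaging can be sketched in a few lines of Python. (Note the quoted figures of 73.75y and 85.45y fall out of an illustrative early death age of 25, an assumption of this sketch, with everyone else dying at 90.)

```python
def life_expectancy(early_death_age, early_rate, default_age=90):
    """Population-average age at death for a two-point distribution:
    a fraction early_rate dies at early_death_age, everyone else at default_age."""
    return early_rate * early_death_age + (1 - early_rate) * default_age

before = life_expectancy(25, 0.25)  # 0.25*25 + 0.75*90 = 73.75
after = life_expectancy(25, 0.07)   # 0.07*25 + 0.93*90 = 85.45

print(f"average lifespan: {before:.2f} -> {after:.2f}")
print(f"rise in the average: {after / before - 1:.1%}")   # just shy of 16%
print(f"drop in early mortality: {1 - 0.07 / 0.25:.0%}")  # 72%
```

A modest ~16% bump in the average thus coexists with a 72% cut in early deaths, which is the commenter's point about averages hiding the real shape of the change.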

