I feel like there should be some takeaway from the fact that we have to come up with new and interesting metrics like “Length of a Task That Can Be Automated” in order to point out that exponential growth is still happening. Fwiw, it does seem like a good metric, but it also feels like you can often find some metric that’s improving exponentially even when the base function is leveling out.
It's the only benchmark I know of with a well-behaved scale. Benchmarks with, for example, a score from 0-100% get saturated quite quickly, and further improvements on the metric are literally impossible. And even excluding saturation, they just behave very oddly at the extremes. To use them to show long-term exponential growth you need to chain together benchmarks, which is hard to make look credible.
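To make the contrast concrete, here is a toy model (my own illustration, not drawn from any real benchmark data): an underlying capability that doubles every year, viewed through a bounded 0-100% score and through a task-length-style metric.

    # Toy model (illustrative only): capability doubles yearly; a bounded
    # benchmark saturates while a task-length metric keeps growing.
    import math

    for year in range(8):
        capability = 2 ** year  # hypothetical "true" ability
        # Bounded 0-100% benchmark: logistic in log-capability, flattens out fast.
        score = 100 / (1 + math.exp(-(math.log2(capability) - 3)))
        # Task-horizon metric: proportional to capability, no ceiling,
        # so it stays a clean exponential over a long time axis.
        horizon_minutes = 5 * capability
        print(f"year {year}: benchmark {score:5.1f}%  task horizon {horizon_minutes:6.0f} min")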
In a similar vein, I always found it interesting (although frightening) that rabies causes hydrophobia. The theory is that drinking water can wash the virus out of your saliva, inhibiting its ability to spread through bites.
It makes sense that a virus passed through saliva would evolve like this, but I just find it particularly unsettling when a pathogen can affect higher-level behaviors like drinking water (or jumping into water, for mantises).
The frightening part is that it’s a cognitive effect. That’s crazy. And it opens the whole “how much of our personality is real versus controlled by microbes” question.
Imagine if there were a virus or parasite that just made you feel pleasure, all the time, with no tolerance effects.
I wonder what progress has been made in addiction medicine on meds that simply prevent the development of tolerance? If possible, it would fall under the category of harm reduction. Failing getting the patient sober, they could at least continue getting high on the same amount, which might keep them able to function.
You'd have to figure out how to continuously produce dopamine and serotonin, or replicate their effects from the perspective of pleasure. Pretty tall order since they have multiple purposes inside you. Trillion dollar idea though.
I was more suggesting that if the receptors could be targeted (I have no idea how, just spit-balling) by another agent, then tolerance would perhaps not occur. The addict/user would still need the original drug.
Receptor downregulation plays a key role in maintaining homeostasis in normal brain function, so attempting to interfere with that process is playing a dangerous game. It is in theory possible, though, since the effects of receptor activation are separate from the downregulation process, even though the two are linked.
When a neuron's receptors get strongly activated, that neuron can withdraw receptors from its surface into the interior of the cell (a process called internalisation), and from there either digest the receptors (downregulation) or move them back to the surface of the cell, where they resume their typical function (resensitisation). Those processes are potential targets for a tolerance-mitigating drug.
The tricky part is that these are very fundamental processes across all neurons, and it would be very hard to target, say, dopaminergic receptors in the ventral tegmental area to nucleus accumbens pathway (the "reward circuit") without also affecting neurons across the entire brain.
The best cure for tolerance is taking a break :) easier said than done, I know.
I appreciate harm reduction but I think any such 'perfect' drug would lead to dehydration / starvation deaths, or at least a lot more people living on the streets.
> I always found it interesting (although frightening) that rabies causes hydrophobia.
Well, there are two potential senses of "hydrophobia".
In its primary use, it means "rabies", and it's not really interesting that rabies would cause that.
In rare cases, it could mean "fear of water", which rabies doesn't cause. Rabies causes pain when swallowing. The pain causes fear through conventional mechanisms.
I have not checked the sources, but according to Wikipedia [1]:
> Rabies has also occasionally been referred to as hydrophobia ("fear of water") throughout its history. It refers to a set of symptoms in the later stages of an infection in which the person has difficulty swallowing, shows panic when presented with liquids to drink, and cannot quench their thirst. Saliva production is greatly increased, and attempts to drink, or even the intention or suggestion of drinking, may cause excruciatingly painful spasms of the muscles in the throat and larynx. Since the infected individual cannot swallow saliva and water, the virus has a much higher chance of being transmitted, because it multiplies and accumulates in the salivary glands and is transmitted through biting.
I feel like calling "shows panic when presented with liquids to drink" a fear of water is a perfectly fine shorthand, even if it might not be a literal fear of all forms of water, only of water you are supposed to drink.
Sure, I don't know how it works physiologically...
But anecdata at least suggests that being in enough pain can cause panic, though perhaps indirectly: the fear forms around the sense that the pain will never end, or even lessen a bit.
My leg has been fucked for 15 years. Sometimes it hurts so bad, I’d need narcotics to make it go away. I don’t panic when walking, I just deal with it because I need to get to my destination. If you are thirsty, you will drink through the pain. Panic is something else.
I think you've gotten fear confused with panic. Fear is indeed a learned behavior; panic is not. Panic is beyond rational thinking. It's what gets people killed trying to save drowning swimmers. Panic is what gets normal humans killed in bad situations. Panic is not learned; it saves your ass at all costs -- or gets you killed.
They’re just wrong. Neither panic nor fear is learned behaviour. What one panics about or fears is in part learned. But there is still a lot of instinct at play.
If I come into the room and stab you with a steak knife every time you drink, and sometimes even if you only think about drinking, you will definitely panic when drinking is brought up in the future, after some time.
Not sure what's wrong with you that you cannot empathize.
As someone who has most definitely been in more shit than most humans on this planet, I can empathize just fine.
As I mentioned in a sibling comment, I think you've confused fear with panic. Fear can be conditioned, panic cannot. You can panic from fear, but it is not a guaranteed thing, and often, that panic is long after the fear is gone (aka PTSD).
Panic is an autonomic response to saving yourself at all costs. It is not something you "learn" or have "conditioned" into you, and if so, definitely not over the course of a few weeks that you have a virus; otherwise we'd all be dead from Covid and go into a panic every time we cough.
Panic is what causes you to drown a person saving you, so that you can breathe. Panic is what causes you to over-correct and steer into a tree. Panic is what causes you to run out of your house, in the middle of winter in pajamas, because there was a spider. Panic has a cause, but it is mindless with the only goal of saving oneself. The action itself is often quite stupid-looking, in hindsight and lack of context.
Most people have never seen a person panic, first-hand. Most people have never panicked. Today's world is largely safe, so it is easy to confuse fear with panic.
> symptoms can include slight or partial paralysis, anxiety, insomnia, confusion, agitation, abnormal behavior, paranoia, terror, and hallucinations
There is more than just pain here. The virus changes the host behavior, making it more aggressive, so it is very possible that it also promotes a panic reaction to pain.
Without a big genetic assay, isn't it virtually impossible to know whether the trait (hydrophobia) persisted due to the symptom itself rather than a correlated advantageous mutation that brought hydrophobia along as a happy coincidence?
If we need to continue the flawed math analogy: evolution has always done pretty imprecise cocktail-napkin math, even if it has been wildly successful at it.
NCCN guidelines and Cochrane Reviews serve complementary roles in medicine - NCCN provides practical, frequently updated cancer treatment algorithms based on both research and expert consensus, while Cochrane Reviews offer rigorous systematic analyses of research evidence across all medical fields with a stronger focus on randomized controlled trials. The NCCN guidelines tend to be more immediately applicable in clinical practice, while Cochrane Reviews provide a deeper analysis of the underlying evidence quality.
My main goal here was to show what you could do with any set of medical guidelines that was properly structured. You can choose any criteria you want.
Hey author here! Appreciate the feedback! Agreed on importance of portability and durability.
I'm not trying to build this out or sell it as a tool to providers. Just wanted to demo what you could do with structured guidelines. I don't think there's any reason this would have to be unique to a practice or emr.
As sister comments mentioned, I think the ideal case here would be if the guideline institutions released the structured representations of the guidelines along with the PDF versions. They could use a tool to draft them that could export in both formats. Oncologists could use the PDFs still, and systems could lean into the structured data.
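As a rough illustration of what "structured" could mean here (an invented shape and invented field names, not any actual NCCN or CAP schema), a single guideline decision point might look something like this, with the PDF remaining the human-readable view:

    # Hypothetical sketch of a machine-readable guideline node (invented
    # structure for illustration; not a real NCCN/CAP schema).
    guideline_node = {
        "id": "example-adjuvant-001",                  # made-up identifier
        "criteria": {                                   # structured eligibility
            "stage": ["IB", "II", "IIIA"],
            "resection": "complete",
            "ecog_max": 1,
        },
        "recommendation": "consider adjuvant platinum-based chemotherapy",
        "evidence_category": "2A",
        "source_pdf_section": "pointer into the human-readable PDF",
    }

    def matches(patient: dict, node: dict) -> bool:
        """Toy eligibility check a system could run against structured criteria."""
        c = node["criteria"]
        return (patient["stage"] in c["stage"]
                and patient["resection"] == c["resection"]
                and patient["ecog"] <= c["ecog_max"])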
The cancer reporting protocols from the College of American Pathologists are available in structured format (1). No major laboratory information system vendor implements them properly, and their implementation errors cause some not-insignificant problems with patient care (oncologists calling the lab asking for clarification, etc.). This has pushed labs to make policies disallowing the use of those modules, and individual pathologists have reverted to their own non-portable templates in Word documents.
The medical information systems vendors are right up there with health insurance companies in terms of their investment in ensuring patient deaths. Ensuring. With an E.
According to what killjoywashere said, the vendors do not want to implement these standards. So if CAP wants the standards to be relevant, they should release them for random people to implement.
> The medical information systems vendors are right up there with health insurance companies in terms of their investment in ensuring patient deaths. Ensuring. With an E.
Medical information system vendors only care about making a profit, not implementing actual solutions. The discrepancies between systems can lead to bad information, which can cost people their lives.
VistA was useful in its time, but it's hardly world class anymore. There were fundamental problems with the platform stack and data model which made it effectively impossible to keep moving forward.
It wouldn't be appropriate for the federal government to push any particular product. They have certified open source EHRs. It's not at all clear that increased adoption of those would improve patient outcomes.
If I understand correctly, Estonia made their own EMR/EHR from scratch. The government produced (and commissioned?) software is all open source. https://koodivaramu.eesti.ee/explore
EMR software seems like something that shouldn't be that hard. It's fundamentally a CRUD app. Sure, there's a lot of legacy to interface with, but medical software seems like a deeply dysfunctional and probably corrupt industry.
I'm sure there's a lot of work, but hundreds of millions per deployment is not justifiable. The Finnish EPIC deployment has cost almost a billion euros.
Estonia's from-scratch system was reportedly about 10 million euros.
It doesn't look like the XML data is freely accessible.
If I could get access to this data as a random student on the internet, I'd love to create an open source tool that generates an interactive visualization.
How about fixing the format? Something that is obviously broken and resulting in patient deaths should really be considered a top priority. It's either malice or massive incompetence. If these protocols were open, there would definitely be volunteers willing to help fix it.
You seem to think that the default assumption is that fixing the format is easy/feasible, and I don't see why. Do you have domain knowledge pointing that way?
It's a truism in machine learning that curating and massaging your dataset is the most labor-intensive and error-prone part of any project. I don't see why that would stop being true in healthcare just because lives are on the line.
I think there are more options than malice or incompetence. My theory is difficulty.
There are multiple countries with socialized medicine and no profit motive, and it's still not solved.
I think it’s just really complex with high negative consequences from a mistake. It takes lots of investment with good coordination to solve and there’s an “easy workaround” with pdfs that distributes liability to practitioners.
Healthcare suffers from strict regulatory requirements, underinvestment in organic IT capabilities, and huge integration challenges (system-to-system).
Layering any sort of data standard into that environment (and evolving it in a timely manner!) is nigh impossible without an external impetus forcing action (read: government payer mandate).
Incompetence at this level is intentional; it means someone doesn't think they'll see ROI from investing resources into improving it. Calling it malice is appropriate, I feel.
Not actively malicious perhaps, but prioritising profits over lives is evil. Either you take care to make sure the systems you sell lead to the best possible outcomes, or you get out of the sector.
Agree that most companies prioritize profits over lives in an unconscionable manner, but there's a point of diminishing returns where eventually you can save a few more lives, but at an astronomical cost. Auto manufacturers have the same dilemma: spend a few hundred million dollars adding safety features, or nix the features and hope to lose less than that in lawsuits?
Eventually the question will be, how far do we really need to go, i.e. how much profit do we allow ourselves before it's morally untenable and we should plow it back into R&D? Unfortunately, as long as health care is for-profit, and absent effective regulation, companies will always err on the side of profit.
The company not existing at all might be worse though? I think it’s too easy to make blanket judgments like that from the outside, and it would be the job of regulation to counteract adverse incentives in the field.
You're making a lot of unsupported assumptions. There's no reliable evidence that this is causing patient deaths, or that a different format would reduce the death rate.
>Agreed on importance of portability and durability.
I think "importance" is understating it, because permanent consistency is practically the only reason we all (still) use PDFs in quite literally every professional environment as a lowest common denominator industrial standard.
PDFs will always render the same, whether on paper or a screen of any size connected to a computer of any configuration. PDFs will almost always open and work given Adobe Reader, which these days is simply embedded in Chrome.
PDFs will almost certainly Just Work(tm), and Just Working(tm) is a god damn virtue in the professional world because time is money and nobody wants to be embarrassed handing out unusable documents.
PDFs generally will look close enough to the original intent that they will almost always be usable, but will not always render the same. If nothing else, there are seemingly endless font issues.
In this day and age that seems increasingly like a solved problem for most end users; when it does come up, it's often a client-side issue or a very old method of generating a PDF.
Modern PDF supports font embedding of various kinds (legality is left as an exercise to the PDF author) and supports 14 standard font faces which can be specified for compatibility, though more often document authors probably assume a system font is available or embed one.
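As a concrete example of embedding, here is a minimal sketch using the Python reportlab library (the font file name and output path are placeholders):

    # Minimal sketch: embedding a TrueType font with reportlab so the output
    # doesn't depend on fonts installed on the reader's machine.
    # ("MyFont.ttf" is a placeholder path.)
    from reportlab.pdfgen import canvas
    from reportlab.pdfbase import pdfmetrics
    from reportlab.pdfbase.ttfonts import TTFont

    pdfmetrics.registerFont(TTFont("MyFont", "MyFont.ttf"))  # glyph data goes into the PDF
    c = canvas.Canvas("example.pdf")
    c.setFont("MyFont", 12)
    c.drawString(72, 720, "Text rendered with an embedded font")
    c.save()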
There are still problems with the format, as it foremost focuses on document display rather than document structure or intent, and accessibility support in documents is rare to non-existent outside of government use cases or maybe Word and the like.
A lot of usability improvements come from clients that make an attempt to parse the PDF to make the format appear smarter. macOS Preview can figure out where columns begin and end for natural text selection, Acrobat routinely generates an accessible version of a document after opening it, including some table detection. Honestly creative interpretation of PDF documents is possibly one of the best use cases of AI that I’ve ever heard of.
While a lot about PDF has changed over the years the basic standard was created to optimize for printing. It’s as if we started with GIF and added support to build interactive websites from GIFs. At its core, a PDF is just a representation of shapes on a page, and we added metadata that would hopefully identify glyphs, accessible alternative content, and smarter text/line selection, but it can fall apart if the PDF author is careless, malicious or didn’t expect certain content. It probably inherits all the weirdness of Unicode and then some, for example.
I believe you have good intentions, but someone would need to build it out and sell it. And it requires lots of maintenance. It’s too boring for an open source community.
There’s a whole industry that attempts to do what you do and there’s a reason why protocols keep getting punted back to pdf.
I agree it would be great to release structured representations. But I don’t think there’s a standard for that representation, so it’s kind of tricky as who will develop and maintain the data standard.
I worked on a decision support protocol for Ebola and it was really hard to get code sets released in Excel. Not to mention the actual decision gates in a way that is computable.
I hope we make progress on this, but I think the incentives are off for the work to make the data structures necessary.
Yeah, my last company paid for a subscription to this. Enjoyed using it. Don’t think there’s a massive market, but definitely lots of devs who want easy DB access and would pay $5/month.
The average true cost to acquire a single customer is in the hundreds of dollars, to pay for sales & marketing labor, advertising etc. So $5/month is nearly equivalent to "free" from a business perspective.
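To put rough numbers on it (illustrative figures only): at a $300 CAC and $5/month with, say, 80% gross margin, payback is 300 / 4 = 75 months before accounting for churn, which is why nobody builds a sales motion around a price point like that.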
Databases are B2B products, not consumer products. They are commercially useful only when placed in the context of some larger business process (e.g. tracking customers/orders/goods/users/batches/events/patients/filings etc.).
We're talking about database viewers/editors, not databases in general. But also databases are used plenty in consumer products, e.g., some sqlite file that stores your app's config
And these are consumers:
> definitely lots of devs who want easy DB access and would pay
For CAC statistics purposes, if a database is used in a consumer product, then the customer of database-related products is the company that makes the consumer product, not the consumer themselves.
"Software developer" typically refers to an occupation (whether self-employed or working for a corporation), so products for developers would also be classified as B2B rather than B2C.
If you use a database viewer to look at the sqlite file holding your shell history, you don't end up in the corporate statistics of some non-existent company that makes the shell.
By the way, another fundamental issue with your link is that it's SaaS, while this is about a desktop app
> so products for developers would also be classified as B2B rather than B2C.
Only if those devs buy it as a business, not individuals like what we're discussing here.
So basically you can't get to any relevant CAC number from your link
> “By sourcing and filtering only the highest-quality and most representative data for LLM use cases, we reduced the pretraining set to just 13 billion tokens—drastically cutting the environmental impact of further training while preserving performance.”
Would love to know more about how they filtered the training set down here and what heuristics were involved.
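The post doesn't spell it out, but quality filtering in this space typically combines simple document-level heuristics with deduplication and model-based scoring. A speculative sketch of the heuristic part (my guesses, not their actual pipeline; thresholds are arbitrary):

    # Speculative sketch of common pretraining-data quality filters (not the
    # authors' actual method; heuristics and thresholds are illustrative).
    def keep(doc: str) -> bool:
        words = doc.split()
        if len(words) < 50 or len(words) > 100_000:         # length bounds
            return False
        if sum(len(w) for w in words) / len(words) > 12:     # very long "words" -> likely junk
            return False
        alpha_ratio = sum(ch.isalpha() for ch in doc) / max(len(doc), 1)
        if alpha_ratio < 0.7:                                # too much markup / symbols
            return False
        if len(set(words)) / len(words) < 0.3:               # highly repetitive text
            return False
        return True

    corpus = ["example document one ...", "example document two ..."]  # placeholder
    filtered = [d for d in corpus if keep(d)]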
I think that the models we use now are enormous for the use cases we’re using them for. Work like this and model distillation in general is fantastic and sorely needed, both to broaden price accessibility and to decrease resource usage.
I’m sure frontier models will only get bigger, but I’d be shocked if we keep using the largest models in production for almost any use case.