This seems like one of the best possible use cases for LLMs -- porting old, useful Python/JavaScript into faster compiled-language code. It's work I don't want to do myself, and it requires the kind of intelligence most people agree AI already has (following clear objectives, without needing much creativity or agency) -- the sort of mechanical translation sketched below.
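As a toy illustration (hypothetical code I made up, not output from any model): the kind of small Python helper that gets mechanically ported, here into Rust.

    // Toy example of the kind of port meant above. The Python original would be
    // something like:
    //     def word_counts(text):
    //         return collections.Counter(text.split())
    // and the LLM's job is the mechanical translation into a compiled language.
    use std::collections::HashMap;

    fn word_counts(text: &str) -> HashMap<&str, usize> {
        let mut counts = HashMap::new();
        for word in text.split_whitespace() {
            *counts.entry(word).or_insert(0) += 1;
        }
        counts
    }

    fn main() {
        let counts = word_counts("to be or not to be");
        println!("{:?}", counts.get("to")); // prints Some(2)
    }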
But if you think it through, it's intractable. You need to 2x+ the transportation cost of all products (it will cost more to get them back for multiple reasons, including products not being as neatly packaged and often going from many-to-one transportation to many-to-many). Companies also need to double their specializations and adopt recycling processes that will largely be redundant with other companies; you basically make it impossible for small companies to make complicated products. And are we including food products, the majority of trash? It makes a lot more sense to centralize waste repurposing and benefit from economies of scale.
Waste management is already a very profitable industry. Of course, just burying stuff is wasteful and environmentally harmful. But I'm of the opinion that it will soon be economically viable to start mining landfills for different types of enriched materials, and government subsidies could bridge the gap for things that are of greater public interest to recycle.
I've been working on the software side of the technology needed to do this in my spare time for a couple years, waiting for some hardware advancements.
> You need to 2x+ the transportation cost of all products
As with all economics, it's not a one-way street. A change in conditions causes a change in behavior. Increased costs will cause a change in how products are designed, manufactured, and used. If the cost of one-time use goes through the roof, suddenly all vapes will be multi-use. Plastic bottles will disappear in favor of dispensers and multi-use bottles. Not all of them, but most.
It's about incentives in a dynamic system, not spot bans in an otherwise static world.
Why would 2x the transportation cost be intractable, but ruining the environment, killing life in the oceans, destroying the basis of our future food production, etc, be tractable?
> You need to 2x+ the transportation cost of all products... Companies also need to double their specializations and adopt recycling processes that will largely be redundant with other companies
I think 3rd parties would spring up to deal with that stuff
Maybe they could use big trucks that just collect all refuse from the curb. And maybe that is something that the city should do so that we don’t have a dozen trucks collecting a dozen different trash cans from every house.
That was tried, and what ultimately occurred was disgusting.
New computers were popping up everywhere, and every middle-class-or-above person was buying new ones the way they buy iPhones now. Companies started recycling programs, and many immediately went the route of corruption. They would pack shipping containers full of e-waste, 40-50% reusable items and the rest junk, allowing them to skirt the rules. These containers would end up in third-world countries, with people standing over a burning pile of e-waste, filtering out reusable metals. There were, at one point, even images of children doing this work. The usable items were sold dirt cheap with no data erasure, leading to large amounts of data theft; you could buy pages of active credit card numbers for a dollar.
We are talking about less critical things now, like vape pens, but it's not a stretch for it to quickly become an actually bad idea to let other companies do the recycling. Make the manufacturer deal with it, or even the city/state, via public intake locations (like was mentioned of Switzerland in another part of this thread).
As far as I know, a large portion of what I described shut down after it came to light, although I would not be the least bit surprised if it were still happening in some capacity, or even in full under the guise of something else.
I've had great results, and every workout I do consists of exercises I can do at least 20 reps of for the first set, sometimes going up to 50. I can still gain strength by increasing the weight slowly week by week while maintaining a high rep count. I don't think it takes longer at the gym -- just do 2 sets per movement instead of the more common 3-5. The breaks between sets are the real time sink. Plus, you get lean muscle with high endurance and virtually no injuries. Last tip: put your phone/music in a locker while you're at the gym if you want to improve your workout, save time, and practice being more present.
It seems like they fixed the most obvious issue with the last release, where Codex would just refuse to do its job... if the task seemed difficult or context usage was getting above 60% or so. Good job on the post-training improvements.
The benchmark improvements are incredible, but I have yet to notice a difference in my own codebases.
Is there something similar with twice the memory/bandwidth? That's a setup I would seriously consider for running any frontier open source model locally at usable speed. 128GB is almost enough.
Fill up the memory with a large model, and most of your memory bandwidth will be waiting on compute shaders. Seems like a waste of $5,000 but you do you.
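For a rough sense of the ceiling that memory bandwidth alone puts on decode speed (a back-of-the-envelope sketch; the bandwidth, model size, and quantization numbers below are made-up assumptions, and as the parent says, compute can bite first):

    // Back-of-the-envelope: each generated token streams the active weights
    // through memory at least once, so memory bandwidth alone caps decode at
    //   tokens/s <= bandwidth_bytes_per_s / bytes_of_active_weights.
    // All numbers here are illustrative assumptions, not specs of a real device.
    fn decode_ceiling_tok_s(bandwidth_gb_s: f64, active_params_billions: f64, bytes_per_param: f64) -> f64 {
        let bytes_per_token = active_params_billions * 1e9 * bytes_per_param;
        bandwidth_gb_s * 1e9 / bytes_per_token
    }

    fn main() {
        // e.g. ~250 GB/s of bandwidth, a 120B-parameter dense model at 4-bit (~0.5 bytes/param)
        let ceiling = decode_ceiling_tok_s(250.0, 120.0, 0.5);
        println!("memory-bound ceiling: ~{ceiling:.1} tok/s"); // ~4.2 tok/s
    }

Doubling bandwidth roughly doubles that ceiling, which is why the memory/bandwidth question matters for "usable speed" -- but only if the GPU/NPU can keep up.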
It was the only thing to be optimistic about in this administration, but it sure didn't last long. We should all know that this was the last attempt that had a chance of addressing the national debt -- the only other way out is extreme inflation.
Musk was absolutely the wrong guy for the job. He doesn't have the patience to spend 4 years carefully poring over government expenses, nor the security clearance (AFAIK) to address Pentagon spending. Plus, I don't think he's humble enough to bring in people who actually know what to look for.
Prediction: AI will become commoditized at roughly 15 IQ points above today's state-of-the-art models, and with larger context, within 4 years, as the incremental improvements from synthetic-data training plateau (we've already used all the "real" data out there) and open source models are cheaply trained on the outputs of the big-money models. Then AI development stagnates until someone invents an effective way to use competitive reinforcement learning to train generalized intelligence (similar to how AlphaGo was trained), removing the need for vast quantities of training data. Then we get real AGI.
If that's true and if today's frontier models are around 120 IQ (who knows if that is true, but let's run with it, source: https://www.trackingai.org/home) then we'll have an enormous number of ~135 IQ bots with nearly unlimited conscientiousness.
I can't even begin to understand what that would mean.
At the speed AI is moving, we've effectively used it all; the high-quality data you need to make smarter models is coming in at a trickle. We're not getting 10^5 Principia Mathematicas published every day. Maybe I just don't have the vision to understand it, but it seems like AI-generated synthetic data shouldn't be able to make a smarter model than whatever produced that data. I can imagine synthetic data being useful for making models more efficient (that's essentially what distilled models are, after all), but not for pushing the frontier.
It seems to me the two are effectively the same unless you have significantly misshapen teeth (remineralizing vs regenerating). I also use hydroxyapatite, just to reduce my fluoride exposure, although I believe fluoride is supposed to be a more potent remineralizer (and fluorapatite is allegedly stronger than natural hydroxyapatite). But the upside is that I don't mind swishing hydroxyapatite around in my mouth for 10 minutes, twice a day, so whenever I go to the dentist, I have the healthiest mouth of the day (which wasn't the case pre-hydroxyapatite toothpaste/powder).
NHAP particles are smaller than fluoride particles, so they're able to penetrate farther into the porous surface of the teeth; fluoride can basically only coat the surface. There is some research indicating that NHAP is more effective than fluoride at remineralizing (e.g. https://pmc.ncbi.nlm.nih.gov/articles/PMC4252862/), but fluoride is more protective than NHAP because NHAP isn't protective at all. (The fluoride creates a temporary sacrificial enamel-like shell layer that closes off pores in the surface of the teeth, in addition to buffering acids; the NHAP will just create new enamel.)
My dentist says that NHAP is great if you have lots of cavities or drink lots of acidic drinks like soda, but once your enamel is repaired, too much NHAP can actually cause weird growths.
Dave's toothpaste has both NHAP and fluoride (and the sensitivity agent used in Sensodyne) if you're looking for the best of all worlds in the U.S.
After doing some research, I decided to go for this one: https://drjennatural.com/products/dr-jen-super-paste-with-na.... 10% nHAP (rod-shaped), RDA under 50 (exact number unspecified), nothing obviously objectionable in the ingredients, and comes with or without fluoride. My only minor quibble is that I couldn't determine the exact range of HAP particle sizes, which some other vendors do list. On the other hand, it has some strong reviews that seem credible, and there aren't many other options that explicitly provide 10% nHAP with a low RDA, and even fewer that offer a fluoridated version on top of that.
SuperMouth also looked like a great option with an RDA of 67 (particularly for kids who like crazy flavors), and Elims also looked good for anyone who doesn't mind the 92.71 RDA. Ollie stood out for its minimal ingredients list, but turned out to have a relatively high RDA of 143.
I currently use BioMin C in the morning and F at night, but based on everything I'm learning right now about nHAP, I figure it can't hurt to stack Dr. Jen with those. Maybe in a few years I'll get some keratin in the mix too.
Nobs is good because they only use rod-shaped NHA, not needle-shaped NHA, which has a worse safety profile. Safety profile is important for anything nano.
IDK how to tell which brand uses which type without independent testing or taking their word for it. Several makers have come out and said needle-shaped is cheaper to buy, so if a brand has a 10% formulation as opposed to 1, 3, or 5%, it is more likely to be using needle-shaped. (And there is a separate conversation to be had about whether 10% is the needed/ideal concentration anyway.)
For me the game changer here is the speed. On my local Mac I'm finally getting generation speeds faster than I can read the output (~96 tok/s), and the quality has been solid. I had previously tried some of the distilled Qwen and DeepSeek models and they were just way too slow for me to seriously use.