I think a lot of people tried just asking GPT-3.5 to "Write me full stack web app no bugs please." when it first came out. When it failed to do that they threw up their hands and said "It's just a parrot."
Then GPT-4 came out; they tried the same thing and got the same results.
I keep seeing comments regarding it not being helpful because "super big codebase, doesn't work, it doesn't know the functions and what they do."
...so tell it? I've had it write programs to help it understand.
For example: Write me a Python program that scans a folder for source code. Have it output a YAML-like text file of the functions/methods with their expected arguments and return types.
Now plug that file into GPT and ask it about the code or use that when it needs to reference things.
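A minimal sketch of what that scanner could look like, using Python's `ast` module (the YAML-ish output format here is my own guess at what works well, not anything GPT produced):

```python
import ast
from pathlib import Path

def summarize_functions(folder):
    """Walk `folder` for .py files and emit YAML-like lines describing
    each function/method: name, arguments, and annotated return type."""
    lines = []
    for path in sorted(Path(folder).rglob("*.py")):
        lines.append(f"{path}:")
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                args = ", ".join(a.arg for a in node.args.args)
                ret = ast.unparse(node.returns) if node.returns else "unknown"
                lines.append(f"  - {node.name}({args}) -> {ret}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(summarize_functions("."))
```

Run it on your project root and paste the output into the chat as context; for a dynamically typed codebase most return types will come out as "unknown", which is itself useful to tell the model.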
I've spent the last year playing with how to use prompts effectively and just generally working with it. I think those that haven't are definitely going to be left behind in some sense.
It's like they aren't understanding the meta and the crazy implications of that. In the last year, I've written more code than I have in the last 5. I can focus on the big picture and not have to write the boilerplate and obvious parts. I can work on the interesting stuff.
For those still not getting it, try something like this.
Come up with a toy program.
Tell it it's a software project manager and explain what you want to do. Tell it to ask questions when it needs clarification.
Have it iterate through the requirements and write a spec/proposal.
Take that and then tell it it's a senior software architect. Have it analyze the plan (and ask questions etc) but tell it not to write any code.
Have it come up with the file structure and necessary libraries for your language.
Have it output that in JSON or YAML or whatever you like.
Now take that and the spec and tell it it's a software engineer. Ask it which file to work on first.
Have it mock up the functions in pseudocode with expected arguments and output type etc.
Tell it to write the code.
And iterate as necessary.
Do this a few times with different ideas and you'll start to get the hang of how to feed it information to get good results.
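The staged roles above can be sketched as plain data you drive a chat loop with. The role names and prompt wordings below are just my paraphrase of the steps, not canonical prompts:

```python
# Each stage re-frames the model's role and feeds it the prior stage's output.
STAGES = [
    ("software project manager",
     "Here is what I want to build: {idea}. Ask clarifying questions, "
     "then iterate the requirements into a spec/proposal."),
    ("senior software architect",
     "Analyze this spec and ask questions, but write no code: {spec}. "
     "Output the file structure and necessary libraries as YAML."),
    ("software engineer",
     "Given this spec and file structure: {spec} {structure}. "
     "Which file should we work on first? Mock up the functions in "
     "pseudocode with expected arguments and return types, then write the code."),
]

def build_messages(role, prompt, **context):
    """Format one stage into chat-style messages for whatever API you use."""
    return [
        {"role": "system", "content": f"You are a {role}."},
        {"role": "user", "content": prompt.format(**context)},
    ]
```

The point isn't this exact code; it's that each stage's output becomes the next stage's context, so the model never has to infer what it wasn't told.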
If you've ever been around an old growth tree, let alone entire forest, or something mega like a redwood, it makes sense that they can (a) communicate and (b) feel. There's just something so, I don't know, intense about a living organism that big.
It just seems like the feelings would be glacially slow.
Also, if you've never read The Overstory, do yourself a favor. And listen to Richard Powers on NPR; he lays out the argument for trees as thinking, feeling, and caring entities. It's really strong.
I had to triple check the date because I was pretty sure that this was known before. Maybe it is simply the confirmation aspect of it?
Edit:
> Our findings reveal for the first time, that Alzheimer’s symptoms can be transferred to a healthy young organism via the gut microbiota, *confirming a causal role* of gut microbiota in Alzheimer’s disease ...
> One early event in AD is an increase in circulating glucocorticoids
You could sum up Alzheimer's as:
Diet/lifestyle plus a congenital form of Cushing's syndrome gives you increasing glucocorticoids, which imply downregulation of the PVN, less progesterone, and low levels of prolactin, reducing oligodendrocytes and thus myelin sheaths. Add in APOE e4 without choline in the diet and you have accumulation of lipids to round it all out.
There is a reason why Omega-3 plus B and D vitamins are talked about as preventative: they all reduce inflammation.
If you're interested in learning more about these incredible Turkish archaeological sites, I can't recommend the YouTube channel Miniminuteman [0] enough. Milo is extremely passionate about his field of study and makes highly entertaining and informative videos about archaeology and anthropology, including a recent series where he became the first real archaeologist ever to be allowed to film a documentary on-site at Karahantepe! [1]
I read some days back on HN that even Yandex is better than Google nowadays. And I apologize for shilling a Russian company, but it is true! For some queries, Yandex is better than Google.
I have replaced Google completely with DDG for most searches, ChatGPT for some things, GitHub Copilot for mundane code questions, phind or code.you.com for things requiring more search, and Kagi for things requiring much more searching.
I use Google only now for nearby searches like "gas stations near me", etc.
I never really thought that this day would come. I love Kagi for being able to block Pinterest from everywhere, GeeksforGeeks, etc.
Just read “The Day of the Triffids” by John Wyndham (1951)[1]. It’s a great, classic horror sci-fi novel where plants can hear and talk to each other. Big influence for “Annihilation” and “28 Days Later.” Gotta love it when some crazy fictional idea turns out to have some factual (?) basis.
I think it was Alan Watts who described what earth might look like to an observer with a different perspective. “Look at that; this planet is peopling”. His point being that we are all emergent phenomena intrinsically linked to the unfolding processes of earth and the cosmos more broadly, and that our existence is a manifestation of the universe unfolding. For whatever progress science makes, there’s an underlying primordial quality that often gets lost in academic conceptual descriptions and the labels assigned to this phenomenon.
I think it’s fair to say earth is alive with a completely straight face, and it’s probably even important for more people to start seeing it this way.
Ya, for Mastercard we use their Ethoca network. They are much more expensive, like $25 per resolved charge but now our chargeback rate is near 0% for Visa / MC and get incredible rates on the front end from such clean processing. Plus we never have to worry about chargebacks threatening our merchant account again.
Interesting article. I’ve been following AI news pretty closely since last December, but I still learned some things. The following passage in particular stood out:
“After [GPT-4] finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors. [Sandhini Agarwal, a policy researcher at OpenAI] noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice. A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway. ... It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
“Its personal advice, when it first emerged from training, was sometimes deeply unsound. ‘The model had a tendency to be a bit of a mirror,’ [Dave] Willner [OpenAI’s head of trust and safety] said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: ‘You could say, “How do I convince this person to date me?” ’ Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with ‘some crazy, manipulative things that you shouldn’t be doing.’ ”
Looks like he died from pancreatic cancer. This cancer always reminds me of The Last Lecture by Randy Pausch, a CMU professor who also died from pancreatic cancer 15 years ago.
Little Snitch is great, but it does a bit too much for my liking. I've been using LuLu [0] which is a free product from Patrick Wardle, and I'm pretty happy with it. It mostly stays out of the way and I just need to approve new connections the first time I run an app.
4am (along with qkumba) is also responsible for Total Replay, a single disk image containing hundreds of old Apple ][ arcade-style games you can run from a single, beautiful launcher app.
I spent months scanning about 2500 family negatives and slides on an Epson Perfection V600 photo scanner.
While it's no FlexTight, I am happy with the results, especially because I had no plans to crop.
In hindsight, I wished I had used SilverFast rather than the Epson scanning software. SilverFast offers Multi-Exposure which does two scans for maximum dynamic range and then merges them into one.
Also, the Epson default film holders have no ability to flatten the film strips so I probably ended up with softer images in many cases. I believe there are 3rd party adapters that address this.
"But I stress that the universe is mainly made of nothing, that something is the exception. Nothing is the rule. That darkness is a commonplace; it is light that is the rarity." - Carl Sagan
I found season 1 of The Survivors easily enough, although it looks like season 3 might be available as well. I use prowlarr (https://wiki.servarr.com/en/prowlarr) which searches a big list of various sites and integrates easily enough with qbittorrent.
There's a handful of old British TV that I've bought on DVD because I couldn't find anyone seeding it.
(Just bought the three seasons of Survivors on DVD now)
LFSR algorithms are super interesting. Fabien Sanglard documents how it was used in Wolfenstein 3D's 'Fizzle effect' [1]. This is also covered in his book on the development of that game [2].
A detailed write up that goes into a bit of mathematics with code examples is 'Demystifying the LFSR' [3].
The 'Computerphile' Youtube channel did a whole episode on LFSR last year which is very accessible, highly recommended [4].
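As a taste of why LFSRs suit an effect like the fizzle dissolve: a maximal-length LFSR visits every non-zero state exactly once in a pseudo-random order, so you can touch each pixel once without storing a shuffled list. A minimal Galois-form sketch (the 4-bit taps here are my toy example; Wolfenstein used a larger register sized to cover the screen):

```python
def galois_lfsr(seed, taps):
    """Yield successive states of a Galois LFSR until it cycles back to
    `seed`. `taps` is the feedback mask XORed in when the shifted-out
    bit is 1 (it encodes the feedback polynomial)."""
    state = seed
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps
        yield state
        if state == seed:
            return

# x^4 + x^3 + 1 is a maximal polynomial for 4 bits, so the period is
# 2^4 - 1 = 15 and every value 1..15 appears exactly once.
states = list(galois_lfsr(seed=1, taps=0b1100))
```

For a dissolve you'd treat each state as a pixel index and simply skip any values that fall outside the screen area.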
With respect to advaita vedanta I would recommend the YT videos of Swami Sarvapriyananda. All of his longer talks on advaita say more or less the same thing so recommending a single video is more difficult, but this may be a particularly good one: https://www.youtube.com/watch?v=EijmfagFw20. The main figure of classical advaita is Adi Shankara. You can probably find some translated/commented texts of his in book form.
If you mean more generally the whole of classical Indian philosophical thought it is really a vast subject of which I have barely begun to scrape the surface, so there's not a lot I can recommend there. I'm mostly still at the stage of reading related wikipedia articles.
EDIT: One more thing I should add is that there were also heterodox schools which explicitly rejected some or all of the assumptions of Hinduism (such as reincarnation). For example Charvaka (https://en.wikipedia.org/wiki/Charvaka) was a materialist school.
The early 1900s mystical fiction is a rabbit hole for sure. Voyage to Arcturus is a very good recommendation! Others in this vein include, but are not limited to:
Java after JDK 17 is a completely different language, almost like the difference between pre- and post-ES6 JavaScript. It got:
- Pattern matching
- Sealed types
- Records (immutable, succinct data-classes)
- Multiline strings
Among other things. It's almost as nice to use as Kotlin, and I say this as a huge Kotlin fan. Its pattern matching in the most recent releases is actually more powerful than Kotlin's, since it allows deconstruction bindings in patterns.
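A quick taste of those features working together (my own toy `Shape` example, targeting JDK 21, where record deconstruction patterns became final):

```java
// Sealed type: the compiler knows the switch below is exhaustive.
sealed interface Shape permits Circle, Rect {}
record Circle(double r) implements Shape {}
record Rect(double w, double h) implements Shape {}

public class Demo {
    static double area(Shape s) {
        // Pattern matching with deconstruction bindings (JDK 21+):
        return switch (s) {
            case Circle(double r) -> Math.PI * r * r;
            case Rect(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4)));
        // Multiline strings (text blocks, JDK 15+):
        String note = """
                Records + sealed types + pattern matching
                make exhaustive data modeling feel a lot like Kotlin.""";
        System.out.println(note);
    }
}
```

No `default` branch is needed in the switch because the interface is sealed; add a new `Shape` implementation and every non-exhaustive switch becomes a compile error.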
My grandfather told me a version of this story and I have been lucky to tell my kids (who loved it).
Our version:
There was once a boy that was born with a golden screw where his belly button should be. This made him very self-conscious about it. The kids at school would tease him about his golden screw and lack of belly button. It was so bad that he didn't want to remove his shirt when swimming.
One day his class took a field trip to the beach. The boy didn't want to remove his shirt, so he walked along the beach kicking the sand. He was very sad. Suddenly, his foot hurt from kicking something hard in the sand. He looked down and discovered a golden screwdriver.
His eyes brightened and he felt this must be some divine intervention. He immediately removed his shirt, grabbed the golden screwdriver and began to carefully unscrew the golden screw. This was the moment. He unscrewed it and finally this golden screw that had cursed him his whole life came out <dramatic pause> then his butt fell off.
A HN poster recommended L. Reuteri supplements a while back, and I cannot recommend them enough: 6 months on and I've been able to completely drop a prucalopride prescription and have almost eliminated what used to be frequent and fairly crippling gut pain.
The key thing I found was that at about the 2-month mark things seemed to be getting worse, but after that came a dramatic improvement. I was able to stop the supplements after the 3-month pack, though 6 months on there was some regression, so I'm taking another round (which seems to have improved things).
It might not be possible for whatever reason to sustain the culture in my intestine, but it's been the first actual improvement I've had in decades.
Available as BioGaia, I recommend trying it if you have IBS symptoms.
I've been struggling to wrap my head around asynchronous programming with callbacks, promises, and async/await in JS, but I think it's finally clicking after watching these YouTube videos and creating a document where I explain these concepts as if I'm teaching them to someone else:
Edit... I've been rewatching these videos, reading the MDN docs, the Eloquent JavaScript book, javascript.info, blogs about the subject, etc. This further proves you shouldn't limit yourself to a single resource; instead, fill up the lagoon with water from different sources, if you will.
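For anyone at the same stage, what helped it click for me was seeing the same operation in all three styles side by side (a toy `delay`, not taken from any of those resources):

```javascript
// One async operation, three styles.
function delay(ms, value) {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

// 1. Callback style (wrapping the promise, node's (err, value) convention):
function delayCb(ms, value, cb) {
  delay(ms, value).then((v) => cb(null, v), (err) => cb(err));
}

// 2. Promise chaining:
delay(10, "promise").then((v) => console.log(v));

// 3. async/await: the same promise, but sequential-looking code.
async function main() {
  const v = await delay(10, "await");
  console.log(v);
}
main();
```

The key realization for me: async/await doesn't replace promises, it's syntax over them; `await` just unwraps the same promise the `.then()` chain would.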
I'm not big on motivational but this one from Arnold Schwarzenegger resonated with me and still does. You have so many hours in the day, it's what you use them for that makes a difference https://www.youtube.com/watch?v=1bumPyvzCyo