> There's just no way in hell ChatGPT at its current level is going to guide you flawlessly through all of that if you start with a simple "I want to build a raytracer" prompt!
I mean, maybe not "flawlessly", and not in a single prompt, but it absolutely can.
I've gone deep in several areas, essentially consuming around a book's worth of content from ChatGPT over the course of several days, each day consisting of about 20 prompts and replies. It's an astonishingly effective way to learn, because you can ask it to go simpler when you're confused and to explain more, in whatever mode you want (e.g., focus on the math, the geometry, the code, or the intuition). And then whenever you feel like you've "got" the current stage, ask it what to move on to next and whether there are choices.
This isn't going to work for cutting-edge stuff that you need a PhD advisor to guide you through. But for most stuff up to about a master's-degree level where there's a pretty "established" progression of things and enough examples in its training data (which ray-tracing will have plenty of), it's phenomenal.
If you haven't tried it, you may be very surprised. Does it make mistakes? Yes, occasionally. Do human-authored books also make mistakes? Yes, and probably at about the same rate. But with a book you're stuck adapting yourself to its organization, style, and content, whereas ChatGPT adapts its teaching, explanations, and content to you and your needs.
How did you verify that it wasn't bogus? Like, when it says "most of the time", or "commonly", or "always", how do you know that's accurate? How do those terms shape your thinking?
> when it says "most of the time", or "commonly", or "always", how do you know that's accurate?
Do you get those words a lot? If you're learning ray-tracing, it's math and code that either works or doesn't. There isn't much "most of the time" involved.
Same with learning history. Events happened or they didn't. Economies grew at certain rates. Something that genuinely holds "most of the time" is generally expressed as a frequency based on data.
So are you just verifying/fact-checking everything it tells you? How is that a good learning experience? And if you don't, you're learning made-up stuff, which isn't great either.
It's a good tool for learning, I'm not disputing that, but you have to be fully aware of its shortcomings and put in extra work. With actual tutorials or books you have at least some baseline level of trust.
I mean, I have to verify stuff in human-written tutorials too. Humans are wrong all the time.
A lot of it is just: are its explanations consistent? Does the code produce the expected result?
Like, if you're learning ray-tracing and writing code as you go, either it works or it doesn't. If the LLM is giving you wrong information, you're going to figure that out really fast.
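To give a concrete sense of what I mean (this is my own minimal sketch, not something lifted from a ChatGPT session): the core of a toy ray tracer is ray-sphere intersection via the quadratic formula, and if an explanation got the math wrong, the sphere simply never appears in your render.

    # Minimal sketch of the kind of check that exposes a bad explanation fast:
    # ray-sphere intersection. If the discriminant or a sign were explained
    # wrongly, every sphere vanishes from the image and you notice immediately.
    import math

    def hit_sphere(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t.
        oc = [o - c for o, c in zip(origin, center)]
        a = sum(d * d for d in direction)
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * a * c
        if disc < 0:
            return None  # ray misses the sphere
        return (-b - math.sqrt(disc)) / (2 * a)

    # A ray fired down -z from the origin should hit a unit sphere at z = -3 at t ~= 2.
    print(hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -3), 1.0))

If the formula is wrong you don't get a subtly wrong image, you get no image at all, and you go straight back and ask about it.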
In practice, it's just not really an issue. It's the same way I find mistakes in textbooks -- something doesn't quite add up, you look it up elsewhere, and discover the book has a typo or error.
Like, when I learn with an LLM, I'm not blindly memorizing isolated facts it gives me. I'm working through an area, often with concrete examples, pushing back on whatever seems confusing, until I get to a state where things make sense. Errors tend to reveal themselves very quickly.