> Is every new thing not just combinations of existing things?
If all ideas are recombinations of old ideas, where did the first ideas come from? And wouldn't the complexity of ideas be thus limited to the combined complexity of the "seed" ideas?
I think it's more fair to say that recombining ideas is an efficient way to quickly explore a very complex, hyperdimensional space. In some cases that's enough to land on new, useful ideas, but not always. A) the new, useful idea might be _near_ the area you land on, but not exactly at. B) there are whole classes of new, useful ideas that cannot be reached by any combination of existing "idea vectors".
Therefore there is still the necessity to explore the space manually, even if you're using these idea vectors to give you starting points to explore from.
All this to say: Every new thing is a combination of existing things + sweat and tears.
The question everyone has is, are current LLMs capable of the latter component. Historically the answer is _no_, because they had no real capacity to iterate. Without iteration you cannot explore. But now that they can reliably iterate, and to some extent plan their iterations, we are starting to see their first meaningful, fledgling attempts at the "sweat and tears" part of building new ideas.
Well, what exactly an “idea” is might be a little unclear, but I don’t think it’s clear that the complexity of ideas built by combining previously obtained ideas would be bounded by the complexity of the ideas they are combinations of.
Any countable group is a quotient of a subgroup of the free group on two elements, iirc.
There’s also the concept of “semantic primes”. Here is a not-quite-correct oversimplification of the idea: suppose you go through the dictionary and, one word at a time, pick a word whose definition uses only other words that are still in the dictionary, and remove it. You can also rephrase definitions before doing this, as long as the meaning is preserved. Suppose you do this with the goal of leaving as few words in the dictionary as you can. In the end, you should be left with a small cluster of a bit over 100 words, in terms of which all the words you removed can be indirectly defined.
(The idea of semantic primes also says that there is such a minimal set which translates essentially directly* between different natural languages.)
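A minimal sketch of the greedy reduction described above. The tiny dictionary is invented for illustration, and a word defined circularly in terms of itself stands in for a word that cannot be defined away:

```python
def reduce_dictionary(defs):
    """Greedy reduction: repeatedly remove any word whose definition
    uses only words still in the dictionary. Whatever survives plays
    the role of the 'semantic primes'."""
    words = set(defs)
    changed = True
    while changed:
        changed = False
        for w in sorted(words):
            # Removable if its definition is covered by the remaining words.
            if w in words and defs[w] <= words - {w}:
                words.remove(w)
                changed = True
    return words

# Invented toy dictionary: word -> set of words used in its definition.
defs = {
    "good":  {"good"},          # circular: effectively primitive
    "not":   {"not"},           # primitive
    "very":  {"very"},          # primitive
    "bad":   {"not", "good"},
    "great": {"very", "good"},
}
remaining = reduce_dictionary(defs)  # the "primes" that cannot be removed
```

The real semantic-primes program is of course far more subtle (definitions can be rephrased first, and the claim is about natural languages generally), but the fixed-point structure is the same.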
I don’t think that says that words for complicated ideas aren’t like, more complicated?
I wish they would keep 4.1 around for a bit longer. One of the downsides of the current reasoning based training regimens is a significant decrease in creativity. And chat trained AIs were already quite "meh" at creative writing to begin with. 4.1 was the last of its breed.
So we'll have to wait until "creativity" is solved.
Side note: I've been wondering lately about a way to bring creativity back to these thinking models. For creative writing tasks you could add the original, pretrained model as a tool call. So the thinking model could ask for its completions and/or query it and get back N variations. The pretrained model's completions will be much more creative and wild, though often incoherent (think back to the GPT-3 days). The thinking model can then review these and use them to synthesize a coherent, useful result. Essentially giving us the best of both worlds. All the benefits of a thinking model, while still giving it access to "contained" creativity.
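A sketch of that tool-call loop, with stub functions standing in for the real model calls. Nothing here is a real API; `base_complete` and `synthesize` are hypothetical stand-ins for a raw pretrained model's completion endpoint and the thinking model's synthesis step:

```python
def creative_draft(prompt, base_complete, synthesize, n=4):
    """Ask the (stand-in) pretrained base model for n raw, wild
    variations, then let the (stand-in) thinking model review them
    and synthesize one coherent result."""
    variations = [base_complete(prompt) for _ in range(n)]
    return synthesize(prompt, variations)

# Stubs standing in for real model calls:
def fake_base(prompt):
    fake_base.i += 1
    return f"{prompt} / raw variation {fake_base.i}"
fake_base.i = 0

def fake_synthesize(prompt, variations):
    return f"best of {len(variations)} drafts for: {prompt}"

result = creative_draft("a storm at sea", fake_base, fake_synthesize, n=3)
```

The point of the structure is the division of labor: the base model supplies high-entropy candidates, and the thinking model only has to select and stitch, not generate.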
My theory, based on what I would see with non-thinking models, is that as soon as you start detailing something too much (i.e. not just "speak in the style of X" but "speak in the style of X with [a list of adjectives detailing the style of X]"), they would lose creativity, no longer fit the style very well, etc.
I don't know how things have evolved with new training techniques, but I suspect that overthinking a task (spelling out too much of what the model has to do) can lower quality on creative tasks for some models.
I also terribly regret the retirement of 4.1.
From my own personal usage, for code or normal tasks, I clearly noticed a big degradation in performance going from 4.1 to 5.1/5.2.
4.1 was the best so far, with straight-to-the-point answers that were correct most of the time, especially for code-related questions.
5.1/5.2, on their side, will much more readily hallucinate nonsense responses or code snippets that are nothing like what was expected.
Have you tried the relatively recent Personalities feature? I wonder if that makes a difference.
(I have no idea. LLMs are infinite code monkeys on infinite typewriters for me, with occasional “how do I evolve this Pokémon” utility. But worth a shot.)
Well yeah, because 5.2 is the default and there's no way to change the default. So every time you open up a new chat you either use 5.2 or go out of your way to select something else.
(I'm particularly annoyed by this UI choice because I always have to switch back to 5.1)
As far as I can tell 5.2 is the stronger model on paper, but it's been optimized to think less and do less web searches. I daily drive Thinking variants, not Auto or Instant, and usually want the _right_ answer even if it takes a minute. 5.1 does a very good job of defensively web searching, which avoids almost all of its hallucinations and keeps docs/APIs/UIs/etc up-to-date. 5.2 will instead often not think at all, even in Thinking mode. I've gotten several completely wrong, hallucinated answers since 5.2 came out, whereas maybe a handful from 5.1. (Even with me using 5.2 far less!)
The same seems to persist in Codex CLI, where again 5.2 doesn't spend as much time thinking so its solutions never come out as nicely as 5.1's.
That said, 5.1 is obviously slower for these reasons. I'm fine with that trade off. Others might have lighter workloads and thus benefit more from 5.2's speed.
> FPGAs will never rival gpus or TPUs for inference. The main reason is that GPUs aren't really gpus anymore.
Yeah. Even for Bitcoin mining GPUs dominated FPGAs. I created the Bitcoin mining FPGA project(s), and they were only interesting for two reasons: 1) they were far more power efficient, which in the case of mining changes the equation significantly. 2) GPUs at the time had poor binary math support, which hampered their performance; whereas an FPGA is just one giant binary math machine.
I have wondered if it is possible to make a mining algorithm FPGA-hard in the same way that RandomX is CPU-hard and memory-hard. Relative to CPUs, the "programming time" cost is high.
My recollection is that ASIC-resistance involves using lots of scratchpad memory and mixing multiple hashing algorithms, so that you'd have to use a lot of silicon and/or bottleneck hard on external RAM. I think the same would hurt FPGAs too.
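A toy illustration of that recipe (not a real proof-of-work design; the scratchpad size, round count, and choice of SHA-256 plus BLAKE2s are arbitrary): fill a large scratchpad, then do data-dependent reads and writes while alternating hash functions, so dedicated silicon needs both lots of RAM and multiple hash cores:

```python
import hashlib

def toy_memory_hard(data: bytes, pad_kib: int = 64, rounds: int = 256) -> bytes:
    """Toy memory-hard mixing function for illustration only."""
    pad_slots = pad_kib * 1024 // 32
    # Fill the scratchpad with chained SHA-256 blocks.
    pad = []
    h = hashlib.sha256(data).digest()
    for _ in range(pad_slots):
        h = hashlib.sha256(h).digest()
        pad.append(h)
    # Data-dependent mixing, alternating between two hash algorithms.
    state = h
    for i in range(rounds):
        idx = int.from_bytes(state[:4], "little") % pad_slots
        mix = bytes(a ^ b for a, b in zip(state, pad[idx]))
        algo = hashlib.sha256 if i % 2 == 0 else hashlib.blake2s
        state = algo(mix).digest()
        pad[idx] = state  # write back so the whole pad must stay resident
    return state

digest = toy_memory_hard(b"hello")
```

The data-dependent `idx` is the part that hurts fixed hardware: accesses can't be predicted or pipelined ahead of time, so throughput bottlenecks on memory latency rather than hashing logic.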
I had to return my Vision Pro after trying it for a week. I'm one of those rare customers that genuinely wanted to keep it, because it's the only VR headset I could _actually_ get work done in thanks to its stellar resolution and overall screen quality. In spite of its many, many flaws. But I had to ditch the thing because: 1) it's stupidly heavy, and 2) it's the only headset that caused me eyestrain.
I was praying for a new revision, but ... this wasn't it. No mention of making the thing lighter. Seems like instead they _added_ weight to the band to compensate.
Guess I'll keep waiting and hoping someone else fills the space. Maybe, just maybe, there will be a real Quest Pro with the same screen quality as the AVP. The Quest 3 is almost perfect in every regard except for that, so I'd happily drop "stupid" money to grab one with an AVP level display in it. (With the usual caveats of it being an evil Meta product, etc, etc).
The problem isn't really total weight, it's unbalanced weight. For comparison, see the BoboVR head straps for the Quest (https://www.bobovr.com/products/s3-pro), which look ridiculous and add a lot of total weight (especially with a battery), but are actually more comfortable than not having it, because they spread out and counterbalance the weight of the headset.
The Dual Knit Band may help AVP comfort, but I'm skeptical. To have a significant benefit it would need to have much stiffer side support so that the entire thing can lever the weight of the headset upwards and pull the center of gravity way back from the face.
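The counterbalance argument is easy to see with back-of-the-envelope moments about the neck pivot. All masses and lever arms below are invented round numbers, not actual Vision Pro or battery specs:

```python
def neck_moment(parts):
    """Net moment about the neck pivot, in N*m.
    parts: list of (mass_kg, lever_m), with positive lever arms
    in front of the pivot and negative ones behind it."""
    g = 9.81  # m/s^2
    return sum(m * g * d for m, d in parts)

# Assumed: 650 g headset 12 cm in front of the pivot.
front_only = neck_moment([(0.65, 0.12)])
# Assumed: same headset plus a 350 g counterweight 18 cm behind.
with_counter = neck_moment([(0.65, 0.12), (0.35, -0.18)])
```

Total mass goes up by more than half, yet the forward moment your neck and face padding must resist drops by roughly 80%, which is why a heavier but counterbalanced strap can feel lighter.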
At least in the U.S. the equality of women in society (and in law) has slowly risen over the last 100 years. Over that same period the availability of pornographic images has also slowly risen (from magazines, to VHS, to the Internet, to streaming videos, to VR).
So if we're looking at correlation, doesn't the data imply that _more_ porn is associated with _more_ rights for women?
(Conversely, the vast majority of people calling for and enacting policies for more restrictions on pornography are also rolling back rights for women.)
I used almost 100% AI to build a SCUMM-like parser, interpreter, and engine (https://github.com/fpgaminer/scumm-rust). It was a fun workflow; I could generally focus on my usual work and just pop in occasionally to check on and direct the AI.
I used a combination of OpenAI's online Codex, and Claude Sonnet 4 in VSCode agent mode. It was nice that Codex was more automated and had an environment it could work in, but its thought-logs are terrible. Iteration was also slow because it takes a while for it to spin the environment up. And while you _can_ have multiple requests running at once, it usually doesn't make sense for a single, somewhat small project.
Sonnet 4's thoughts were much more coherent, and it was fun to watch it work and figure out problems. But there's something broken in VSCode right now that makes its ability to read console output inconsistent, which made things difficult.
The biggest issue I ran into is that both are set up to seek out and read only small parts of the code. While they're generally good at getting enough context, it does cause some degradation in quality. A frequent issue was replication of CSS styling between the Rust side of things (which creates all of the HTML elements) and the style.css side of things. Like it would be working on the Rust code and forget to check style.css, so it would just manually insert styles on the Rust side even though those elements were already styled on the style.css side.
Codex is also _terrible_ at formatting and will frequently muck things up, so it's mandatory to use it with an autoformatter and instructions to use it. Even with that, Codex will often say that it ran it, but didn't actually run it (or ran it somewhere in the middle instead of at the end) so its pull requests fail CI. Sonnet never seemed to have this issue and just used the prevailing style it saw in the files.
Now, when I say "almost 100% AI", it's maybe 99% because I did have to step in and do some edits myself for things that both failed at. In particular neither can see the actual game running, so they'd make weird mistakes with the design. (Yes, Sonnet in VS Code can see attached images, and potentially can see the DOM of vscode's built in browser, but the vision of all SOTA models is ass so it's effectively useless). I also stepped in once to do one major refactor. The AIs had decided on a very strange, messy, and buggy interpreter implementation at first.
Maybe this is an insane idea, but ... how about a spider P2P network?
At least for local AIs it might not be a terrible idea. Basically a distributed cache of the most common sources our bots might pull from. That would mean only a few fetches from each website per day, and then the rest of the bandwidth load can be shared amongst the bots.
Probably lots of privacy issues to work around with such an implementation though.
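A minimal sketch of the shared-cache lookup such a network would need. The function names and the dict-backed "peer cache" are stand-ins for whatever distributed store (e.g. a DHT) actually held the copies:

```python
import time

def fetch_shared(url, peer_cache, origin_fetch, max_age_s=86400):
    """Check the shared peer cache first; only hit the origin site
    when no fresh copy exists, then publish the copy for other bots."""
    entry = peer_cache.get(url)
    now = time.time()
    if entry and now - entry["fetched_at"] < max_age_s:
        return entry["body"]          # served by peers, origin untouched
    body = origin_fetch(url)          # one of the few daily origin hits
    peer_cache[url] = {"body": body, "fetched_at": now}
    return body

# Stub origin that counts how often the real website gets hit:
calls = {"n": 0}
def origin(url):
    calls["n"] += 1
    return f"<html>{url}</html>"

cache = {}
first = fetch_shared("https://example.com", cache, origin)
second = fetch_shared("https://example.com", cache, origin)
```

With a 24-hour `max_age_s`, every bot after the first within a day is served from peers, which is exactly the "few fetches per website per day" property described above.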