Hacker News

I too am deeply skeptical of the current economic allocation, but it’s typical of frontier expansions in general.

Somehow, in AI, people lost sight of the fact that transformer-based AI is a fundamentally extractive process: it identifies and mines the semantic relationships in large data sets.

Because human cultural data contains a huge amount of inferred information not overtly apparent in the data set itself, many smart people mistook the results for the output of a generative rather than an extractive mechanism.

…to the point that the entire field is known as “generative” AI, when fundamentally it is not generative in any way. It merely extracts often unseen or uncharacterized semantics and uses them to extrapolate from a seed.

There are, however, many uses for such a mechanism. There are many, many examples of labor where there is no need to generate any new meaning or “story”.

All of this labor can be automated by applying existing semantic patterns to the data being presented, and, crucially, we no longer need to fully characterize or spell out the required algorithm to achieve that goal.

We have a universal algorithm, a sonic screwdriver if you will, with which we can solve any fully solved problem set by merely presenting the problems and enough known solutions so that the hidden algorithms can be teased out into the model parameters.
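As a loose illustration of “teasing a hidden algorithm out into model parameters,” here is a minimal sketch (not from the comment; the rule and data are made up): we never write down the rule mapping inputs to outputs, we only present problems and known solutions, and fitting recovers the rule as parameters.

```python
import numpy as np

# Hypothetical "fully solved" problem: some process maps inputs to
# outputs by a rule we never wrote down (here, secretly y = 3*x0 - 2*x1 + 1).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1

# Present the problems (X) and enough known solutions (y); least
# squares "teases out" the hidden rule into model parameters.
A = np.column_stack([X, np.ones(len(X))])
params, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(params, 2))  # recovered weights and bias: [ 3. -2.  1.]
```

A transformer does the same thing at vastly larger scale and with a far more expressive function class, but the principle is the one the comment describes: nothing new is invented, an existing regularity is extracted from examples.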

But it only works on the class of fully solved problems. Insofar as an unsolved problem can be recast as a solved process of generating and testing hypotheses, we may also be able to assail unsolved problems with this tool.



Different algorithms do different things, but “generative” AI can certainly come up with new stories and images, and with different algorithms AI can work on not-fully-solved problems like protein folding.


Coming up with new arrangements of bits is not a particularly hard problem on its own, and the current crop of AI is certainly able to do that within the extractive process; in fact, randomness is a key part of both training and inference. But making new things from old parts does not constitute innovation, insofar as the arrangements follow known paths.
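The role of randomness at inference time can be sketched concretely. This toy (the tokens and logits are invented for illustration) shows temperature sampling: the distribution itself is fixed by what was extracted from the data, and randomness only picks which known path is followed.

```python
import math
import random

# Toy next-token scores a model might have extracted from its
# training data (values are made up for illustration).
logits = {"cat": 2.0, "dog": 1.5, "teapot": -1.0}

def sample(logits, temperature=1.0, rng=random):
    # Softmax with temperature: higher T flattens the distribution,
    # injecting more randomness into which arrangement gets emitted.
    scaled = {t: v / temperature for t, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    r, acc = rng.random(), 0.0
    for token, p in probs.items():
        acc += p
        if r < acc:
            return token
    return token  # guard against floating-point rounding

random.seed(0)
print([sample(logits, temperature=0.7) for _ in range(5)])
```

Lowering the temperature toward zero removes the randomness and the sampler collapses onto the single most likely known path; raising it shuffles among known paths, which is exactly “new arrangements of old parts.”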

That doesn’t make it non-useful. It just makes it non-innovative.

Trial and error within a defined problem space is an area where automation can definitely be useful. Once again though, the result is not innovation but rather automation of labor.

There is a -lot- of labor requiring mind-numbing repetition or iteration. The vast majority of labor falls into this category, and exists in fundamentally solved problem spaces, yet is still complex enough that the algorithms involved are opaque. This is where the current type of AI can work miracles when trained with enough oblique data.



