It's cyclical. The term "ML" was popular in the 1990s/early 2000s because "AI" had a bad odor of quackdom due to the failures of things like expert systems in the 1980s. The point was to reduce the hype and say "we're not interested in creating AGI; we just want computers to run classification tasks". We'll probably come up with a new term in the future to describe LLMs in the niches where they are actually useful after the current hype passes.
Is there a good reference available that describes what happened with expert systems in the '80s? I'm only vaguely aware of such things, but my impression is that they had some utility.
H.P. Newquist's "The Brain Makers: Genius, Ego, and Greed In The Search For Machines That Think" (1994) is a good book that centers on the second AI bubble (1980-1987) and its collapse. It is maybe a bit too focused on the companies and CEOs rather than the technology, but it does cover the main problem of expert systems -- their brittleness when faced with a situation they weren't designed for. One of the things we learned from expert systems is that it's better to have probabilistic weights than the rigid IF/THEN/ELSE branches of a traditional rule base -- that insight led to Bayesian models, which were popular before deep learning took over.
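To make the contrast concrete, here's a toy sketch in Python (every symptom, disease, function name, and probability in it is invented for illustration, not taken from any real system). The rule chain gives up on any combination its author didn't anticipate; the naive-Bayes-style scorer still produces a ranked answer from the same evidence:

    import math

    # Brittle expert-system style: a hand-written IF/THEN/ELSE chain.
    def rule_based_diagnosis(symptoms):
        if "fever" in symptoms and "cough" in symptoms:
            return "flu"
        elif "fever" in symptoms and "rash" in symptoms:
            return "measles"
        else:
            # Any combination the author didn't anticipate falls through:
            # the system simply has no answer.
            return None

    # Naive-Bayes style: made-up priors and per-symptom likelihoods.
    PRIORS = {"flu": 0.6, "measles": 0.4}
    LIKELIHOODS = {
        "flu":     {"fever": 0.9, "cough": 0.8, "rash": 0.05},
        "measles": {"fever": 0.8, "cough": 0.3, "rash": 0.9},
    }

    def probabilistic_diagnosis(symptoms):
        scores = {}
        for disease, prior in PRIORS.items():
            log_p = math.log(prior)
            for s in symptoms:
                # Unmodeled evidence gets a small default weight instead
                # of derailing the control flow.
                log_p += math.log(LIKELIHOODS[disease].get(s, 0.01))
            scores[disease] = log_p
        return max(scores, key=scores.get)

    print(rule_based_diagnosis({"cough", "rash"}))     # None: no rule matches
    print(probabilistic_diagnosis({"cough", "rash"}))  # "measles": still ranks an answer

The graceful degradation comes from the small default weight on unknown symptoms: unexpected evidence nudges the score instead of dead-ending the logic, which is roughly the shift from rule bases to probabilistic models described above.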