Follow-up papers by other authors which “only extend or expand on the specific finding in very minor ways” have a secondary benefit. In addition to expanding the original findings, they are also implicitly replicating the original result. This is perhaps a crucial contribution in light of the replication crisis!
If only. I worked in cog/neuro sci, and the career builders there churn out small variations on the original finding. Variations on the Stroop task, which dates back to 1935(!), are still being published, even though there is no explanation for the effect. And when you consider that null results are rarely published, and that much of the methodology is flawed, a new variation cannot be counted as a replication: it's just wishful thinking stacked on wishful thinking.
Are you claiming the Stroop effect hasn’t been proven to exist, or just that it hasn’t been explained?
Funnily enough, the first “professional” coding I ever did was writing up a Stroop test in Visual Basic for a neuro professor, and I recall the effect being undeniably clear. At a personal, anecdotal level, I would time myself on matching trials (word and ink color agree) versus non-matching ones, and even with practice I could not bring my non-matching times down to my matching times.
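For anyone who wants to poke at it, the mechanics are easy to reproduce. Here's a minimal console sketch in Python rather than my long-gone VB program; the three-color set, the ten-trial run, and the crude input()-based timing (which includes keystroke latency) are all just illustrative choices, not anything from the original. A color word is printed in some ink color, and you type the first letter of the ink, not the word:

    # Minimal console Stroop sketch. Assumes a terminal that understands
    # ANSI color escapes. Congruent trials print a word in its own ink
    # (e.g. RED in red); incongruent trials print it in a different ink.
    import random
    import time

    COLORS = {"red": "\033[31m", "green": "\033[32m", "blue": "\033[34m"}
    RESET = "\033[0m"

    def trial(congruent: bool) -> tuple[bool, float]:
        """Run one trial; return (answered correctly, response time in seconds)."""
        ink = random.choice(list(COLORS))
        word = ink if congruent else random.choice([c for c in COLORS if c != ink])
        start = time.perf_counter()
        # Crude timing: input() includes typing + Enter, so this is only
        # good for coarse congruent-vs-incongruent comparisons.
        answer = input(f"{COLORS[ink]}{word.upper()}{RESET}  ink color? ").strip().lower()
        return answer.startswith(ink[0]), time.perf_counter() - start

    if __name__ == "__main__":
        times = {True: [], False: []}
        conditions = [True, False] * 5  # ten trials, half congruent
        random.shuffle(conditions)
        for congruent in conditions:
            correct, rt = trial(congruent)
            if correct:
                times[congruent].append(rt)
        for congruent, label in ((True, "congruent"), (False, "incongruent")):
            if times[congruent]:
                mean = sum(times[congruent]) / len(times[congruent])
                print(f"{label}: mean {mean:.3f}s over {len(times[congruent])} correct trials")

Even something this crude is usually enough to show the gap: in my experience the incongruent means stay stubbornly above the congruent ones, exactly as with the VB version.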
As I wrote: there is no explanation. After 90 years and tens of thousands of papers (Google Scholar counts nearly 90,000), there is still no idea what causes it. Why go on, then? It's obvious that this path isn't going to lead to the solution.
Maybe. But that is a generous reading. I used to attend many computational aesthetics conferences. The sheer volume of non-photorealistic rendering cross-hatching algorithms was almost laughable.