It is kinda interesting. I talked with a less technical member of my extended family over the holidays. Fairly successful guy in his chosen profession (accounting). To say he was skeptical is an understatement, and he is typically the most pro-corporate shill you can find when it comes to a company saving a few bucks. I assumed he would be extolling its virtues on the assumption that lower-level work has errors anyway. I was wrong. Sadly, we didn't get to continue down that line since my kid started crying at that moment.
count me among the skeptics. the big problem i see is that there is no way to verify whether any AI output is correct. it is already very hard to prove that a program is correct. proving that for AI output is several levels more difficult, and even if it were possible, the cost would be so high as to make it not worth it.
I am personally somewhere in between. Language models let me do things I wouldn't have the patience to do otherwise (yesterday ChatGPT was actually helpful in hunting down a bug it had generated :P). I think there is some real value here, but I do worry it won't be captured properly.