> This really belongs in Harvard's "JFK School of Government", not economics.
From the PDF you linked:
> ECON 1152 and HKS SUP 135
(HKS is Harvard Kennedy School)
> Sections: 1 per week at times to be arranged. Sections will be divided into two groups: one for students with no prior coursework in statistics/econometrics and another intended for those who have taken courses in statistics/econometrics. You may choose which type of section you would like to attend depending upon your background.
> Kennedy School students must enroll in the more advanced section.
So it looks to be a joint class between Econ and the Kennedy school, with the Kennedy school students required to take the class on "hard mode".
Keynesian economics was not supposed to be taught during the McCarthy era, and it didn't doom the West. We didn't understand economics very well before the 20th century, and it didn't doom the West. I seriously doubt that one class being taught at Harvard is going to doom the West.
Wrong side of the Atlantic. I didn’t have to buy a single textbook my entire undergrad because we had libraries and lecture notes. Required textbooks are far more a US thing than most other countries.
Harvard absolutely has courses taught from a book. I’m sure Mankiw requires his textbook for Harvard’s intro econ course. He wrote it, he thinks it’s good.
And he's made $42 million in royalties on the book. It's almost endearing how economists claim they are somehow immune to incentives, until you pause to consider they are primarily employed as apologists for the continuation of rent-seeking policies that entrench the rich and mighty.
> INCREASING THE INCOME SHARE TO THE BOTTOM 20 PERCENT OF CITIZENS BY A MERE ONE PERCENT RESULTS IN A 0.38 PERCENTAGE POINT JUMP IN GDP GROWTH.
> The IMF report, authored by five economists, presents a scathing rejection of the trickle-down approach, arguing that the monetary philosophy has been used as a justification for growing income inequality over the past several decades. "Income distribution matters for growth," they write. "Specifically, if the income share of the top 20 percent increases, then GDP growth actually declined over the medium term, suggesting that the benefits do not trickle down."
I'll add that we tend to overlook the level of government spending during periods of trickle-down economics, which confounds the comparison. Change in government spending (somewhat unfortunately, regardless of revenues) is a relevant factor.
Let's make this economy great again? How about you identify the decade(s) you're referring to and I'll show you the tax revenue (on income and on capital gains), the federal debt per capita, and the growth in GDP.
You can look through any mainstream micro textbook, graduate or undergrad, and in 1,000 pages you won’t see a single citation to support any model empirically. Compare that to any decent physics textbook, which will link models to the experiments that back them up. Economics for the real world won’t be simple and pure like physics; it’ll be more like geology or biology, with a lot more facts and a lot fewer theories of everything.
Experimental economics is all about seeing how markets work in practice. Mechanism design is all about taking the theory of economics and using it in the real world, where it works. Economics for the real world exists already. That’s why companies like Microsoft, Amazon and Uber hire so many Ph.D. economists.
Experimental economics is the application of experimental methods[1] to study economic questions. Data collected in experiments are used to estimate effect size, test the validity of economic theories, and illuminate market mechanisms. Economic experiments usually use cash to motivate subjects, in order to mimic real-world incentives. Experiments are used to help understand how and why markets and other exchange systems function as they do. Experimental economics have also expanded to understand institutions and the law (experimental law and economics).
This is not true. I've been through an economics program and a lot of theories have empirical data to support them. Of course it's much harder to prove causation rather than mere correlation (and all that that entails), but that's part of being a discipline that to a large extent deals with social phenomena (people).
Can't dig up anything now, but a lot of phenomena in papers (though of course far from all, or even a majority) are backed by or reference some sort of quantitative research.
Meh. The standard of "support" is pretty low. I spent a lot of time looking through the literature for empirical support for 1/ the Phillips curve, and 2/ gravity models of trade. I looked for the first out of personal interest, and for the second because economists often claim that gravity models are well-supported.
Where empirical studies were actually available, they usually had large numbers of parameters, they did not test on diverse datasets (e.g. Phillips curves across different economies), and they never considered an alternative hypothesis or a simple baseline.
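For context, a gravity model predicts bilateral trade as roughly proportional to the partners' GDPs divided by a power of distance. Here is a minimal sketch, on entirely synthetic data (all parameter values made up), of the log-linear fit typically used:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic country pairs: GDPs and distances (entirely made up).
gdp_i = rng.lognormal(mean=6.0, sigma=1.0, size=n)
gdp_j = rng.lognormal(mean=6.0, sigma=1.0, size=n)
dist = rng.lognormal(mean=7.0, sigma=0.5, size=n)

# "True" gravity law with multiplicative noise:
#   T_ij = GDP_i^1.0 * GDP_j^1.0 / D_ij^1.5 * noise
trade = (gdp_i * gdp_j / dist**1.5) * rng.lognormal(0.0, 0.2, size=n)

# Log-linearize and fit by least squares:
#   log T = k + a*log GDP_i + b*log GDP_j + d*log D   (expect d ~ -1.5)
X = np.column_stack([np.ones(n), np.log(gdp_i), np.log(gdp_j), np.log(dist)])
coef, *_ = np.linalg.lstsq(X, np.log(trade), rcond=None)
k, a, b, d = coef
print(f"a={a:.2f} b={b:.2f} d={d:.2f}")  # elasticities near (1.0, 1.0, -1.5)
```

The catch the parent comment raises applies here too: with enough free parameters, such a fit will "work" on any one dataset; the test is whether the same elasticities hold across economies and against a naive baseline.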
In contrast, econometric literature has seen a huge shift toward identifying causal relations from observational data.
More than any other social science, economics has adopted this approach. Experimental fields (Psychology, Medicine) never really had to deal with observational data.
Many approaches are not new to statistics, to be sure, but many have been implemented and refined in econometrics. Courses in political science, sociology and management, for example, largely use econometric textbooks and papers nowadays.
Why is that the case?
Other social sciences were more purely data-driven before. That is to say, they lacked a coherent theoretical framework that would alert them to causality issues. Economics was early to adopt causal-analysis techniques from statistics, and to look for observational equivalents of experimental approaches, because its theoretical framework showed early on that regression-based approaches (still common in sociology and management, for example) are usually not a good idea.
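The point about plain regression on observational data can be shown with a toy instrumental-variables example (synthetic data, not from any actual study): an unobserved confounder biases the naive slope, while an instrument recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Unobserved confounder u (think "ability") drives both x and y.
u = rng.normal(size=n)
# Instrument z: shifts x but has no direct effect on y.
z = rng.normal(size=n)
x = 0.8 * z + 0.6 * u + rng.normal(size=n)
y = 2.0 * x + 1.5 * u + rng.normal(size=n)  # true causal effect of x: 2.0

# Naive OLS slope is biased upward by the confounder.
ols = np.cov(x, y)[0, 1] / np.var(x)

# IV (Wald) estimator recovers the causal effect: cov(z,y) / cov(z,x).
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
print(f"OLS={ols:.2f} (biased upward)  IV={iv:.2f} (close to 2.0)")
```

Finding a credible instrument in real data is the hard part, of course, but this is the kind of identification argument that econometrics has pushed further than the neighboring fields.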
It is all a trade-off. Yes, these models are often not accurate, sometimes outright wrong. But, formal theories give you precise assumptions and causal chains, which means they can be quickly discarded as paradigms (General Equilibrium, Game Theoretic Equilibrium Refinements), and crucially, they tell you about empirical issues. This has happened to a MUCH SMALLER degree in social sciences that do not have these formal theories!
The notion that economics is behind in terms of empirical approaches compared to other observational social sciences is just not correct.
I will give you the point that econ textbooks are often bad in that regard. I mean, the standard micro textbook for grads is really just a treatment of differential manifolds under other names. But those textbooks are not meant to be read in isolation. The intro textbooks I saw, on the other hand, do have real data.
Yeah I think what I wrote was wrong.
It's not that they haven't dealt with observational data, rather they had the advantage of being able to use experiments with good external validity to a larger degree, making both observational and experimental studies easier.
The subfield known as psychology and economics definitely is based on experimental evidence. The experiments may often be laboratory based or contrived, but they are nevertheless a major input to the concepts as developed.
As you say, real-world economics doesn't work like physics. People are not atoms. Therefore your demand that economics textbooks should be like physics textbooks doesn't really make sense.
I think Geology and Biology do have a lot of theories, too. Would you say the theory of evolution is bunk and you should just collect data and leave it at that?
It focuses on observable social and economic consequences (failures of equality, opportunity, and sustainability in particular) when introducing new economic concepts.
The goal is to avoid students being blinded by appealing, but simplistic, economics models.
Having been seduced by many an economic theory before, I for one welcome this perspective.
Cool. One thing I hated about my Econ minor was that so much of undergraduate economics education assumes conditions which are never true. It’s like studying aerospace engineering and only introducing air resistance in graduate school.
My Newtonian physics courses taught projectile motion without really getting into the weeds on how to calculate air resistance. Given how complicated that can be and the much greater sophistication of mathematics involved, I think this is a reasonable choice. I don't think anybody walked out of those lectures believing that air resistance doesn't exist.
Models, in my obviously limited experience, have the potential to be incredibly useful instructional tools. You can start with unrealistic assumptions that are literally never true to get the basic points across, and then bring the model stepwise closer to reality from there.
Starting people off with the fluid dynamics to calculate air resistance reasonably accurately in projectile motion seems like it might make teaching Newtonian physics almost intractably difficult. On top of requiring a much more advanced understanding of mathematics than one can universally assume a college freshman is possessed of.
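To make the analogy concrete, here is a rough numeric sketch (illustrative drag coefficient, not a real aerodynamic model) comparing the closed-form vacuum range taught in intro physics with a crude Euler integration under quadratic drag:

```python
import math

g = 9.81                       # gravity, m/s^2
v0, theta = 30.0, math.radians(45)

# Closed-form vacuum range: R = v0^2 * sin(2*theta) / g  (~91.7 m here)
vacuum_range = v0**2 * math.sin(2 * theta) / g

# Crude Euler integration with quadratic drag (made-up coefficient k).
k = 0.01                       # drag acceleration per (speed * velocity component)
vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
x = y = 0.0
dt = 1e-4
while y >= 0.0:
    speed = math.hypot(vx, vy)
    vx -= k * speed * vx * dt
    vy -= (g + k * speed * vy) * dt
    x += vx * dt
    y += vy * dt

print(f"vacuum: {vacuum_range:.1f} m, with drag: {x:.1f} m")
```

The one-line formula is what you teach first; the loop is the "more realistic" version that requires numerics instead of algebra, which is exactly the pedagogical trade-off being described.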
The difference is that physics 101 allows you to make pretty accurate predictions and statements about the real world. Econ 101 does not do the same, once you set personal ideology aside.
Good comparison. This is how I felt when I took ECON 101. Though I was fascinated with the concepts, they felt so far from reality as to be effectively useless.
The problem was that this stance was used to justify claims like "the free market allocates resources in the most efficient way" and "all taxes incur a deadweight loss" which are big statements to make based on such basic coursework. If the class is going to make claims like these, it needs to start with realistic assumptions.
If I read this right, Chetty's new EC course tries not only to put the focus on empirical data and its analysis, but also to disentangle ethical value judgments from economics.
This is sorely needed. I've often thought that economists (on either side of the political divide) love to make pompous pronouncements about what's mathematically right and wrong and somehow translate that into what's ethically right and wrong, and then say that their opponents are stupid and evil.
Chetty's own inequality/mobility data gets interpreted differently by the left and the right. There is no free lunch here, when it comes to dealing with people's vested interests (biases).
Policy/decision makers know that anytime things get political, the issue is never about the data. If someone somewhere has to give up something, try making that happen by just showing them your data and analysis.
You can run the experiment on HN. Try getting something in the architecture to change based on data, and pitch it to the HN community. See what happens...
Disentangling ethical value judgments from economics is difficult, to put it mildly. It’s also front and centre in any introductory economics course, where the first lecture will introduce the distinction between normative (what should be) and positive (what is). I’m not sure I’ve ever read an intro textbook that doesn’t introduce that distinction.
Suggested parts, considering what Harvard econ has done for the human race:
IX: "How we looted the former Soviet Union, blew up their economy, got away with it, and blamed it on the Russians" (team seminar by Andrei Shleifer and Larry Summers)
X: "Selling your country to foreigners, indebting the masses for fun and profit, then telling the fools the GDP got bigger" (Greg Mankiw)
XI: "Linear regression, with ideology" (everyone else)
It’s widely held in knowledgeable circles that the creation of the Russia we know today was largely the work of a “Chicago Boys”-type group of economists who were sent to liberalize the Russian economy through “shock therapy”.
There are lots of well-researched accounts showing that they did stupid, badly-thought-out things such as, famously, handing over stock in major companies to their workers, instantly making them shareholders.
That sounds fine in theory. But with poverty at their throats, workers simply sold their stock to the first person who asked. Often these were people with “connections” or some money stashed away, or criminally-funded “entrepreneurs”, who with some barebones organisation and planning could quickly scoop up billions of dollars worth of stock and take controlling positions in major companies by buying them for cents on the dollar.
Thus the oligarchs were born which quickly led to the kleptocratic Russia we know of today.
Liberal minded and educated Russians realised western economists were annihilating their country. One well known Russian independent magazine called the newly rebuilt Russia a “neoliberal dystopia”. Which more and more I think of as quite accurate.
This seems demonstrably untrue: "Thus the oligarchs were born which quickly led to the kleptocratic Russia we know of today"
Oligarchs have been a mainstay of Russian society since at least the time of Peter the Great when one of his main policy positions as Tsar was reducing the influence of the oligarchs (then called Boyars).
Plus, Russia under Soviet rule was unbelievably kleptocratic and corrupt.
Most Russian oligarchs today trace their wealth back to the acquisition of shares in 1991, often with the help of loans from banks run by friends.
Larry Summers, when he was "advising", didn't make a secret of the fact that he didn't really care who "owned" most Russian assets provided it was in private hands.
> Larry Summers, when he was "advising", didn't make a secret of the fact that he didn't really care who "owned" most Russian assets provided it was in private hands.
I really don’t see the connection, particularly considering the new oligarchs of the post-Soviet period appear to be more intimately connected to organised crime than to any Soviet-heritage power structures.
That doesn't sound like looting, though, unless the new shareholders were all US citizens? And was it their plan all along to make the factory workers sell their shares to oligarchs?
It is still a very popular theory here in Germany that factory workers should be given shares in the companies they work for. Personally I think it is misguided and anti-freedom: obviously it would be better to just give them more money, with which they could opt to buy shares or other things.
But seeing the popularity of the idea, I think people could be forgiven for actually implementing it?
most of the issues in the Russia of today (and the rest of eastern europe/ussr) stem from communism and its planned economy. during the latter days of the soviets the amount of graft/corruption was incredible. the factories would churn out garbage 24/7 while regular folk had no heating or electricity. this led to even more corruption and an individualist mentality post-breakup.
the west might have contributed to this downfall, but 1. this downfall was completely normal and partly expected and 2. the main culprits can be found among the populations of russia and eastern europe.
I think all you’re saying is true but I wonder whether the West shouldn’t have been more careful when we were called upon to help these countries transition away from planned economies.
people from eastern europe were waiting for the americans to come and save them since 1947.
unfortunately the west was pretty much destroyed after ww2. but even in 1989 there was still hope that the soviet nightmare would finally end. and it did, thankfully.
unfortunately we’ve been left with the russian mafia state to deal further blows to the region.
The famines in the USSR were partially caused by US trade embargoes on the USSR preventing them from buying modern farming equipment and grain supplies.
That's not to say the USSR itself bore no blame. They definitely did, but the US did everything it could to ensure that system would collapse.
That doesn't sound very convincing - after all people didn't starve before Russia had modern farming equipment. Something else must have changed. The invention of modern farming equipment is unlikely to have caused existing farming methods to become less effective.
Also, it seems unlikely to me that Soviet planning relied on buying farming equipment from the USA. Weren't they rather too proud for that?
Did the USA also deny to sell them food?
Edit, since YC doesn't allow me to write more comments: yeah ok maybe people were starving before that, the point was, the US did not cause that. Maybe their superior technology could have helped, but that is not the same as blaming them for causing the famine. If the US hadn't invented those machines, Russians would have starved all the same, and there would have been no US to blame for it. That is what I meant.
As for trade embargoes, I am not a fan. But I can also not blame the US for wanting to curb the spread of communism, which is decidedly not a harmless ideology. So I guess trade embargoes tend to be a first step, preferred to actual war.
Also not convinced that communists would have loved to trade. Wasn't everything Western frowned upon? You could get into jail for listening to Jazz music? That doesn't sound very open minded to me, and not a good basis for trade. I think they wanted to show that they could do better than capitalism, and to rely on capitalist products would have been an admittance of failure.
> after all people didn't starve before Russia had modern farming equipment.
Yes, they did. You don't know your history or are deliberately ignoring parts of it.
Russia and the Russian Empire were always struggling to produce enough food due to the nature of the landscape. Barely any of it is suitable for farming.
This is the main reason Russia always tries to expand westward (that and its lack of warm-water ports for trade). It's simple self-preservation. I'm not excusing some or all of the horrible shit they've done to pursue this goal either, but it is understandable. The US did exactly the same when it swarmed over North America wiping out or subjugating every Native tribe it encountered.
The arable land in Russia is incredibly slim and was mostly situated in its western regions bordering Europe. Most of Russia, and many of its satellite states during the USSR era, were unsuitable for farming.
On the subject of US/NATO trade embargoes, the US did a similar thing to Cuba and some South American states for decades for no good reason other than 'communism bad!'.
Trade embargoes were used as a tool to fight an ideological war by weakening the target country's economy, thereby discrediting its economic model through artificially limited supplies of grain and efficient farming equipment.
All they ended up doing was hurting the regular people who probably couldn't have cared less what system they were under. They were too busy trying to live their lives.
There was nothing in the Soviet system that prevented it from trading with the west from an ideological standpoint either. The refusal to trade came from the west, and the only reason trade with the west was hindered was that the US wanted to fight an ideological war by undermining its enemies' economies.
The reality is, the so-called free-market champion that is the United States engages in market distortion to eliminate rivals all the time. The best example recently was the Iraq War. That was a response to Saddam Hussein's plan to stop trading oil in dollars and switch to Euros instead.
I feel like you are deliberately ignoring information that doesn't satisfy your world view.
Thankfully, I don't get my information on world history from Wikipedia.
I read history textbooks written by actual historians and on this particular subject, I speak with people who lived through it on a regular basis and spend time in former Soviet states; they are my extended family through my partner.
You, clearly, have spent no time in any former soviet states and I'd wager you've never spoken to an Eastern European native.
I have no wish to continue talking to someone who can't post anything substantial.
> but the US did everything it could to ensure that system would collapse.
a system that killed as many people as the nazis deserved a lot more than just a revolution and some oligarchs.
i don’t think you truly understand the long term damage the soviets did in eastern europe. it’s been literally apocalyptic. and the effects are still felt today.
My partner grew up in Poland and Ukraine and lived there through the collapse of the USSR, and I've spent plenty of time in both countries. Believe me, I know how much they fucked up.
That doesn't change the fact the US engaged in market distortion and economic coercion on ideological grounds.
If that fact makes you uncomfortable, I suggest you inform your representatives, because they are still doing it to weaker nations that don't dance to the American tune today.
It might surprise you to hear as well that there are quite a few things people in Eastern Europe miss from the Soviet days. The heavy focus on community, childcare, free healthcare, and quality education in the trades and higher professions, for example.
My partner and her family hate how individualistic and self-serving people can be in the west. There's basically no solidarity that compares to what they had in that system. That doesn't mean they want to go back to it, it just means there are lessons that we can learn from that system to improve our own.
Make no mistake, the USSR did some things right. Its mistakes do not erase them. Don't hold it to a standard you are not willing to hold your own country to, because the failings of capitalism, and the lives lost to those failings each year, are legion.
To think otherwise is nothing short of ideological fundamentalism.
Posting something more substantial than "they deserved it" would be appreciated.
"there are quite a few things people in Eastern Europe miss from the Soviet days."
The people who were lucky not to end up in jail, Gulags, banned from good jobs or made to work in the coal mines, you mean. Who were lucky to be allowed to go to university and take on an interesting job.
Survivor bias might play a huge part here.
Of course there were people who benefited from the system.
You mention "quality education in the trades and higher professions for example." - what percentage of the population got to enjoy that?
Community - as long as you didn't dissent, I guess? What if you didn't approve of everything the government did? Then community turned into people spying on you and ratting you out to the secret police?
Childcare and healthcare are available in non-communist countries, too. Besides, childcare meant being forced to give up your children at three months, so that you could go back to working and doing your part for the common good.
> You mention "quality education in the trades and higher professions for example." - what percentage of the population got to enjoy that?
All the examples I gave were freely available even to the regular workers. I've spent enough time in Poland and Ukraine to know I'm getting this information from people who actually lived through this. Primary sources.
Most wouldn't go to university, sure, but that is also true of the west. Not everybody goes to university, some become labourers or take up a trade.
Free childcare, free healthcare, free education. Even free psychiatric help (something we sorely need in the west). The quality of this varied greatly, sure, but it was freely available. These are basic socialist principles that are practiced even by enlightened western nations today (just not the US, obviously). These were pillars the USSR was built on.
The fact that it had colossal failings elsewhere does not detract from this.
> Can you give some specific examples?
I already did.
I'm not denying the system was highly corrupt and many people suffered, but you are denying even basic truisms that you could confirm yourself if you got out of your bubble or read some history books.
The things I am saying are not excusing the horrible things the system engaged in. It would be interesting if you held your own country to the same standards as well. It wouldn't come out as clean and shiny as you think.
You are arguing from the perspective of someone who thinks there was literally NOTHING positive about the system at all. You are deliberately ignoring what I am saying and your responses in this thread have been little more than "I disagree because the USSR was bad".
This is just pure fundamentalism and I'm not going to continue arguing with an ideologue. It's wasted effort.
I know people were not simply allowed to go to university. A friend of mine crossed into West Germany in 1989. He wouldn't have been allowed to study in East Germany. He then studied mathematics in West Germany.
A friend from Poland told me how everything had seemed hopeless and she had no real perspective for her future life, until "the miracle happened" (her words) and borders and restrictions were lifted.
I think the friends you have may have been from very privileged circles in those countries, if their memories are so positive.
Childcare, I also mentioned the background: women were supposed to contribute their share of work for the common good. Meanwhile they had to give their kids to the childcare to start with their early indoctrination.
Health Care: sure, it is a good idea. But I don't think the people who were sent to Siberia or the coal mines received good health care. So at the end of the day, it wasn't really free for everybody, just for the people who sucked up to the system. Which is also a price.
> The famines in the USSR were partially caused by US trade embargoes on the USSR preventing them from buying modern farming equipment and grain supplies
what are you writing about? pre-1947?
the USSR wanted nothing from the “filthy capitalists”.
“There were no major famines after 1947. The drought of 1963 caused panic slaughtering of livestock, but there was no risk of famine. After that year the Soviet Union started importing feed grains for its livestock in increasing amounts.”
>and particularly from Russian magazines that were covering the events at the time.
Why would those magazines have particular credibility?
Post-Soviet collapse was a shit show, but it was never going to be anything but a shit show. They were trying to rebuild a corrupt, decrepit Soviet economy as a modern market-based economy, while at the same time building, from scratch, a modern functioning judicial system and a democratic political system.
The US didn't loot the former Soviet Union; Harvard and their chosen oligarchs did. Harvard basically admitted they did this in a lawsuit[1]. It's one of the worst things done in the late 20th century; was basically genocide for a quick buck.
The US media, being part of the sinister Harvard axis, of course, didn't report it, but people in Russia (the Exile guys, Russians) certainly know about it, which is why they elected a brute like Putin to keep Harvard economists and their oligarch orcs from destroying the place further. You can find stuff; even Schleifer's wiki page alludes to it. Unvetted example: http://www.softpanorama.org/Skeptics/Pseudoscience/harvard_m...
Two books every American should read:
1) Godfather of the Kremlin (the author of this book was assassinated)
2) Casino Moscow
It seems like we are both being downvoted for stating historical facts. Sad state of affairs here at HN and honestly a bit jingoistic to think US intervention in Russia was somehow “enlightened”.
Probably because these are not "historical facts". I mean, genocide? Harvard University tried to exterminate the Russian people? Who knew a bunch of academics had that kind of power...
Of course, this also neatly ignores the responsibility of Russian politicians for their own country, which was hardly under foreign occupation at the time.
The economic policies Harvard (and, to be fair, the Clinton administration, whose fixers put Yeltsin in power in '96; talk about interfering in elections: overt, boasted about even, and gone down the memory hole) inflicted on Russia in the 90s literally killed millions of Russians.
So, yes, "basically genocide" is an appropriate choice of words.
That article starts with "Under supervision of Harvard mafia Russian economy has all but collapsed" - sorry, whatever bad things Harvard people did, I think the Russian economy was already collapsed or collapsing. That is allegedly why they opened up to the West.
Any more balanced sources?
Edit: since rate limiting prevents me from further replies, in response to the comment below: the quote specifically mentions the "Russian economy", not just looting of assets owned by the state.
The Russian/Soviet economy was definitely not collapsing, or at least nowhere near as quickly. Afaik quality of life and income collapsed during the 90s far faster than they ever had during the Soviet Union.
That sounds like a pretty harsh reduction of quality of life to me.
And how are the Russians doing today? Better or worse than during the Soviet Union?
Are we getting an accurate picture of Soviet Union days, or do we only get to see the shiny side, with poor people brushed under the carpet, sent to Siberia or dead?
Your claim was "Afaik quality of life and income collapsed during the 90s far faster than it ever had during the Soviet Union" - so you are making a claim about the time before the 1990s, which I refuted with the example of Ukraine.
Edit: as for the article, it doesn't seem to support the claim that the (allegedly US-inspired) "shock therapy" of the 90s was the cause of all the hardship or the deaths. It mentions economic problems in the 80s, and the "shock therapy" preventing a famine in the 90s.
Nevertheless, I find it all very interesting. But a lot of the articles that have been mentioned in the comments sound a bit like apologia for socialism. A lot of finger-pointing and blaming seems to be going on, and I don't necessarily find it all immediately trustworthy. Even to this day, many people are still around who believe socialism was better. Any article making such claims should provide a lot of data to support them. Mere claims of "person X said that, and then everything went downhill" are not sufficient.
I think you are being carried away by things I didn’t say.
I don’t think communism under the USSR was better than any law-abiding capitalist society. In fact it was objectively worse. But what happened in Russia in the 90s was brutal, reducing the lifespan of the average Russian and bringing back poverty that I don’t believe was at all common in the 1980s USSR. It was unfettered capitalism with no regard for the rule of law, and privatisation was placed above everything, including legal and political precepts. The result was a dystopian nightmare that led Russians to accept autocracy under Putin as a viable alternative. And it was all done under the aegis of American economists who were sent over to “help”.
It may be as you said, it just isn't reflected in the article you linked to. That's all I said.
I would also be careful because there seems to be a lot of finger pointing and many people being eager to blame other people to distract from their own failings. I wouldn't believe anything that is written about it at face value.
Also, you moved the goalposts: now you compare to the 1980s, not the whole Soviet era. That's veering into "no true Scotsman" territory.
I'm sure there were many people in the 80s who lived a dystopian nightmare in Russia, too. We just don't hear about them, because they were locked away and eventually died.
It does not matter whether the Russian economy was booming or in deep trouble - the issue that is being discussed is the looting of assets owned by the state.
The difference between big data/data science/empirical study and 'classical' economics (by which I'd include any system of economics that seeks to explain human behavior via an underlying metatheory) is that a primarily empirical approach obscures the theory necessarily underlying any experiment in which you're trying to fit data to a curve.
For example, when you run a science experiment and you plot the data, you may find that you're looking at a line. While this is an interesting finding, it has zero predictive value for anything other than the exact situation you've collected data for. In order to formulate a scientific law, you first must (a) believe that such a thing exists and (b) have some theory as to what shape the curve ought to fit. For example, a naive 'empirical' look at physics might incorrectly conclude that force is exactly mass times acceleration. While moderately useful for many problems, this offers little predictive power in the general case. In order to formulate a law of real predictive value, you have to first consider various other laws and axioms (such as the constancy of the speed of light), at which point, by deduction and without any need of empiricism, you determine that F = ma cannot hold in general, and you need another kind of equation to fit your data to.
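To make the F = ma point concrete, here is a small sketch (illustrative numbers only): data collected at everyday speeds fit F = ma essentially perfectly, yet the relativistic force law for motion parallel to the acceleration, F = γ³ma, diverges wildly when that empirical fit is extrapolated.

```python
import numpy as np

c = 3.0e8  # speed of light, m/s
m = 1.0    # rest mass, kg
a = 2.0    # acceleration, m/s^2

def true_force(v):
    """Relativistic force for acceleration parallel to motion: F = gamma^3 * m * a."""
    gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
    return gamma ** 3 * m * a

# "Experiment" at everyday speeds: the data look exactly like F = m * a.
v_slow = np.linspace(0.0, 1e5, 50)   # up to 100 km/s, still tiny vs c
fits_newton = np.allclose(true_force(v_slow), m * a, rtol=1e-3)
print(fits_newton)   # True: F = ma fits the slow-speed data

# Extrapolating the empirical "law" to v = 0.9c fails badly.
ratio = true_force(0.9 * c) / (m * a)
print(ratio)         # ~12: the naive law is off by a factor of gamma^3
```

The fit is flawless on the observed regime and useless outside it, which is exactly the argument: the shape of the law has to come from theory, not from the plotted line.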
I don't know if the simplistic demand curves drawn in the original textbook are correct or not. However, at least those are based on a particular set of assumptions that can be validated or not. The kind of empiricism put forth by Mr Chetty does not offer this at all.
All this is to say that, while data is useful for validation, it is not useful for prediction. The last thing we need is a black-box machine learning model to make major economic decisions off of. What we do need is proper models that are then validated, which don't necessarily need 'big data.'
> All this is to say that, while data is useful for validation, it is not useful for prediction. The last thing we need is a black-box machine learning model to make major economic decisions off of. What we do need is proper models that are then validated, which don't necessarily need 'big data.'
Hand-wavy theory, predicated upon physical-world models of equilibrium which are themselves classical and incomplete, without validation is preferable to empirical models? Please.
Estimating the predictive power of some LaTeX equations is a different task than measuring error of a trained model.
If the model does not fit all of the big data, the error term is higher, regardless of whether the model was pulled out of a hat in front of a captive audience or deduced through inference from actual data fed through an unbiased analysis pipeline.
If the 'black-box predictive model' has lower error for all available data, the task is then to reverse the model! Not to argue for unvalidated theory.
Here are a few discussions regarding validating economic models, some excellent open econometric lectures (as notebooks that are unfortunately not in an easily-testable programmatic form), the lack of responsible validation, and some tools and datasets that may be useful for validating hand-wavy classical economic theories:
> "Lectures in Quantitative Economics as Python and Julia Notebooks" https://news.ycombinator.com/item?id=19083479 (data sources (pandas-datareader, pandaSDMX), tools, latex2sympy)
Most of the interesting economic questions are inference problems, not prediction problems. The question is not "what is the best guess of y[i] given these values of x[i]'s", but what would y[i] have been for this very individual i (or country, in macroeconomics) if we could have wound back the clock and changed the values of x[i]'s for this individual. The methods that economists know and use may not be the best, but the standard ML prediction methods do not address the same questions, and data scientists without a social/economic/medical background are often not even aware of the distinction.
Economists and social scientists try to do non-experimental causal inference. Maybe they're not good at it, maybe the very problem is unsolvable, but it's not because they don't know how Random Forests or RNNs work. Economists already know that students from single-parent families do worse at school than those from married families. If the problem is just to predict individual student results, the number of parents in the household is certainly a good predictor. The problem facing economists is: would encouraging marriage or discouraging divorce improve student results? Nothing in PyTorch or TensorFlow will help with the answer.
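The single-parent example can be simulated. In the toy data-generating process below (entirely made-up numbers), household income is a hypothetical confounder that drives both family structure and test scores, while family structure has zero direct causal effect; a predictive regression still finds a strong association.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical DGP: income is a confounder. Family structure has NO direct
# effect on scores in this simulation; only income matters causally.
income = rng.normal(size=n)
single_parent = (income + rng.normal(size=n) < -0.5).astype(float)
score = 2.0 * income + rng.normal(size=n)   # true causal effect of single_parent: zero

# Predictive regression of score on family structure alone:
X = np.column_stack([np.ones(n), single_parent])
beta = np.linalg.lstsq(X, score, rcond=None)[0]
print(beta[1])   # strongly negative: a great predictor, but the wrong causal story
```

A model trained on this data would "know" that single-parent households predict lower scores, yet a policy that changed family structure would move scores not at all, because the coefficient is entirely confounding.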
Backtesting algorithmic trading algorithms is fairly simple: what actions would the model have taken given the available data at that time, and how would those trading decisions have affected the single objective dependent variable. Backtesting, paper trading, live trading.
Medicine (and also social sciences) is indeed more complex; but classification and prediction are still the basis for making treatment recommendations, for example.
Still, the task really is the same. A NN (like those that Torch, Theano, TensorFlow, and PyTorch produce; now with the ONNX standard for neural network model interchange) learns complex relations and really doesn't care about causality: minimize the error term. Recent progress in reducing the size of NN models e.g. for offline natural language classification on mobile devices has centered around identifying redundant neuronal connections ("from 100GB to just 0.5GB"). Reversing a NN into a far less complex symbolic model (with variable names) is not a new objective. NNs are being applied for feature selection, XGBoost wins many Kaggle competitions, and combinations thereof appear to be promising.
Actually testing second-order effects of evidence-based economic policy recommendations is certainly a complex highly-multivariate task (with unfortunate ideological digression that presumes a higher-order understanding based upon seeming truisms that are not at all validated given, in many instances, any data). A causal model may not be necessary or even reasonably explainable; and what objective dependent variables should we optimize for? Short term growth or long-term prosperity with environmental sustainability?
... "Please highly weight voluntary sustainability reporting metrics along with fundamentals" when making investments and policy decisions?
Were/are the World3 models causal? Many of their predictions have subsequently been validated. Are those policy recommendations (e.g. in "The Limits to Growth") even more applicable today, or do we need to add more labeled data and "Restart and Run All"?
> FREDcast™ is an interactive forecasting game in which players make forecasts for four economic releases: GDP, inflation, employment, and unemployment. All forecasts are for the current month—or current quarter in the case of GDP. Forecasts must be submitted by the 20th of the current month. For real GDP growth, players submit a forecast for current-quarter GDP each month during the current quarter. Forecasts for each of the four variables are scored for accuracy, and a total monthly score is obtained from these scores. Scores for each monthly forecast are based on the magnitude of the forecast error. These monthly scores are weighted over time and accumulated to give an overall performance.
> Higher scores reflect greater accuracy over time. Past months' performances are downweighted so that more-recent performance plays a larger part in the scoring.
The #GlobalGoals Targets and Indicators may be our best set of variables to optimize for from 2015 through 2030; I suppose all of them are economic.
Using predictive models for policy is not new, in fact it was the standard approach long before more inferential models, and the famed Lucas critique precisely targets a primitive approach similar to what you are proposing.
The issue is the following: In economics, one is interested in an underlying parameter of a complex equilibrium system (or, if you wish, a non-equilibrium complex system of multi-agentic behavior).
This may be, for example, some pricing parameter for a given firm - say - how your sold units react to setting a price.
Economics faces two basic issues:
First, any predictive model (like a NN or a simple regression) that takes price as an input will not correctly estimate the sensitivity of revenue to price. In fact, it is usually the case that the inferred relationship is reversed.
A model where price is the input and sold units or revenue is the output (or vice versa) will predict (you can check this using pretty much any dataset of prices and outputs) that higher prices lead to higher outputs, because that is the association in the data.
Of course we know that in truth, prices and outputs are co-determined. They are simultaneous phenomena, and regressing one on the other is not sufficient to "causally identify" the correct effect.
This is independent of how sophisticated your model is otherwise. Fitting a better non-linear representation does not help.
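A quick simulation of this point (a toy demand/supply system with made-up parameters): when demand shocks dominate, regressing quantity on price recovers the supply slope, i.e. "higher prices, higher output", even though the true demand slope is negative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Toy equilibrium DGP (illustrative numbers):
#   demand: q = -1.0 * p + u_d   (true demand slope: -1.0)
#   supply: q = +0.5 * p + u_s
b, c = 1.0, 0.5
u_d = rng.normal(scale=2.0, size=n)   # demand shocks dominate
u_s = rng.normal(scale=0.2, size=n)

# Solve for the equilibrium price and quantity in each market.
p = (u_d - u_s) / (b + c)
q = c * p + u_s

# Naive regression of quantity on price:
slope = np.cov(q, p)[0, 1] / np.var(p)
print(slope)   # close to +0.5: we recover the SUPPLY slope, not the demand slope
```

No amount of non-linear flexibility fixes this; the regression is faithfully reporting the association in the data, which is the wrong object.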
The solution is of course to reduce down these "endogenous" phenomena to their basic ingredients. Say you have cost data, and some demand parameters. Then, using a regression model (or NN) to predict the vector of endogenous outcome variables will work, and roughly give you the right inference.
Then, as a firm, you are able to use these (more) exogenous predictive variables to find your correct pricing.
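The "reduce to exogenous ingredients" step can be sketched with an instrumental-variables estimator. In the hypothetical DGP below, z is a cost shifter that moves supply only; OLS of quantity on price is biased by simultaneity, while the IV ratio cov(q, z)/cov(p, z) recovers the true demand slope of -1.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
b, c, d = 1.0, 0.5, 1.0   # illustrative parameters

# Hypothetical DGP: z is an observed cost shifter that affects supply only.
z = rng.normal(size=n)
u_d = rng.normal(size=n)
u_s = rng.normal(size=n)
p = (u_d - u_s + d * z) / (b + c)   # equilibrium price
q = -b * p + u_d                    # quantity from the demand curve (true slope: -1.0)

# Naive OLS of q on p is biased by simultaneity:
ols = np.cov(q, p)[0, 1] / np.var(p)

# IV estimate using the exogenous cost shifter z as an instrument:
iv = np.cov(q, z)[0, 1] / np.cov(p, z)[0, 1]
print(ols, iv)   # OLS is badly biased; IV recovers roughly -1.0
```

Note that the fix is not a better fitting procedure: it is the assumption, imposed from outside the data, that z shifts supply and not demand. That assumption is the theory.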
This is not new, pops up everywhere in social science, is the basis of a gigantic literature called econometrics, and really has nothing to do with how you do the prediction.
The only things that NNs add are better predictions (better fitting) and the ability to deal with more data. As this inferential problem shows, using more (and more fine-grained) data is indeed crucial to predicting what a firm should do.
BUT, it is crucial to understand and reason about the underlying causality FIRST, because otherwise even the most sophisticated statistical approach will simply give you wrong results.
Secondly, the counterfactual data for economic issues is usually very scarce. The approach taken by machine learning is problematic, not only because of potentially wrong inference, but also because two points in time may simply not be based on comparable data-generating processes.
In fact, this is exactly the blindness that led to people missing the financial crisis. Of course, with enough data and long enough samples, one should expect to become pretty good at predicting economic outcomes. But experience has shown that in economics, these data are simply too scarce. The unobserved variation between two quarters, two years, two countries, two firms (etc.) is simply very large and has fat tails. This leads to spontaneous breakdowns of such predictive models.
Taking these two issues together, we see that better non-linear function approximation is not the solution to our problems. Instead, it is a methodological improvement that must be used in conjunction with what we have learned about causality.
Indeed, the literature is moving in a different direction. Good economic science nowadays means identifying effects via natural experiments and other exogenous shifts that can plausibly show causality.
Of course such experiments are more rare, and more difficult, the larger the scale becomes. Which is why Macroeconomics is arguably the "worst science" in economics, while things like auctions and microstructure of markets are actually surprisingly good science (nowadays).
Doors are wide open for ML techniques, but really only to the point that they are useful in operationalizing more and better data.
Anyone trying to understand economic phenomena needs to be keenly aware of how inference can be done, which requires an understanding (or an approach to) - that is, a theory - of the underlying mechanisms.
Just to reiterate the practical issue here: we, as people, are just exceedingly bad at having AND putting the right data in the right place in any model. It's really not the model's fault, and the contribution of ML is marginal in that regard.
Whether it is subsidies of farmers, education, tax reduction, minimum wage, austerity measures... history is full of deliciously wrong predictions and policy measures.
Almost all of them can be reduced to the simple fact that the DGP is not stable when varying the policy, and that simple fact is due to people being deliberately reactive.
In other words, you are missing data. Data about human behavior that is simply not observed, because it didn't happen, or because it happens inside people! And then, no matter how well you fit your conditional expectation (or other moment, or whatever you fit), the errors are simply not predictable.
We miss the counterfactual data, AND we aren't even smart enough to use all the data that we have. The less we theorize, the less we use prior logic, the more we run into these paradoxes where our policy does the exact opposite of what we intended.
This is pretty much the only real constant you can find in the last 100 years of social science.
It is therefore entirely correct that social science focuses more and more on causality - and where it can be identified. Yes, it is much harder, and the opportunities to do it correctly are scarce, but necessary. In this, trusting in more data and AI is precisely the wrong approach.
Yes, some combination of variables/features, grouped and connected with operators, that correlates to an optimum (some of which are parameters we can specify), occurring either immediately or after a period of lag during which the other variables of the given complex system are dangerously assumed to remain constant.
> In fact, this is exactly the blindness that led to people missing the financial crisis
ML was not necessary to recognize the yield curve inversion as a strongly predictive signal correlating to subsequent contraction.
An NN can certainly learn to predict according to the presence or magnitude of a yield curve inversion and which combinations of other features.
- [ ] Exercise: Learning this and other predictive signals by cherry-picking data and hand-optimizing features may be an extremely appropriate exercise.
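As a sketch of that exercise (synthetic data only; real recession records have far fewer events than this), a one-feature logistic model trained by plain gradient descent readily learns the inversion signal.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Synthetic, illustrative data: "spread" = long rate minus short rate, and a
# contraction indicator that is more likely after an inversion (spread < 0).
spread = rng.normal(loc=1.0, scale=1.2, size=n)
p_true = 1.0 / (1.0 + np.exp(3.0 * spread))        # inversion -> high probability
contraction = (rng.random(n) < p_true).astype(float)

# Fit a one-feature logistic model by gradient descent on the log loss.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * spread + b)))
    w -= 0.5 * np.mean((p - contraction) * spread)
    b -= 0.5 * np.mean(p - contraction)

print(w)   # clearly negative: lower spreads raise the predicted contraction risk
```

This works precisely because the relationship is baked into the synthetic data; it says nothing about whether the learned association would survive a policy regime in which the relationship changes, which is the causal question.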
"This field is different because it's nonlinear, very complex, there are unquantified and/or uncollected human factors, and temporal"
Maybe we're not in agreement about whether AI and ML can do causal inference just as well if not better than humans manipulating symbols with human cognition and physical world intuition. The time is nigh!
In general, while skepticism and caution are appropriate, many fields suffer from a degree of hubris which prevents them from truly embracing stronger AI in their problem domain. (A human person cannot mutate symbol trees and validate with shuffled and split test data all night long)
> Anyone trying to understand economic phenomena needs to be keenly aware of how inference can be done, which requires an understanding (or an approach to) - that is, a theory - of the underlying mechanisms.
I read this as "must be biased by the literature and willing to disregard an unacceptable error term"; but also caution against rationalizing blind findings which can easily be rationalized as logical due to any number of cognitive biases.
Compared to AI, we're not too rigorous about inductive or deductive inference; we simply store generalizations about human behavior and predict according to syntheses of activations in our human NNs.
If you're suggesting that the information theory that underlies AI and ML is insufficient to learn what we humans have learned in a few hundred years of observing and attempting to optimize, I must disagree (regardless of the hardness or softness of the given complex field). Beyond a few combinations/scenarios, our puny little brains are no match for our department's new willing AI scientist.
> ML was not necessary to recognize the yield curve inversion as a strongly predictive signal correlating to subsequent contraction.
> An NN can certainly learn to predict according to the presence or magnitude of a yield curve inversion and which combinations of other features.
> - [ ] Exercise: Learning this and other predictive signals by cherry-picking data and hand-optimizing features may be an extremely appropriate exercise.
If the financial crisis has not yet occurred, how will the NN learn a relationship that does not exist in the data?
The exercise of cherry picking data and hand-optimizing is equivalent to applying theory to your statistical problem. It is what is required if you lack data points - using ML or otherwise. Nevertheless, we (as in humans) are bad at it.
Speaking of the financial crisis: it was not AIs that picked up on it, it was some guys applying a sophisticated and deep understanding of causal relationships. And that so few people did this shows how bad we humans are at doing this implicitly and automatically by just looking at data!
> Maybe we're not in agreement about whether AI and ML can do causal inference just as well if not better than humans manipulating symbols with human cognition and physical world intuition. The time is nigh!
> In general, while skepticism and caution are appropriate, many fields suffer from a degree of hubris which prevents them from truly embracing stronger AI in their problem domain. (A human person cannot mutate symbol trees and validate with shuffled and split test data all night long)
ML and AI certainly can do causal inference. But then you have to do causal inference.
Again, prediction on historical data is not equivalent to causal analysis, and neither is backtesting or validation. At the end of the day, AI and ML improves on predictions, but the distinction of causal analysis is a qualitative one.
> I read this as "must be biased by the literature and willing to disregard an unacceptable error term"; but also caution against rationalizing blind findings which can easily be rationalized as logical due to any number of cognitive biases.
No. My point is that for causal analysis, you have to leverage assumptions that are beyond your data set. Where these come from is beside the point. You will always employ a theory, implicitly or explicitly.
The major issue is not that we use theories, but rather that we might do it implicitly, hiding the assumptions about the DGP that allow causal inference. This is where humans are bad. Theories are just theories. With precise assumptions giving us causal identification, we are in a good position to argue about where we stand.
If we just run algorithms without really understanding what is going on, we are just repeating the mistakes of the last forty years!
> If you're suggesting that the information theory that underlies AI and ML is insufficient to learn what we humans have learned in a few hundred years of observing and attempting to optimize, I must disagree (regardless of the hardness or softness of the given complex field). Beyond a few combinations/scenarios, our puny little brains are no match for our department's new willing AI scientist.
All the information theory I have seen in any of the Machine Learning textbooks I have picked up is methodologically equivalent to statistics.
In particular, the standard textbooks' (Elements, Murphy, etc.) treatment of information theory would only allow causal identification under the exact same conditions treated in the statistics literature.
I fail to see the difference, or what AI in particular adds. The issue of causal inference is a "hot topic" in many fields, including AI, but the underlying philosophical issues are not exactly new. This includes information theory.
You seem to think that ML has somehow solved this problem. From my reading of these books, I certainly disagree. Causal inference is certainly POSSIBLE - just as in statistics, but ML doesn't give it to you for free!
In particular, note the following issue: To show causal identification, you need to make assumptions on your DGP (exogenous variation, timing, graphical causal relations ... whatever). Even if these assumptions are very implicit, they do exist.
Just by looking at data, and running a model, you do not get causal inference. It can not be done "within" the system/model.
If you bake these things into your AI, then it, too, makes these assumptions. There really is no difference. For example, I could imagine an AI that can identify likely exogenous variations in the data and use them to predict counterfactuals. That's probably not too far off, if it doesn't exist already. But this is still based on the assumption that these variations are indeed exogenous, which can never be proven within the DGP.
In contrast, I find that most "AI scientists" care very much about prediction, and very little about causal inference. I don't mean this subfield doesn't exist. But it is a subfield. In contrast, for many non-AI scientists, causal inference IS the fundamental question, and prediction is only an afterthought. ML in practice involves doing correct experiments (A/B testing), at best. It will sooner or later also adopt all the other causal inference techniques. But my point stands: I have yet to see what ML adds.
Enlighten me!
AI, ML and stats will merge, if they haven't already. The distinction will disappear. I believe the issues will not. I employ a lot of AI/ML techniques in my scientific work. Never have they solved the underlying issue of causal inference for me!
A causal model is a predictive model. We must validate the error of a causal model.
Why are theoretic models hand-wavy? "That's just because noise, the model is correct." No, such a model is insufficient to predict changes in dependent variables when in the presence of noise; which is always the case. How does validating a causal model differ from validating a predictive model with historical and future data?
Yield-curve inversion as a signal can be learned by human and artificial NNs. Period. There are a few false positives in historical data: indeed, describe the variance due to "noise" by searching for additional causal and correlative relations in additional datasets.
If you were to write a pseudocode algorithm for an econometric researcher's process of causal inference (and also their cognitive processes (as executed in a NN with a topology)), how would that read?
What's the point of dumping a bunch of Google results here? At least half the results are about implementations of pretty traditional statistical/econometric inference techniques. The Rubin causal inference framework requires either randomized controlled trials or, for propensity score models, an essentially unverifiable separate modeling step.
Google's CausalImpact model, despite having been featured on Google's AI blog, is a statistical/econometric model (essentially the same as https://www.jstor.org/stable/2981553). It leaves it up to the user to find and designate a set of control variables that are unaffected by the treatment. This is not done algorithmically, and has very little to do with RNNs, Random Forests or regression regularization.
> If you were to write a pseudocode algorithm for an econometric researcher's process of causal inference (and also their cognitive processes (as executed in a NN with a topology)), how would that read?
[1] Set up a proper RCT, that is randomly assign the treatment to different subjects
[2] Calculate the outcome differences between the treated and untreated
For A/B testing your website, the work division between [1] and [2] might be 50-50, or at least at similar order of magnitudes.
For the questions that academic economists wrestle with (say, estimating the effect of increasing school funding or decreasing class size, the effect of shifts between tax deductions vs tax credits vs changing tax rates or bands, or the different outcomes on GDP growth and unemployment of monetary vs fiscal expansion), [1] would be 99.9999% of the work, or completely impossible.
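For the A/B-testing end of the spectrum, step [2] really is a few lines. A minimal sketch with made-up outcome data and a true treatment effect of 2.0:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Step [1]: randomly assign treatment. In economics, making this assignment
# actually happen is the hard part; here it is one line.
treated = rng.random(n) < 0.5

# Hypothetical outcomes with a true treatment effect of 2.0.
outcome = 5.0 + 2.0 * treated + rng.normal(size=n)

# Step [2]: difference in means, with a standard error for the estimate.
diff = outcome[treated].mean() - outcome[~treated].mean()
se = np.sqrt(outcome[treated].var(ddof=1) / treated.sum()
             + outcome[~treated].var(ddof=1) / (~treated).sum())
print(diff, se)   # estimate near 2.0, with a small standard error
```

Randomization is what licenses reading `diff` causally; with observational data the same two lines compute a number, but not an effect.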
Faced with the impracticality/impossibility of proper experiments, academic microeconomists have typically resorted to Instrumental Variable regressions. AFAICT, finding (or rather, convincing the audience that you have found) a proper instrument is not very amenable to automation or data mining.
In academic macro-economics (and hence at Serious Institutions such as central banks and the IMF), the most popular approaches to building causal models in the last 3 or 4 decades have probably been
1) making a bunch of unrealistic assumptions about the behaviour of individual agents (microfoundations/DSGE models)
2) making a bunch of uninterpretable and unverifiable technical assumptions about the parameters in a generic dynamic stochastic vector process fitted to macro-aggregates (Structural VAR with "identifying restrictions")
3) manually grouping different events in different countries from different periods in history as "similar enough" to support your pet theory: lowering interest rates can lead to a) high inflation, high unemployment (USA 1970s), b) high inflation, low unemployment (Japan 1970s), c) low inflation, high unemployment (EU 2010s), or d) low inflation, low unemployment (USA, Japan past 2010s)
I really don't see how RL would help with any of this. Care to come up with something concrete?
> What's the point of dumping a bunch of Google results here? At least half the results are about implementations of pretty traditional statistical / econometric inference techniques.
Here are some tools for causal inference (and a process for finding projects to contribute to instead of arguing about insufficiency of AI/ML for our very special problem domain here). At least one AGI implementation doesn't need to do causal inference in order to predict the outcomes of actions in a noisy field.
Weather forecasting models don't / don't need to do causal inference.
> A/B testing
Is multi-armed bandit feasible for the domain? Or, in practice, are there too many concurrent changes in variables to have any sort of controlled experiment? Then aren't you trying to do causal inference with mostly observational data?
> I really don't see how a RL would help with any of this. Care to come up with something concrete?
The practice of developing models and continuing on with them when they seem to fit and citations or impact reinforce is very much entirely an exercise in RL. This is a control system with a feedback loop. A "Cybernetic system". It's not unique. It's not too hard for symbolic or neural AI/ML. Stronger AI can or could do [causal] inference.
I am at loss at what you want to say to me, but let me reiterate:
Any learning model by itself is a statistical model. Statistical models are never automatically causal models, although causal models are statistical models.
Several causal models can be observationally equivalent to a single statistical model, but the substantive (inferential) implications on doing "an intervention" on the DGP differ.
It is therefore not enough to validate and run a model on data. Several causal models WILL validate on the same data, but their implications are drastically different. The data ALONE provide you no way to differentiate (we say, identify) the correct causal model without further restrictions.
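This point can be demonstrated numerically. The two hypothetical structural models below ("X causes Y" vs. "Y causes X", with parameters chosen to match) generate the same joint distribution, so no amount of fitting to observational data can distinguish them; only an intervention would.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
b, sigma = 0.8, 0.6   # illustrative parameters

# Model A: X causes Y.
xA = rng.normal(size=n)
yA = b * xA + rng.normal(scale=sigma, size=n)

# Model B: Y causes X, with parameters chosen to match model A's distribution.
var_y = b ** 2 + sigma ** 2
yB = rng.normal(scale=np.sqrt(var_y), size=n)
coef = b / var_y
xB = coef * yB + rng.normal(scale=np.sqrt(1.0 - coef ** 2 * var_y), size=n)

# Both models produce (up to sampling error) the same joint distribution...
covA = np.cov(xA, yA)
covB = np.cov(xB, yB)
print(covA)   # ~ [[1.0, 0.8], [0.8, 1.0]]
print(covB)   # same matrix
# ...but an intervention that sets X = 0 changes Y only under model A.
```

Every statistical validation score (likelihood, held-out error, whatever) ties on these two models; the restriction that breaks the tie has to come from outside the data.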
By extension, it is impossible for any ML mechanism to predict unobserved interventions without being a causal model.
ML and AI models CAN be causal models, which is the case if they are based on further assumptions about the DGP. For example, they may be graphical models, SCM/SEM etc. These restrictions can be derived algorithmically, based on all sorts of data, tuning, coding and whatever. It really doesn't change the distinction between causal and statistical analysis.
The way these models become causal is based on assumptions that constitute a theory in the scientific sense. These theories can then of course also be validated. But this is not based on learning from historical data alone. You always have to impose sufficient restrictions on your model (e.g. the DGP) to make such causal inference.
This is not new, but for your benefit, I basically transferred the above from an AI/ML book on causal analysis.
AI/ML can do causal analysis, because it's statistics. AI/ML are not separate from these issues, do not solve these issues ex ante, and are not "better" than other techniques except on the dimensions on which they are better as statistical techniques. AND, most importantly, causal application necessarily implies a theory.
Whether this is implicit or explicit is up to the researcher, but there are dangers associated with implicit causal reasoning.
And as Pearl wrote (who is not a fan of econometrics by any means!), the issue of causal inference was FIRST raised by econometricians BASED on combining the structure of economic models with statistical inference. In the 1940's.
I mean I get the appeal to trash talk social sciences, but when it comes to causal inference, you probably picked exactly the wrong one.
You are free to disregard economic theory. But you can not claim to do causal analysis without any theory. Doing so implicitly is dangerous.
Furthermore, you are wrong in the sense that economic theory has put causal inference issues at the forefront of econometric research, and is therefore good for science even if you dislike those theories.
And by the way, I can come up with a good number of (drastic, hypothetical) policy interventions that would break your inference about a market crash - an inference you only were able to make once you saw such a market crash at least once.
If this dependence is broken, your non-causal model will no longer work, because the relationship between the yield curve and a market crash is not a physical constant. What you did to make it a causal inference is implicitly assume a theory about how markets work (e.g., as they do right now) and that it will stay this way.
Actually, you did a lot more, but that's enough.
Now, you and me, we can both agree that your model with yield curves is good enough. We could even agree that you would have found it before the financial crashes, and are a billionaire.
But the commonality we agree upon is a context that defines a theory.
Some alien that has been analyzing financial systems all across the universe may disagree, saying that your statistical model is in fact highly sensitive to Earth's political, societal and natural context.
> By extension, it is impossible for any ML mechanism to predict unobserved interventions without being a causal model.
In lieu of a causal model, when I ask an economist what they think is going to happen and they aren't aware of any historical data - there is no observational data collected following the given combination of variables we'd call an event or an intervention - is it causal inference that they're doing in their head? (With their NN)
> Now, you and me, we can both agree that your model with yield curves is good enough.
Yield curves alone are insufficient due to the rate of false positives. (See: ROC curves for model evaluation, just like everyone else)
> We could even agree that you would have found it before the financial crashes,
The given signal was disregarded as a false positive by the appointed individuals at the time; why?
> Some alien that has been analyzing financial systems all across the universe may disagree,
You're going to run out of clean water and energy, and people will be willing to pay for unhealthy sugar water and energy-inefficient transaction networks with a perception of greater security.
That we need Martian scientist as an approach is, IMHO, necessary because of our learned biases; where we've inferred relations that have been reinforced which cloud our assessment of new and novel solutions.
> Such is the difficulty of causal analysis.
What a helpful discussion. Thanks for explaining all of this to me.
Now, I need to go write my own definitions for counterfactual and DGP and include graphical models in there somewhere.
> In lieu of a causal model, when I ask an economist what they think is going to happen and they aren't aware of any historical data - there is no observational data collected following the given combination of variables we'd call an event or an intervention - is it causal inference that they're doing in their head? (With their NN)
It's up for debate whether NNs represent what is going on in our heads. But let's assume for a moment that they do.
Then indeed, an economist leverages a big set of data and assumptions about causal connections to speculate how this intervention would change the DGP (the modules in the causal model) and therefore how the result would change.
An AI could potentially do the same (if that is really what we humans do), but so far, we certainly lack the ability to program such a general AI. The reason is, in part, because we have difficulty creating causal AI models even for specialized problems. In that sense, humans are much more sophisticated right now.
It is important to note that such a hypothetical AI would create a theory, based on all sorts of data, analogies, prior research and so forth, just like economists do.
It does not really matter if a scientist, or an AI, does the theorizing. The distinction is between causal and non-causal analysis.
The value of formal theory is to lay down assumptions and tautological statements that leave no doubt about what the theory is.
If we see that the theory is wrong because we disagree on the assumptions, this is actually very good and speaks for the theory. Lots of social science is plagued by "general theories" that can never really be shown to be false ex ante. And given that theories can never be empirically "proven", only validated in the statistical sense, this leads to many parallel theories of doubtful value.
Take a gander into sociology if you want to see this in action.
Secondly, and this is very important: we learn from models. This is not often recognized. What we learn from writing down models is how mechanics or modules interact. These interactions, highly logical, are USUALLY much less doubtful than the prior assumptions.
For example, if price and revenues are equilibrium phenomena, we LEARN from the model that we CAN NOT estimate them with a standard regression model!
This is exactly what led to causal analysis in this case, because earlier we would literally regress price on quantity or production on price etc. and be happy about it. But the results were often even in the entirely wrong direction!
Instead, looking at the theory, we understood the mechanical intricacies of the process we supposedly modeled, and saw that we estimated something completely different than what we interpreted.
Causal analysis, among other things, tackles this issue by asking "what it is really that we estimate here?".
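The simultaneity problem is easy to demonstrate with a simulation. Below is a toy sketch (all structural parameters invented): price and quantity are generated jointly from a demand curve and a supply curve, and a naive regression of quantity on price recovers neither the demand slope (-1.0) nor the supply slope (0.5):

```python
import random

random.seed(0)

# Invented structural model, observed only in equilibrium:
#   demand: q = 10 - 1.0 * p + u      (slope -1.0)
#   supply: q =  1 + 0.5 * p + v      (slope +0.5)
b_demand, d_supply = -1.0, 0.5
prices, quantities = [], []
for _ in range(10_000):
    u = random.gauss(0, 1)  # demand shock
    v = random.gauss(0, 1)  # supply shock
    p = (10 - 1 + u - v) / (d_supply - b_demand)  # market-clearing price
    q = 1 + d_supply * p + v
    prices.append(p)
    quantities.append(q)

# Naive OLS slope of quantity on price.
mp = sum(prices) / len(prices)
mq = sum(quantities) / len(quantities)
slope = (
    sum((p - mp) * (q - mq) for p, q in zip(prices, quantities))
    / sum((p - mp) ** 2 for p in prices)
)
print(slope)  # around -0.25: neither the demand slope nor the supply slope
```

Because price and quantity are determined jointly, the regression estimates a blend of the two curves shifted by both shocks, which is exactly the sense in which "we estimated something completely different than what we interpreted."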
> Hand-wavy theory - predicated upon physical-world models of equillibrium which are themselves classical and incomplete - without validation is preferable to empirical models? Please.
My friend, you are strawmanning.
I said,
> What we do need is proper models that are then validated, which don't necessarily need 'big data.'
Which agrees with you. I said we need both and, not one or the other.
> If the model does not fit all of the big data, the error term is higher; regardless of whether the model was pulled out of a hat in front of a captive audience or deduced though inference from actual data fed through an unbiased analysis pipeline.
Big data without a model is still only valid for the scenario the data were collected in.
> If the 'black-box predictive model' has lower error for all available data, the task is then to reverse the model! Not to argue for unvalidated theory.
Certainly, but we should simultaneously recognize that any model so conceived is still only valid in the situations the data were collected in, which makes them not necessarily useful for the future. You could turn such an equation into an economic philosophy, but you'd have to do a lot more non-metric work.
How can you possibly be arguing that we should not be testing models with all available data?
All models are limited by the data they're trained from; regardless of whether they are derived through rigorous, standardized, unbiased analysis or through laudable divine inspiration.
> All models are limited by the data they're trained from; regardless of whether they are derived through rigorous, standardized, unbiased analysis or through laudable divine inspiration.
Some of the data we have isn't training data. Purely data-driven models tend to be ensnared by Goodhart's law.
For example, suppose we're issuing 30-year term loans and we have some data that shows that people with things like country club memberships and foie gras on their credit card statements have a higher tendency not to miss payments. So we use that information to make our determination.
But people are aware we're doing this and the same data is externally available, so now people start to waste resources on extravagant luxuries in order to qualify for a loan or a low interest rate, and that only makes it more likely that they ultimately default. However, that consequence doesn't become part of the data set until years have passed and the defaults actually occur, and in the meantime we're using flawed reasoning to issue loans. When we finally figure that out after ten years, the new data we use then will have some fresh different criteria for people to game, because the data is always from the past rather than the future.
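A toy sketch of this Goodhart dynamic (every number here is invented): a lender approves on a "luxury spending" signal that initially proxies for wealth; once applicants game the signal, approval no longer selects for wealth and the observed default rate among approved borrowers jumps:

```python
import random

random.seed(1)

# Default rate among applicants approved on the "luxury spending" signal.
def default_rate(applicants):
    approved = [a for a in applicants if a["luxury"]]
    return sum(a["defaulted"] for a in approved) / len(approved)

def honest_population(n):
    """Before gaming: only the wealthy buy luxuries, and they rarely default."""
    people = []
    for _ in range(n):
        wealthy = random.random() < 0.5
        people.append({
            "luxury": wealthy,
            "defaulted": random.random() < (0.05 if wealthy else 0.30),
        })
    return people

def gamed_population(n):
    """After gaming: everyone buys luxuries to qualify, and the wasted
    spending makes the non-wealthy slightly MORE likely to default."""
    people = []
    for _ in range(n):
        wealthy = random.random() < 0.5
        people.append({
            "luxury": True,
            "defaulted": random.random() < (0.05 if wealthy else 0.35),
        })
    return people

honest = default_rate(honest_population(100_000))
gamed = default_rate(gamed_population(100_000))
print(honest, gamed)  # the once-predictive signal now admits far more defaulters
```

The lender's historical data still says the signal works, because the defaults caused by the gaming only show up in the data years later.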
We've already seen the kind of damage this can do. Politicians see data that college-educated people are better off so subsidize college loans, only to discover that the signal from having a degree that caused it to result in such gainful employment is diluted as it becomes more common, and subsidizing loans results in price inflation, and making a degree a prerequisite for jobs that shouldn't require it creates incentives for degree mills that pump out credentials but not real education.
To get out of this we have to consider not only what people have done in the past but how they are likely to respond to a given policy change, for which we have no historical data prior to when the policy is enacted, and so we need to make those predictions based on logic in addition to data or we go astray.
> To get out of this we have to consider not only what people have done in the past but how they are likely to respond to a given policy change, for which we have no historical data prior to when the policy is enacted, and so we need to make those predictions based on logic in addition to data or we go astray.
"Pete, it's a fool who looks for logic in the chambers of the human heart."
Logically, we might have said "prohibition will reduce substance abuse harms" but the actual data indicates that margins increased. Then, we look at the success of Portugal's decriminalization efforts and cannot at all validate our logical models.
Similarly, we might've logically claimed that "deregulation of the financial industry will help everyone" or "lowering taxes will help everyone" and the data does not support it.
So, while I share the concerns about Responsible AI and encoding biases (and second-order effects of making policy recommendations according to non-causal models without critically, logically thinking first) I am very skeptical about our ability to deduce causal relations without e.g. blind, randomized, longitudinal, interventional studies (which are unfortunately basically impossible to do with [economic] policy because there is no "ceteris paribus")
The virtue of logic isn't that your model is always correct or that it should be adhered to without modification despite contrary evidence, it's that it allows you to have one to begin with. It's a method of choosing which experiments to conduct. If you think prohibition will reduce substance abuse but then you try it and it doesn't, well, you were wrong, so end prohibition.
This is also a strong argument for "laboratories of democracy" and local control -- if everybody agrees what to do then there is no dispute, but if they don't then let each local region have their own choice, and then we get to see what happens. It allows more experiments to be run at once. Then in the worst case the damage of doing the wrong thing is limited to a smaller area than having the same wrong policy be set nationally or internationally, and in the best case different choices are good in different ways and we get more local diversity.
> If you think prohibition will reduce substance abuse but then you try it and it doesn't, well, you were wrong, so end prohibition.
Maybe we're at a local optimum, though. Maybe this is a sign that we should just double down, surge on in there and get the job done by continuing to do the same thing and expecting different results. Maybe it's not the spec but the implementation.
Recommend a play according to all available data, and logic.
> This is also a strong argument for "laboratories of democracy" and local control -- if everybody agrees what to do then there is no dispute, but if they don't then let each local region have their own choice, and then we get to see what happens. It allows more experiments to be run at once. Then in the worst case the damage of doing the wrong thing is limited to a smaller area than having the same wrong policy be set nationally or internationally, and in the best case different choices are good in different ways and we get more local diversity.
"Adjusting for other factors," the analysis began.
- [ ] Exercise / procedure to be coded: Brainstorm and identify [non-independent] features that may create a more predictive model (a model with a lower error term). Search for confounding variables outside of the given data.
> How can you possibly be arguing that we should not be testing models with all available data?
I'm not arguing that at all? I'm pointing out that this is good, but is rarely what is meant by 'big data'. In my experience with 'big data' in Silicon Valley, it refers to the construction of black-box algorithms, rather than 'operations research' or 'statistics', which involve fitting data to a model specified via another process.
This is exactly what the Breiman paper ("Statistical Modeling: The Two Cultures") discusses. If you haven't read it already, I would say it's worth a read.
> machine learning model to make major decisions off of
I have basically no experience with ML, but from what I know I'm having difficulty understanding how it's different from OLS with constructed regressors. Can anyone explain?
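One way to frame the difference: with OLS you must construct the regressors yourself (polynomials, interactions, indicators), whereas many ML methods search for the constructed features in the data. A regression stump (a one-split tree, about the simplest ML model there is) learns its own indicator regressor 1[x > t] by brute-force search, sketched here on invented step-shaped data:

```python
# Contrast: OLS takes regressors the analyst constructed; a regression
# stump constructs its own indicator regressor by searching over splits.
def fit_stump(xs, ys):
    """Brute-force the threshold minimizing within-leaf squared error."""
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    return best[1:]  # (threshold, left-leaf mean, right-leaf mean)

# Step-shaped data with a jump at x = 5 (invented).
xs = list(range(10))
ys = [0, 0, 0, 0, 0, 0, 10, 10, 10, 10]
print(fit_stump(xs, ys))  # finds the split at 5 on its own
```

OLS with the regressor 1[x > 5] would fit the same data, but only if the analyst already knew to construct that indicator; growing such splits recursively and averaging many trees is roughly how tree-based ML departs from regression with hand-built regressors.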
"They were teaching their students big ideas. But they were ideas about what causes what — not about supply and demand."
Because supply and demand doesn't cause things? I think there are some citations needed for that claim.
Reading between the lines, I think the VOX article makes the "new approach" sound more stupid than it actually is. It is of course a good idea to test economic ideas with rigorous methods.
Article makes it sound almost as if the "new economics" was just "grievance studies" - creating statistics about how disadvantaged some people are (as if society and economics have been previously unaware that such people exist). That would be stupid, because it doesn't teach you anything about what you could possibly do about it. But between the lines, the professor seems to conduct experiments to determine actual outcomes of economic measures. That makes sense. But you still need "normal economics" to come up with measures that have a shot at improving things.
It's about time! The equilibrium theory of supply and demand has set economics back for decades. There is literally a century of criticism of the theory from post-Keynesians, covering not only the lack of dynamism but also all the strange assumptions.
People keep saying supply and demand is a broken theory, but I'm not sure how to interpret that.
When oil prices fall some producers seem to shut off their pumps. When restaurants raise their prices the people I know tend to cook at home more.
Do you think those are weird outliers, and increasing prices should spur increasing purchases as a general rule? Do you think consumer and producer behaviors have no relationship with price whatsoever?
I have a hard time imagining what those worlds would look like, so I expect your critique is far more nuanced.
But if it is a nuanced critique, then it seems we all agree on the general principles in broad strokes, and we don't need to throw out supply and demand after all.
The biggest problem is with the supply curve, really. Economics tends to pretend that more supply requires a higher price. In reality, more supply very often leads to a lower price due to economies of scale.
This is a fundamental problem because it means that even if you assume fixed supply and demand curves (which is very dubious), there can be more than one equilibrium. That in itself pretty much invalidates most of the standard subsequent analysis.
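The multiple-equilibria point can be illustrated numerically. With a stylized U-shaped supply-price curve (falling at first due to economies of scale; all functional forms here are invented for illustration), a perfectly ordinary downward-sloping demand curve crosses it twice:

```python
# Stylized inverse supply/demand curves (all functional forms invented).
def demand_price(q):
    return 10 - q  # ordinary downward-sloping demand

def supply_price(q):
    return 2 + (q - 4) ** 2  # falls at first (economies of scale), then rises

# Scan a quantity grid for sign changes of (demand price - supply price).
equilibria = []
prev = None
for i in range(801):
    q = i / 100
    gap = demand_price(q) - supply_price(q)
    if prev is not None and prev * gap < 0:
        equilibria.append(round(q, 2))
    prev = gap
print(equilibria)  # two market-clearing quantities, not one
```

With two crossings, the standard comparative-statics story (shift a curve, read off the new unique equilibrium) no longer pins down where the market ends up.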
The other big problem is that it assumes too much that all economic actors are price takers. In reality, prices are largely administered via cost-plus pricing, and advertisement and related tricks are used extensively to subvert the basic principles of supply and demand.
Another problem specific to macroeconomics is that it largely ignored the effect of demand. Increased demand very often leads to an increase in production rather than an increase of prices, a fact that was largely ignored in decades of supply-centred thinking. This has led to bad policies in response to the global financial crisis, for example.
I agree though that it makes no sense to throw out supply and demand as concepts entirely.
Newaccount456 has already covered your confusion over the difference between a shift of and a movement along the supply curve adequately. That entirely covers the "more than one equilibrium" point, because even in Econ 101 they tell you the curves shift. Of course there are multiple equilibria over time. Increasing, decreasing and constant economies of scale will be in every intro textbook.
Re: price taking or perfect competition, that’s just one of the basic models of price determination. Monopoly, monopsony, oligopoly and monopolistic competition are all treated in introductory courses too, and if you got as far as an intermediate course they’d cover the Bertrand, Cournot and Stackelberg models of oligopolistic competition and how they contrast with perfect competition and monopoly. Even introductory microeconomics will cover the three basic methods of price determination verbally.
Your penultimate paragraph also relies on not being able to distinguish between a shift of the demand curve and a movement along an existing one.
> Newaccount456 has already covered your confusion over the difference between a shift of and a movement along the supply curve adequately.
I think you're wrong; nobody is confused about shifts of the curves. The problem really is, there can be multiple equilibria even if the curves stay the same.
It is kind of difficult to see and understand why, because the traditional supply/demand theory and diagrams obscure this heavily. But if you look at the problem differently (see my other reference to Blatt), and start taking into account several products at the same time, you will see why.
This is exactly it. The single equilibrium hinges on the assumption about the monotonicity and slopes of supply and demand. But these assumptions are simply wrong, and so multiple equilibria are possible even with a single market. Multiple markets of course make the problem worse.
> Economics tends to pretend that more supply requires a higher price.
No it doesn't. Shifts along the supply curve are due to changes in price, but other factors can shift the entire supply curve to the right or left. This is like day 1 of Economics 101.
A supply curve indicates simply what a firm is willing to produce for a given price, everything else being equal. It is purely a conceptual tool, as reality corresponds to only one point on that line.
That point, jointly derived with demand, is the logical (even mechanical) consistent solution to these two assumptions - both supply and demand are sensitive and dependent on price.
Economies of scale relate to costs. They may, for example, be modeled by decreasing marginal costs.
If, for example, economies of scale increase, and costs are reduced, the supply curve will shift outward.
This then leads to a lower price. That is 100% conventional econ 101.
On the other hand, the supply curve, however it may look, already includes the economies of scale that presently exist - the whole dependence of quantity supplied on price. Economies of scale are, as such, compatible with standard supply curves, but nothing keeps you from using more complicated cost setups.
The point is just that a supply curve, just as a demand curve, is one functional (or relational) dependence at a given time. Any changes will shift these curves, as the underlying tradeoffs change.
Supply and demand curves are functions of price, however they only represent, not encode, the underlying behavior. It is absolutely standard to have pricing power in the market. The supply curve is then, of course, no longer a simple uni-dimensional affair. You can, however, still draw it as a function of other prices (for example).
Supply curves are not the underlying behavioral assumption, they are a useful representation thereof.
I'd argue, for example, that Cost+Markup is the most standard way, in economics, to model a firm's pricing behavior, such as in oligopoly models.
The valuable insight from economics, which generalizes the simple demand and supply curves, is that this Markup is not arbitrarily set, but depends both on demand and on other suppliers.
Without these simple mental models, many people come to wrong conclusions about what happens in markets!
> Without these simple mental models, many people come to wrong conclusions about what happens in markets!
It seems like that happens with or without the simple mental models. Put five economists in a room and you'll get five opinions.
I'd argue that one purpose the mathematical models do serve is to shut down disagreements from non-economists. Where there's a difference in opinion, the economist can (fallaciously) argue that "if you don't have a model, you don't have a point".
See some of the discussion around MMT for a practical example.
In contrast, I challenge you to put five economists into a room and ask a question about, say, auction theory. Do you still think you get five opinions?
> See some of the discussion around MMT for a practical example.
MMT is Macro, and Macro is bad. Any of it. The reason is that the assumptions of any Macro model, including MMT, are so excruciatingly far from reality that "formal theory" is difficult to do.
We have formal theory such that five economists who disagree know exactly where they disagree - on which assumption or axiom.
A theory that does not formulate its assumptions or scope conditions, which sadly includes a lot of "heterodox models", cannot be criticized at all. It may be right, it may be wrong, but we can find no common basis on which we agree or disagree.
> We have formal theory such that five economists who disagree know exactly where they disagree - on which assumption or axiom.
In practice economic debate usually degenerates into mud-slinging because there isn't a universal arbiter of what constitutes a good model. I don't think the saltwater-freshwater debate was particularly precise or reasoned.
> People keep saying supply and demand is a broken theory, but I'm not sure how to interpret that.
It is broken, especially in macroeconomics. On micro level, it is less problematic, although the main criticism of atq2119 in this thread is correct. The things wrong with it are roughly the supply and demand curves, the idea of equilibrium (assumption of its existence and uniqueness) and the approach that is inherently one-sided - one market at a time.
> When oil prices fall some producers seem to shut off their pumps. When restaurants raise their prices the people I know tend to cook at home more.
This is a strawman. These actually do not follow from the supply and demand itself, but rather from indifference curves (although utility theory has its problems too). Unfortunately, the theory of supply and demand doesn't treat multiple different products very well (even in substitution).
> I have a hard time imagining what those worlds would look like, so I expect your critique is far more nuanced.
I understand. You don't know the alternative, so you cannot imagine a different world.
If you want a good overview of the criticism, read Steve Keen's Debunking Economics. He has two chapters devoted to problems of the supply and demand theory.
And if you want to see an alternative that is completely superior (it gives intuitively similar results and more, requires fewer parameters, and is easier to extend), I suggest the treatment in Blatt: Dynamic Economic Systems, first part. (This was also suggested to me courtesy of Steve Keen, but boy, it is good. Keen is not to everybody's taste, but everyone in economics should read Blatt. He is the most forgotten of all forgotten economists. Although I think the approach is not his own; I think it goes back to von Neumann, but I don't know what it is called.)
> then it seems we all agree on the general principles in broad strokes, and we don't need to throw out supply and demand after all
I disagree, we need to throw it out, because there is a much better alternative approach. See above.
I've watched the former chief economist of Goldman Sachs argue with his right-hand man about whether the big drop in crude prices in 2014 was due to supply or demand. If they can't figure it out, what hope do the rest of us have?
In almost all cases, neither supply nor demand are observable. They change through time - but we don't have a good model for how they change. The curves are nonlinear, and there's feedback between changes in supply and changes in demand.
Finally, in macroeconomics, even the theory is incoherent. The Sonnenschein-Mantel-Debreu theorem says that the aggregate demand curve can take any shape - it need not be 'downward sloping'.
You know post-Keynesianism is just a flavour of dynamic stochastic general equilibrium models, right?
Those “strange assumptions” are what distinguishes economics from disciplines whose models are so underspecified that they can’t be proven wrong. Economic theory can at least say that if certain assumptions are met, some setup is optimal.
No, it isn't (as viburnum explains, you might have confused it with new-Keynesian). Look what Steve Keen does with his Minsky program - that's decidedly not just a DSGE flavor, it's a fully dynamic model.
I would say the distinguishing features of the post-Keynesian approach are fully dynamic modelling and treating uncertainty differently from risk. Both features go back to Keynes.
I'm no economist or game theorist but in my experience with signals/systems, the "steady state" (which is an equilibrium, I guess) is the least interesting and important to the analysis of how a system behaves. Response to dynamics is far more important to understand and design around.
The classic failure for me is the damping harmonics you get from the delay in a corrective response to a change input. It's control theory 101.
Loans take time to approve and issue, therefore interest rate changes, even if all the other assumptions stack up, suffer from such control harmonics. They end up being just as badly targeted as the discretionary fiscal policy decisions they are supposed to replace.
That problem is hand-waved away using 'expectations', which, when you dig down, requires the entities in the calculation to have perfect foresight over infinite time, i.e. they can see the future.
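The delay effect described above is easy to see in a toy simulation (gain and delay values invented): a proportional correction based on a lagged observation of the state produces overshoot and ringing, while the same correction applied without delay converges monotonically:

```python
# Delayed proportional correction toward a target of 0 (parameters invented).
def simulate(delay, gain=0.5, steps=40):
    x = [1.0] * (delay + 1)  # start displaced from the target
    for t in range(delay, delay + steps):
        # the correction reacts to the state as it was `delay` steps ago
        x.append(x[t] - gain * x[t - delay])
    return x

no_delay = simulate(0)  # shrinks monotonically toward 0
delayed = simulate(2)   # overshoots, changes sign, rings down slowly
print(no_delay[:6])
print(delayed[:12])
```

The same gain that converges smoothly with no delay produces damped oscillation once the response lags; a higher gain or a longer delay eventually makes the loop unstable outright, which is the textbook control-theory failure being described.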
And such expectations would 'correct' the discretionary fiscal policy approach too.
You seem to be talking about macroeconomics, which is a very different animal than supply/demand dynamics.
Macroeconomics is a science all by itself, and in some ways a failed one. It may even be impossible as a (public) science, since the object of study is intelligent, aware of the findings, and has an incentive to change in response to them.
This has very little to do with the basic microeconomics of supply/demand dynamics, which could better be named "price theory".
> Mankiw’s textbook covers the abstract theory that underpins economics as it has been understood for decades. It is about supply and demand, about how prices can be used to match production of a good to its consumption, and about the power of markets as a tool for allocating scarce resources.
I've worked with lots of Harvard graduates over the years. They all seemed to be cloned with some kind of "group-think" about Economics. Essentially, they were all convinced that markets solved big problems, and anything vaguely Keynesian or Socialist or Marxist was doomed to failure.
So I guess I view Harvard Economics as a tool of those with power, and probably something which has contributed to creating a worse world. (Hey, I believe markets have benefits too, but not to the point of dismissing moderating policies.)
Keep reading. It goes so far as to hold Card up as some master of this new data-oriented movement - the guy massages it better than the girl offering you a happy ending. And it goes on to highlight other orthodox turned radical anti-capitalists.
Anybody skilled enough can take a sample of data and play with it enough - fill in holes with other data - exclude some points - until they get the story they want. To some extent there needs to be a mechanism (beside "people are stupid") to tie things together. And this has been the general MO of left-wing economists. They lost ideas and theory so they moved to numbers. The numbers tend to not be predictive because they are so overly played with, so obviously everybody is just being irrational - the world is wrong not their theory because they proved it right with the data.
It is Vox after all - a self-proclaimed socialist website that has given page space to the Jacobins on how democratic socialism isn't social democracy - it is only a starting point for pure socialism.
Card doesn't rely on irrational firms. He showed with data that labor demand is inelastic at minimum wage in the region studied. When this is true, the benefits of increasing minimum wage outweigh the costs.
In that infamous study he used phone surveys that produced very strange results, like a doubling of the number of employees or a switch to entirely full-time staff. When the study was redone with payroll reports, the results reversed - back to the standard minimum-wage effect.
So then they republished and it was like "well if we only look at this group and we take out these and adjust this number" they were able to recover their results but the data hacking was pretty terrible.
If you are going to write a study like that, relying on phone data screams bias (and these were two economists who couldn't claim they didn't know better).
Citation, please? Card's matching analysis has been repeated many times (e.g., Dube, et al), and each time, they have found little demand elasticity for labor at the extremely low end.
What do you mean by that? Do you mean that many jobs will eventually be lost, or do you mean that if you raise the minimum wage high enough, there will be lost jobs. The first claim has no supporting evidence, and nobody is arguing against the second claim.
Dude, you have no idea what socialism is. The vox article did briefly mention someone close to socialism, but in a pretty dismissive way. You sound like a straight up ass backwards libertarian. We could big data our way to socialism, maybe, but we could also big data our way to worse outcomes than currently. Technocratic solutions are pretty conservative at this point. Either you are ok with capitalism alienating and exploiting the vast majority of the world or you are not ok with it.
Vox's founders are straight socialists. The class seems to be run by a left wing economist with the intent to show off left wing economics. It will be used under the guise of scientific data, but really just an excuse to push newer students in a predetermined direction.
You are mixing up socialism (nationalizing the means of production) with the social democratic philosophy (leaving the means of production private but shifting tax burden to the wealthy to provide more social programs to benefit the poor). The Vox founders are not socialists, despite what the kooky Mises Institute might tell you.
Go read Jacobin. They published a big article in Vox too where the whole point was to say that social democracy (e.g. the Scandinavian countries) is not democratic socialism. They explicitly said that there is no difference between their socialism (and communism) except for the starting point: at the ballot box, not a revolution. The point of that article (explicitly) was to say that the Scandinavian countries don't go far enough. They don't believe you can push capitalism to a point where it melts down anymore, but still believe in the same changes; it just needs to start in the political sphere.
Klein literally says "I'm a socialist" or "I write for a socialist magazine" (Jacobin). Yglesias is also part of that Jacobin group. They aren't social democracy supporters, but explicitly democratic socialists.
I'm pretty well read on my socialists and anarchists, because I used to be one (I worked in an opposition campaign against the Democratic Berkeley mayor because she wasn't liberal enough when I was in college).
In what way? He has never even written an article for Jacobin. The closest thing is a Jacobin article interviewing him and other pundits for some analysis of the Democratic primaries.
Never said Sunkara was, and IIRC the article I'm referring to was written by Day; just pointing out that those with Jacobin are strict socialists, not some watered-down social democrats.
The second quote was in reference to that, not the first. He seems to quote it as self-descriptive, though.
I've been reading Yglesias and Klein since before dailykos started and they had a name.
And Yglesias has been interviewed by them, sat on panels with them, and written in other places alongside people from Jacobin and Mother Jones. You can't seriously be claiming he isn't in that intellectual cohort. That's rich.
Not that they didn't start out by throwing the entire existing theory out of the window. I'm no expert, but in any discipline remotely aspiring to be a "science", I'd take that as the one glaring sign by which one would be able to tell.
>That shift could change economics itself, by attracting a new breed of students who are intrigued by the field’s new empiricism, not put off by its mathiness and high theory. It could make economics departments more diverse, and more open to new perspectives from women and students of color.
>He also gives Mankiw credit for moving the curve of Ec 10 to match the curves of other large Harvard classes, based on research showing that unnecessarily tough grading of economics classes disproportionately discourages women from taking them.
I do not understand how attracting members of underrepresented classes benefits them when standards need to be lowered to do so. Sure, the practice opens up opportunities to a more diverse sampling of individuals in the short term, but in the long term you devalue the credentials and, as bad or worse, you risk churning out graduates who are more qualified on paper than in reality.
I feel like this is a dangerous, growing trend in modern Western society and I do not understand how nobody seems to see it as a problem.
> I do not understand how attracting members of underrepresented classes benefits them when the standard needs to be lowered to do so.
I’m a woman with an undergrad economics degree. I struggled with intermediate micro and econometrics so much that I nearly lost my scholarship. The professor was known for having one of the toughest curves and certainly didn’t “lower the standards” for anyone. I was woefully unprepared for these classes. All of my classmates had taken statistics and calculus and I had only gotten as far as y=mx+b. My peers were learning econometrics. I was learning econometrics and statistics. My peers were learning micro. I was learning micro and calculus.
I had always been a good student but I was really proud of those two terrible grades because I worked my ass off to get them and I learned more (and taught myself more) from those two classes than any other undergrad course I took.
Not only did struggling through undergrad econ put me head and shoulders above my peers in law school where a lot of people are afraid of math, it gave me a whole lot of confidence. It was a small miracle that I managed to pass those classes. But I did. Not because the standard was lower, but because I didn’t give up or let myself feel too embarrassed by how little I knew. And now when I don’t understand something or I want to learn about something that’s completely over my head and outside my comfort zone, there’s not a doubt in my mind that I can. And that’s been pretty fucking beneficial.
That's not what that quote implies. It implies that underrepresented demographics were less likely to enroll in classes that were known as having harsher grading curves than other classes. It doesn't imply that the standards for passing the class were lowered or the performance of underrepresented demographics was lower than non-underrepresented demographics once they enrolled in the course.
>It doesn't imply that the standards for passing the class were lowered
That's literally what it implies. The curve was relaxed. It became easier to meet the standard of a minimum pass grade for a given level of performance.
> or the performance of underrepresented demographics was lower than non-underrepresented demographics once they enrolled in the course
Nor did I claim it does. But lower standards (easier to pass/earn a high grade) imply poorer qualification/ability.
Curves by definition bring up the grades of those who perform worse, without changing how they perform. In this example, if we peel away the euphemism, women were deterred from enrolling because the class was too hard to pass, so it was made easier to pass without changing the content. That does not necessarily mean that the women now taking the course are poor performers, but it does mean that on average the course will be graduating more students who would not have met previous standards.
>That's literally what it implies. The curve was relaxed. It became easier to meet the standard of a minimum pass grade for a given level of performance.
A curve can be shifted without changing the standards for passing the class. E.g., the original curve allowed only 5% of the students to get A's, while the new curve allows 10%. You can get that additional 5% by shifting B's, C's, and D's upward, which doesn't change the cutoff for failing with an F.
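To make the point concrete, here's a minimal sketch with made-up grade bands (the percentages are hypothetical, not from any actual syllabus): a percentile-based curve can be relaxed at the top while the failing cutoff stays fixed.

```python
def curve_grade(percentile: float, a_share: float) -> str:
    """Assign a letter grade from a student's class percentile rank.

    Bands are cumulative from the top: the top `a_share` of the class
    gets an A, the next bands get B/C/D, and the bottom 5% fails
    regardless of how the upper bands are drawn.
    """
    if percentile >= 100 - a_share * 100:
        return "A"
    if percentile >= 40:
        return "B"
    if percentile >= 20:
        return "C"
    if percentile >= 5:
        return "D"
    return "F"  # failing cutoff: bottom 5%, unaffected by a_share

# A student at the 92nd percentile: B under a 5%-A curve, A under a 10%-A curve.
print(curve_grade(92, 0.05))  # B
print(curve_grade(92, 0.10))  # A
# A student at the 3rd percentile fails under either curve.
print(curve_grade(3, 0.05), curve_grade(3, 0.10))  # F F
```

The point of the sketch: widening the A band remaps B's (and, with similar tweaks, C's and D's) upward without touching the F threshold, so "relaxing the curve" and "lowering the passing standard" are separable claims.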
>Curves by definition bring up the grades of those who perform worse
Curves can also drag down students who are performing at a high level. If 10% of the class has performance deserving of A's but the curve says only 5% of the class can have an A, then some group is getting B's instead.
What it really implies is that minorities are more interested when the class is not an obvious rationale for the perpetuation of a status quo they know from personal experience is unjust. There's good money to be made in being an Uncle Tom, but Harvard students have more attractive options.
- Part I: Equality of Opportunity
- Part II: Education
- Part III: Racial Disparities
- Part IV: Health
- Part V: Criminal Justice
- Part VI: Climate Change
- Part VII: Tax Policy
- Part VIII: Economic Development and Institutional Change
This really belongs in Harvard's "JFK School of Government", not economics.
Possible topics for a modern economics intro class:
- Instability and equilibrium, or why markets oscillate.
- From zero to one, the tendency to and effects of monopoly and near-monopoly.
- Externalities, their uses and discontents.
- Debt vs. equity vs. what tax policy rewards.
- Scarce resources that don't map to money - attention and time.
- Finance as a system decoupled from productive activity
[1] https://opportunityinsights.org/wp-content/uploads/2019/05/E...