Simon has a nice post on the contrast between economic models which are theoretically coherent but empirically weak, such as microfounded DSGE models, and empirically stronger but theoretically weak models such as VARs. This poses the question: why do we need both?
To see why, think about American football. A team has four attempts (“downs”) to advance ten yards; if it fails to do so, its opponents gain possession. Teams therefore often punt the ball downfield on fourth down, so that they concede possession as far from their own goal as possible.
How would an economist model this behaviour?
We could do so in atheoretical statistical terms, by simply regressing the probability of a team punting upon a few variables: distance from goal, yardage needed, quality of running backs and so on. This should yield decent predictions.
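To make this concrete, here is a minimal sketch of what such an atheoretical model might look like - a logistic regression of a punt/no-punt indicator on a handful of game-state variables. All the data, variable names and coefficients below are invented purely for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical play-by-play data: one row per fourth-down situation.
# All column names and numbers are invented for illustration.
rng = np.random.default_rng(0)
n = 1000
plays = pd.DataFrame({
    "yards_to_goal": rng.uniform(1, 99, n),    # distance from the opponent's goal line
    "yards_to_go": rng.uniform(1, 15, n),      # yardage needed for a first down
    "score_margin": rng.integers(-21, 22, n),  # own score minus opponent's score
})
# Fake outcome: teams deep in their own half needing many yards tend to punt.
logit = 0.04 * plays["yards_to_goal"] + 0.3 * plays["yards_to_go"] - 4
plays["punt"] = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

# The atheoretical model: regress the punt decision on observable game-state variables.
X = sm.add_constant(plays[["yards_to_goal", "yards_to_go", "score_margin"]])
model = sm.Logit(plays["punt"].astype(float), X).fit(disp=0)
print(model.params)  # estimated effect of each variable on the log-odds of punting
```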
But what if the rules were to be changed one season, so a team were allowed six downs before conceding possession? Teams would then be much less likely to punt, and more likely to try to win yardage by running or passing. Our atheoretical model would fail. But a microfounded model based upon teams trying to rationally maximize expected points would probably do better.
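And here is a minimal sketch of the microfounded alternative: a stylised decision rule in which the team punts only if punting has higher expected points, with the number of downs as an explicit parameter. The expected-points numbers are invented placeholders, not estimates - the point is simply that the same model can be re-solved under a six-down rule, whereas the regression above was estimated under the four-down regime and is silent about any other:

```python
# A stylised decision rule, not the statistical model above: punt if and only if
# punting has higher expected points than going for it. The probabilities and
# point values are invented placeholders, not estimates from real data.

def p_convert(yards_to_go: float, attempts_left: int) -> float:
    # Chance of gaining the needed yardage with the attempts remaining
    # (crude placeholder: each attempt succeeds with probability 5/(5+yards)).
    p_single = 5.0 / (5.0 + yards_to_go)
    return 1.0 - (1.0 - p_single) ** attempts_left

def ep_go_for_it(yards_to_go: float, attempts_left: int) -> float:
    p = p_convert(yards_to_go, attempts_left)
    return p * 2.0 - (1.0 - p) * 2.0   # keep the ball vs. turn it over on the spot

def ep_punt() -> float:
    return -0.4   # concede possession, but far from your own goal

def should_punt(yards_to_go: float, downs_allowed: int = 4) -> bool:
    attempts_left = downs_allowed - 3  # the decision is being taken on the fourth down
    return ep_punt() > ep_go_for_it(yards_to_go, attempts_left)

# Re-solving the *same* model under a hypothetical six-down rule flips the decision,
# which is exactly what a regression fitted to four-down data cannot tell us.
print(should_punt(yards_to_go=8, downs_allowed=4))  # True  -> punt
print(should_punt(yards_to_go=8, downs_allowed=6))  # False -> go for it
```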
My analogy is not original. It’s exactly the one Thomas Sargent used (pdf) back in 1980 to argue for what we now call microfounded models. Such models, he said, allow us to better predict the effect of changes in policy:
The systematic behavior of private agents and the random behavior of market outcomes both will change whenever agents' constraints change, as when government policy or other parts of the environment change. To make reliable statements about policy interventions, we need dynamic models and econometric procedures which are consistent with this general presumption.
The question is: how widely applicable is Sargent’s metaphor?
I suspect it applies in many contexts, not least regulatory behaviour. It implies, for example, that simple regulations requiring banks to hold more capital will lead not necessarily to safer banks but to banks shovelling risk into off-balance-sheet vehicles.
I’m not so sure, however, about its applicability to macroeconomic policy, in part simply because people have better things to do than pay attention to policy changes. For example, the Thatcher government in 1980 announced targets for monetary growth which, it hoped, would lead to lower inflation expectations and hence to lower actual inflation without a great loss of output. In fact, output slumped, perhaps in part because inflation expectations didn’t fall as the government hoped. And more recently, households’ inflation expectations have been formed more by a rule of thumb (“inflation will be the same as it has been, adjusted up if it’s been low and down if high”) than by the inflation target.
This is not to say that the Sargent metaphor and the Lucas critique are always irrelevant. They might well be useful in analysing big changes to policy. The question is: what counts as big?
My point here is the one Dani Rodrik has made. The right model is a matter of horses for courses. Atheoretical statistical relationships serve us well most of the time. But common sense tells us they will sometimes fail us. Our problem is to know when that “sometimes” is.
This is not to say that the microfounded model must always be one based upon rational utility maximization; it could instead be one in which agents use rules of thumb.
In fact, Sargent’s metaphor tells us this. David Romer has shown (pdf) that football teams’ behaviour on the fourth down “departs from the behavior that would maximize their chances of winning in a way that is highly systematic, clear-cut, and statistically significant.” There’s much more to microfoundations than simple ideas of rational maximization.
There are also situations where there is simply no good data to rely on, and statistical work such as a VAR is out of the question.
Posted by: Christian Zimmermann | August 28, 2017 at 02:58 PM
yes I wonder about this quite a lot - where, in the context of macro, would you go wrong if you went off an empirical regularity but get it right if you modelled behaviour?
it seems plausible that things like consumption or business investment might usually rise in response to a fall in interest rates, but sometimes fall. Maybe you could catch that empirically by conditioning on the right variables, but that might be asking too much of the data. And maybe a model would catch it. That would be more convincing if macro models were better!
Posted by: Luis Enrique | August 28, 2017 at 04:04 PM
I think you have to ask a bigger question - too big for most of the economics profession, not least Rodrik, to contemplate: when is it appropriate to use a model at all?
A model will only tell us what a model tells us. You have the answer, right or wrong, before you have started. An econometric model (such as a VAR) says nothing about causation (i.e. it does not tell us why things happen). Both restrict us to quantifiable analysis. Do you think you could get anywhere in understanding, say, the real causes of a financial crisis, world wars or what shapes a person's personality by working this way?
No: like any investigator, you need to put the pieces together and work from the ground up. That requires that you engage with real primary documentation, quantifiable and non-quantifiable.
You claim to be Marxian. Marxians are sceptical of models. I think you need to think about why.
Posted by: Nanikore | August 28, 2017 at 05:52 PM
The problem with DSGE models is far worse than you suggest. They might be theoretically coherent, but their theoretical basis is bollocks.
Posted by: reason | August 29, 2017 at 09:44 AM
There is a much bigger problem in this discussion, one that is usually ignored by those with the usual "practical" approach to statistics and econometrics:
* Statistical modelling is based on the assumption that all samples are drawn from the same population (ergodicity).
* It is left as an exercise for the econometrician to prove that assumption is valid.
* That validity is easier to establish with synchronic (cross-sectional) studies than with diachronic (time-series) samples.
JM Keynes was quite alert to this issue ("you can never bathe in the same river twice").
The "Austrian Approach" of the "Lucas critique" and of the "Friedman as-if Principle" of microfoundations from axioms is an attempt to hand-wave the issue away: by merely positing that there are universal and time-independent axioms of Economics from which every economic fact can then be deduced, matching the actual sample becomes a simple matter of "calibration".
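A toy illustration of why that assumption matters (all numbers invented): fit a regression on samples drawn from one "population", then apply it after the population changes, and the out-of-sample errors blow up even though the in-sample fit looked fine.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Invented data: in the first "regime" y responds strongly to x; after the
# regime change (second half of the sample) the response is much weaker.
n = 200
x = rng.normal(size=n)
beta = np.where(np.arange(n) < n // 2, 2.0, 0.3)
y = beta * x + rng.normal(scale=0.5, size=n)

# Estimate on the first regime only, then apply the fitted model to the second.
fit = sm.OLS(y[:n // 2], sm.add_constant(x[:n // 2])).fit()
pred = fit.predict(sm.add_constant(x[n // 2:]))

in_sample_rmse = np.sqrt(np.mean(fit.resid ** 2))
out_of_sample_rmse = np.sqrt(np.mean((y[n // 2:] - pred) ** 2))
print(in_sample_rmse, out_of_sample_rmse)  # the out-of-sample error is several times larger
```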
Posted by: Blissex | August 29, 2017 at 12:53 PM
".You have the answer, right or wrong, before you have started."
this is really not true - honestly, from personal experience - plus it is a common reason for e.g. rejecting a paper, if the model's results are obvious from the set-up.
there will be things non-quantitative investigation is good for, and other things for which you need a model.
Blissex, I don't see the force of that criticism, it's just a fancy way of saying the past isn't always a good guide to the future. But it's often the best guide we've got, what are you going to do, ignore historical experience just because the most sensible statistical method formally implies you are estimating a constant data generating process?
Posted by: Luis Enrique | August 29, 2017 at 05:50 PM
All good common sense, Chris. Which general approach to use depends very much on the questions you're trying to answer (as well, of course, as on what data you actually have). "All models are wrong but some are useful" - but only for specific uses. While Reason and Blissex are right that real-life DSGEs have problems beyond being the wrong tool for short-term forecasting, they are in fact the wrong tool for that anyway.
Nanikore is simply wrong because the only alternative to clearly specified testable formal models is not no modelling but vague, untestable, informal and internally inconsistent modelling. Read your Hume and Kant - we can only understand anything at all in the world through models (will the sun rise tomorrow? Your model, an atheoretic determinate model calibrated to experience, says "yes").
Certainly we should recognise that there are many things in life for which we will never be able to formalise the model, and there are also things where we may never find the RIGHT formal model. But where we can formalise we should at least try.
Posted by: derrida derider | August 30, 2017 at 04:44 AM
"Read your Hume and Kant - we can only understand anything at all in the world through models (will the sun rise tomorrow?)"
There is a lot more to this, Derrida derider, and with a name like yours you should know it better than anyone. There were the Popper-Habermas debates, but really the limitations of formalism in the social sciences were well understood by the late 1960s. Economics ended up being a rogue subject.
Of course you can pretend to formally model social behaviour - the part, as you say, that you can model - but it will almost certainly tell you the least interesting things about what we need to know.
Can you tell me one thing in economics we have learned about the real economy through formal modelling?
Posted by: Nanikore | August 30, 2017 at 09:16 AM
«don't see the force of that criticism»
Well, if anything it has pretty big force against the claim of the "austrian approach" that all economic facts, including macroeconomic ones, can be deduced from eternally valid, axiom-based microfoundations.
«just a fancy way of saying the past isn't always a good guide to the future.»
Well, it is not quite as negative as that: the argument is that the past is an approximate guide to the future as long as the samples are drawn from the same population, but not when there is a "regime change", and that therefore two things matter a lot from an econometric and theoretical point of view:
* Detecting population changes (econometric).
* Changing the model when that happens (theoretical).
That's pretty much what JM Keynes wrote too...
«But it's often the best guide we've got, what are you going to do, ignore historical experience»
It is only a good guide between regime (that is, population) changes, and when such a change happens *parts* of historical experience must be ignored (because usually regime changes don't require a total rewrite of the model).
«just because the most sensible statistical method formally implies you are estimating a constant data generating process?»
That "implies" should be "is entirely dependent on this critical assumption".
Not checking the applicability of that assumption makes a lot of studies rather dubious, even large-sample diachronic studies like those in medicine, never mind the short time-series ones in econometrics.
Posted by: Blissex | August 30, 2017 at 01:50 PM
«Certainly we should recognise that there are many things in life for which we will never be able to formalise the model, and there are also things where we may never find the RIGHT formal model. But where we can formalise we should at least try.»
Ahhhhhhhh, but this implies a much bigger discussion on "what is the study of the political economy", and this is an approach to that discussion that is not productive.
For me the issue is not so much finding the "RIGHT formal model", because that presumes that such a thing exists - that is, that the political economy has a time-invariant structure, as the physical sciences have so far assumed and found for natural phenomena.
My contention is that the political economy does not admit of unchanging models/laws, and models/laws change within human lifetime scales.
Therefore studies of the political economy are not a (hard) science, but what I would call a "discipline", that is a systematic and partially quantitative approach to figuring out the currently applicable models/laws and detecting when they change.
It is not unrelated to C Dillow's channeling of A Glyn that the proper object is the study of contingent mechanisms rather than universal laws or models.
Posted by: Blissex | August 30, 2017 at 02:19 PM
BTW L Syll at RWER has made a collection of posts about the critique by JM Keynes of mindless application of econometrics to changing political economies, e.g.
https://rwer.wordpress.com/2012/12/04/put-mindless-econometrics-and-regression-analysis-where-they-belong-in-the-garbage-can/
https://rwer.wordpress.com/2016/04/08/keynes-critique-of-scientific-atomism/
https://rwer.wordpress.com/2016/12/03/keynes-on-the-devastating-inconsistencies-of-econometrics/
https://rwer.wordpress.com/2013/02/25/keynes-on-mathematical-economics/
https://rwer.wordpress.com/2016/12/23/keynes-critique-of-econometrics-the-nodal-point/
https://rwer.wordpress.com/2017/01/18/keynes-critique-of-econometrics-as-valid-today-as-it-was-in-1939/
I am mentioning L Syll because it is a righteous pet peeve of his, but I learned much the same things in introductory courses in statistics and econometrics decades ago (taught by diabolically interesting people), and it saddens me that they have to be made again... Nothing new there though :-).
Posted by: Blissex | August 30, 2017 at 02:50 PM
it's not as if econometricians don't think about testing for structural breaks etc., but fretting about 'unchanging' laws is a red herring. The model might be unchanging if taken literally but nothing stops you from using it to describe something contemporary and then throwing it away later when you think things have changed. Ditto this non-ergodic rubbish. Suppose you tell me that Tory governments are good for the rich and bad for the poor. How impressed would you be if I responded by "well, the real world is non-ergodic, don't be assuming the data generating process is unchanging, who knows what future Tories will be like?" - to a point, fine, maybe May will crack down on inequality. I shan't hold my breath. But you're also a fool if you ignore history. Yet if I'd run a regression showing how Tory govts tend to be bad for the poor, you'd be saying, well now, we can't be using statistical methods in this context because blah blah?
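for concreteness, here's a minimal sketch of the kind of check I mean - a hand-rolled Chow test for a break at a known date, on simulated data (everything below is made up purely for illustration):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data: the slope changes halfway through the sample (a "regime change").
n = 200
x = rng.normal(size=n)
slope = np.where(np.arange(n) < n // 2, 1.0, 0.2)
y = slope * x + rng.normal(scale=0.5, size=n)

def ssr(xs, ys):
    """Sum of squared residuals from an OLS fit with a constant."""
    return sm.OLS(ys, sm.add_constant(xs)).fit().ssr

# Chow test: compare the pooled fit with separate fits either side of a candidate break.
k = 2          # parameters per regression (constant + slope)
b = n // 2     # candidate break point (assumed known here, for simplicity)
ssr_pooled = ssr(x, y)
ssr_split = ssr(x[:b], y[:b]) + ssr(x[b:], y[b:])
f_stat = ((ssr_pooled - ssr_split) / k) / (ssr_split / (n - 2 * k))
p_value = stats.f.sf(f_stat, k, n - 2 * k)
print(f_stat, p_value)  # a small p-value is evidence of a structural break
```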
Posted by: Luis Enrique | August 31, 2017 at 10:32 AM
(when I say non-ergodic rubbish, I mean rubbish if used stupidly. Of course there are times when things really are changing enough to make trying to estimate a constant DGP misguided.)
Posted by: Luis Enrique | August 31, 2017 at 10:34 AM
«it's not as if econometricians don't think about testing for structural breaks etc.»
Perhaps some do, but I keep seeing in the literature lots of regressions that go back decades just like that. Especially by the well-rewarded sell-side Economists.
«model might be unchanging if taken literally but nothing stops you from using it to describe something contemporary and then throwing it away later when you think things have changed.»
Excellent idea :-).
«Suppose you tell me that Tory governments are good for the rich and bad for the poor.»
P Sylos-Labini introduced into his models, several decades ago, a "political variable" which had value 1 in election years. Amazingly, R^2 improved a lot :-). IIRC A Alesina then introduced this into anglo-american Economics and made a career starting from that.
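A toy version of that trick, with invented numbers: add an election-year dummy to an otherwise identical regression and compare R^2.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Invented annual data: growth depends on some driver x, plus a boost in election years.
years = np.arange(1960, 2020)
x = rng.normal(size=years.size)
election = (years % 5 == 0).astype(float)  # the "political variable": 1 in election years
growth = 1.0 * x + 1.5 * election + rng.normal(scale=0.7, size=years.size)

base = sm.OLS(growth, sm.add_constant(x)).fit()
with_dummy = sm.OLS(growth, sm.add_constant(np.column_stack([x, election]))).fit()
print(base.rsquared, with_dummy.rsquared)  # R^2 improves once the dummy is included
```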
«How impressed would you be if I responded by "well, the real world is non-ergodic,»
Then I would say: "Then political economy studies are a branch of history studies".
But that is not at all what I myself (or, I think, JM Keynes) would argue: the political economy is (mostly) ergodic in the short-medium term (and small-medium scale), and non-ergodic in the long-term (and large scale). That's part of why I say that studying the political economy is a "discipline" and not a (hard) science.
But while there are some parts of the literature, especially in finance, that do seem to be based on that understanding, I keep seeing arguments that every economy or every period is "sui generis", or that there are unchanging laws of Economics that apply to all times and all economies.
In that regard the laughable "end of history" claims have translated into the claim that "calibration" could be done once and forever :-).
Posted by: Blissex | August 31, 2017 at 12:37 PM
«non-ergodic in the long-term (and large scale).»
As to this, JM Keynes is not totally an extremist: he says nobody can estimate what the price of a commodity will be in 20 years' time ("uncertainty"), but then he did have expectations of what coarse aggregates could be like in 50-80 years' time in his other famous essay, "Economic Possibilities for our Grandchildren". That seems quite fair to me, and I should have written "weakly ergodic" rather than "non-ergodic" - so weakly ergodic that I think the best we can do is back-of-the-napkin estimates rather than models.
Posted by: Blissex | August 31, 2017 at 06:43 PM
But Blissex, a back-of-the-napkin estimate IS a model. In fact it's a rather more formal and testable one than those we generally use to form our opinion on whether Tories are bad for the poor.
That was my point.
Posted by: derrida derider | September 01, 2017 at 06:20 AM
«a back-of-the-napkin estimate IS a model»
Uhmmmm, that is stretching things a bit: it is simple arithmetic based on assumptions about autocorrelation. A model, in an ideal world, would be explicative (at some level, not necessarily "microfoundations") or at least analytic, but then Economics is full of much worse hand-waving that gets called a "model", so admittedly the standards are very low.
«In fact it's a rather more formal and testable one than those we generally use to form our opinion on whether Tories are bad for the poor.»
Well, general voters are not model builders or users, they use proxies. But I guess it is possible to build fairly accurate models to verify either claim, while looking for structural breaks, using "political variables", etc.
Not that it matters: making the poor cheaper (more "affordable" social insurance, more "competitive" wages) is quite clearly the main electoral promise of tory campaigns, and voters trust them to deliver on it.
If anything, any back-of-the-napkin or proper model that showed they do deliver would satisfy tory voters that democracy works.
Posted by: Blissex | September 01, 2017 at 12:53 PM