For those who are not economists, let’s be clear what the microfoundations project in macro is all about. The idea is that a macro model should be built up from a formal analysis of the behaviour of individual agents in a consistent way. There may be just a single representative agent, or increasingly heterogeneous agents. So a typical journal paper in macro nowadays will involve lots of optimisation by individual agents as a way of deriving aggregate relationships.
And later on:
The trouble with this approach, as New Classical economists demonstrated, was that the theoretical rationale behind equations often turned out to be inadequate and inconsistent. The Lucas critique is the most widely quoted example of where this happens. So the microfoundations project said let's do the theory properly and rigorously, so that we do not make these kinds of errors. In fact, let's make theoretical ('internal') consistency the overriding aim, such that anything which fails on these grounds is rejected. There were two practical costs of this approach. First, doing this was hard, so for a time many real-world complexities had to be set aside (like the importance of banks in rationing credit, for example, or the reluctance of firms to cut nominal wages). This led to a second cost, which was that less notice was taken of how each aggregate macro relationship tracked the data ('external' consistency). To use a jargon phrase that sums it up quite well: internal rather than external consistency became the test of admissibility for these models.

Now let's review the scientific method:
1. Observation and description of a phenomenon or group of phenomena.
2. Formulation of an hypothesis to explain the phenomena. In physics, the hypothesis often takes the form of a causal mechanism or a mathematical relation.
3. Use of the hypothesis to predict the existence of other phenomena, or to predict quantitatively the results of new observations.
4. Performance of experimental tests of the predictions by several independent experimenters and properly performed experiments.
If the experiments bear out the hypothesis it may come to be regarded as a theory or law of nature (more on the concepts of hypothesis, model, theory and law below). If the experiments do not bear out the hypothesis, it must be rejected or modified. What is key in the description of the scientific method just given is the predictive power (the ability to get more out of the theory than you put in; see Barrow, 1991) of the hypothesis or theory, as tested by experiment. It is often said in science that theories can never be proved, only disproved. There is always the possibility that a new observation or a new experiment will conflict with a long-standing theory.
Do you notice the discrepancy? In the formal scientific method, external validity is the be-all and end-all. Internal validity is only "scientific" in the sense that Marxism is "scientific socialism." Making internal validity the test of admissibility is the mark of pseudo-science; it's a large part of what marks the dividing line between the two. It's nice when theories are internally consistent, but most often they are not in the early stages. Internal consistency gets added as a natural part of the research project (think of developments in the theory of evolution); trying to force it is a mistake that will rapidly push a theory away from science and into the realm of pseudo-science. Let it happen naturally as more observations come in and more parts of the theory are tested more rigorously.
As a side note, I do have to express my dissatisfaction with the emphasis on experiments: no one disputes that astronomy is a science, and experiments are very rarely possible there. Testable predictions and observations are the true benchmark. In astrophysics, if I predict that a black hole will show up under certain conditions, and further observations confirm that black holes do show up under those conditions, then I've successfully made a testable hypothesis and confirmed it without experimentation; if it doesn't show up, then I have evidence against my hypothesis. The same holds for economics.
A second point, regarding the first paragraph. A huge and far from sufficiently grounded assumption is being made when it is asserted that the behavior of individual agents should be the basic unit of analysis. While we experience the world as individual agents, our experience is in no way sufficient observational data to justify this as a starting assumption.
While hardly an area of expertise for me, what I do know about the behavior of groups is that they don't appear to act as individual maximizing agents. While Americans tend to believe that our thoughts and actions originate internally, this belief is not consistent across cultures, nor does it hold up all that well in observation. People seem to unconsciously coordinate and alter their behavior according to the actions of others; groups behave differently from the sum of the individuals in them. Even business articles on leadership or teamwork get at this point, and it's pretty obvious if you stop and examine your own actions rather than rationalizing them away. Given this, it's not clear that individual optimization is a good foundation. In fact, it may be actively detrimental to building models that make realistic predictions. While it may be an easy stopgap for creating models, far more extensive research into human behavior needs to be done before it can be accepted as a foundational assumption rather than as a messy shortcut to make modeling more tractable.
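To make the point concrete, here is a minimal sketch of Granovetter-style threshold dynamics, in which agents imitate others rather than deciding in isolation. The model and all parameters are my own illustration, not taken from any economics paper. The aggregate outcome differs wildly from the sum of isolated individual decisions, and a tiny change to the population flips it entirely:

```python
# Threshold-imitation sketch: each agent adopts a behaviour once enough
# *others* have adopted, so the aggregate is not the sum of isolated
# individual choices. All names and numbers here are illustrative.

def adopters(thresholds):
    """Iterate to a fixed point: agent i adopts once the number of
    adopters so far meets or exceeds thresholds[i]."""
    adopted = [False] * len(thresholds)
    changed = True
    while changed:
        changed = False
        count = sum(adopted)
        for i, t in enumerate(thresholds):
            if not adopted[i] and count >= t:
                adopted[i] = True
                changed = True
    return sum(adopted)

# 100 agents with thresholds 0, 1, 2, ..., 99: each adoption licenses
# the next, so a full cascade occurs.
uniform = list(range(100))
print(adopters(uniform))   # -> 100

# In isolation, only the threshold-0 agent would ever act. Removing the
# single threshold-1 agent breaks the chain and the cascade collapses.
gapped = [0] + list(range(2, 101))
print(adopters(gapped))    # -> 1
```

Both populations contain exactly one agent who would act on their own, yet one produces universal adoption and the other almost none, which is the kind of aggregate behavior a sum-of-individual-optimizers model struggles to predict.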
The fact that it continues to be used in a supposedly scientific context, without its shortcomings being acknowledged, is another major strike against economics as a science. There is nothing about the subject matter that makes it intractable to scientific analysis, but if those doing the analysis aren't using the scientific method, grounding their theory in observations and testing it against further observations, then we can hardly call what they're doing a science. It sounds a lot more like medieval scholasticism to me.