The “Marginalist” Revolution
One of the watersheds in the development of economic analysis in the nineteenth century was the widespread acceptance among economists of a price theory based on the demands of consumers, rather than just on the costs of producers. It was revolutionary not only as a theory of price but also in introducing new concepts and new methods of analysis that spread into other branches of economics.
Classical economics had regarded the amount of labor and other inputs as crucial factors determining the price of the resulting output. Karl Marx had taken this line of thinking to its logical extreme with his theory of the exploitation of labor, for labor was seen as the ultimate source of wealth, and therefore as the ultimate source of the income and wealth of the non-working classes, such as capitalists and landowners.{xli}
Although the cost-of-production theory of value had prevailed in England since the time of Adam Smith, an entirely different theory had prevailed in continental Europe, where value was considered to be determined by the utility of goods to consumers, which was what would determine their demand. Smith, however, disposed of this theory by saying that water was obviously more useful than diamonds, since one could not live without water but many people lived without diamonds—and yet diamonds sold for far more than water.{1000} But, in the 1870s, a new conception emerged from Carl Menger in Austria and W. Stanley Jevons in England, both basing prices on the utility of goods to consumers—and, more important, refining and more sharply defining the terms of the debate, while introducing new concepts into economics in general.
What Adam Smith had been comparing was the total utility of water versus the total utility of diamonds. In other words, he was asking whether we would be worse off with no water or no diamonds. In that sense, the total utility of water obviously greatly exceeded the total utility of diamonds, since water was a matter of life and death. But Menger and Jevons conceived of the issue in a new way—a way that could be applied to many other analyses in economics besides price theory.
First of all, Menger and Jevons conceived of utility as entirely subjective.{1001} That is, there was no point in third party observers declaring one thing to be more useful than another, because each consumer’s demand was based on what that particular consumer considered useful—and consumer demand was what affected prices. More fundamentally, utility varies, even for the same consumer, depending on how much of particular goods and services that consumer already has.
Carl Menger pointed out that an amount of food necessary to sustain life is enormously valuable to everyone. Beyond the amount of food necessary to avoid starving to death, there was still value to additional amounts necessary for health, even though not as high a value as to the amount required to avoid death, and there was still some value to food to be eaten just for the pleasure of eating it. But eventually “satisfaction of the need for food is so complete that every further intake of food contributes neither to the maintenance of life nor to the preservation of health—nor does it even give pleasure to the consumer.”{1002} In short, what mattered to Menger and Jevons was the incremental utility, what Alfred Marshall would later call the “marginal” utility of additional units consumed.
Returning to Adam Smith’s example of water and diamonds, the relative utilities that mattered were the incremental or marginal utility of having another gallon of water compared to another carat of diamonds. Given that most people were already amply supplied with water, the marginal utility of another carat of diamonds would be greater—and this would account for a carat of diamonds selling for more than a gallon of water. This ended the split between the cost-of-production theory of value in England and the utility theory of value in continental Europe, for economists in both places now accepted the marginal utility theory of value, as did economists in other parts of the world.
Essentially the same analysis and conclusions that Carl Menger reached in Austria in his 1871 book Principles of Economics appeared at the same time in England in W. Stanley Jevons’ book The Theory of Political Economy. What Jevons also saw, however, was how the concept of incremental utility was readily expressed in graphs and differential calculus, making the argument more visibly apparent and more logically rigorous than in Menger’s purely verbal presentation. This set the stage for the spread of incremental or marginal concepts to other branches of economics, such as production theory or international trade theory, where graphs and equations could more compactly and more unambiguously convey such concepts as economies of scale or comparative advantage.
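The advantage that Jevons saw in mathematical notation can be suggested with a small illustration, using a generic utility function rather than anything taken from Jevons’ or Menger’s own texts. If U(q) is the total utility a consumer derives from q units of a good, then marginal utility is the derivative of that total, and diminishing marginal utility is the statement that this derivative falls as q grows:

\[
MU(q) \;=\; \frac{dU}{dq}, \qquad \frac{d^{2}U}{dq^{2}} \;<\; 0.
\]

In this notation, the point made verbally above is simply that U(q) can be enormous, as with water, while dU/dq at the quantities people actually hold is small, and it is the latter that bears on price.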
This has been aptly called “the marginalist revolution,” which marked a break with both the methods and the concepts of the classical economists. This marginalist revolution facilitated the increased use of mathematics in economics, expressing cost variations in curves, for example, and analyzing rates of change of costs with differential calculus. However, mathematics was not necessary for understanding the new utility theory of value, for Carl Menger did not use a single graph or equation in his Principles of Economics.
Although Menger and Jevons were the founders of the marginal utility school in economics, and pioneers in the introduction of marginal concepts in general, it was Alfred Marshall’s monumental textbook Principles of Economics, published in 1890, which systematized many aspects of economics around these new concepts and gave them the basic form in which they have come down to present-day economics. Jevons had been especially at pains to reject the notion that value depends on labor, or on cost of production in general, but insisted that it was utility which was crucial.{1003} Alfred Marshall, however, said:
We might as reasonably dispute whether it is the upper or the under blade of a pair of scissors that cuts a piece of paper, as whether value is governed by utility or cost of production.{1004}
In other words, it was the combination of supply (dependent on the cost of production) and demand (dependent on marginal utility) which determined prices. In this and other ways, Marshall reconciled the theories of the classical economists with the later marginalist theories to produce what became known as neo-classical economics. His Principles of Economics became the authoritative text and remained so on into the first half of the twentieth century, going through eight editions in his lifetime.{xlii}
That Alfred Marshall was able to reconcile much of classical economics with the new marginal utility concepts was not surprising. Marshall was highly trained in mathematics and first learned economics by reading Mill’s Principles of Political Economy. In 1876, he called it “the book by which most living English economists have been educated.”{1005} Before that, Alfred Marshall had been a student of philosophy, and was critical of the economic inequalities in society, until someone told him that he needed to understand economics before making such judgments. After he did so, and saw circumstances in a very different light, his continuing concern for the poor led him to change his career and become an economist. He afterwards said that what social reformers needed were “cool heads” as well as “warm hearts.”{1006} As he was deciding what career to pursue, he recalled, “the increasing urgency of economic studies as a means towards human well-being grew upon me.”{1007}
Equilibrium Theory
The increased use of graphs and equations in economics made it easier to illustrate such things as the effects of shortages and surpluses in causing prices to rise or fall. It also facilitated analyses of the conditions in which prices would neither rise nor fall—what have been called “equilibrium” conditions. Moreover, the concept of “equilibrium” applied to many things besides prices. There could be equilibrium in particular firms, whole industries, the national economy or international trade, for example.
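What an equilibrium price means can be sketched minimally with a hypothetical linear demand and supply schedule, where the particular functional forms are illustrative only:

\[
Q_d(p) = a - b\,p, \qquad Q_s(p) = c + d\,p, \qquad
Q_d(p^{*}) = Q_s(p^{*}) \;\Longrightarrow\; p^{*} = \frac{a - c}{b + d}.
\]

At any price above p*, quantity supplied exceeds quantity demanded and the resulting surplus pushes the price down; at any price below p*, a shortage pushes it up. The equilibrium price p* is simply the price at which neither pressure is operating.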
Many people unfamiliar with economics have regarded these equilibrium conditions as unrealistic in one way or another, because they often seem different from what is usually observed in the real world. But that is not surprising, since the real world is seldom in equilibrium, whether in economics or in other fields. For example, while it is true that “water seeks its own level,” that does not mean that the Atlantic Ocean has a glassy smooth surface. Waves and tides are among the ways in which water seeks its own level, as are waterfalls, and all these things are in motion at all times. Equilibrium theory allows you to analyze what that motion will be like in various disequilibrium situations found in the real world.
Similarly, students in medical school study the more or less ideal functioning of various body parts in healthy equilibrium, but not because body parts always function ideally in healthy equilibrium—since, if that were true, there would then be no reason to have medical schools in the first place. In other words, the whole point of studying equilibrium is to understand what happens when things are not in equilibrium, in one particular way or in some other way.
In economics, the concept of equilibrium applies not only in analyses of particular firms, industries or labor markets, but also in the economy as a whole. In other words, there are not only equilibrium prices or wages but also equilibrium national income and equilibrium in the balance of trade. The analysis of equilibrium and disequilibrium conditions in particular markets has become known as “microeconomics,” while analyses of changes in the economy as a whole—such as inflation, unemployment or rises and falls in total output—became known as “macroeconomics.” However, this convenient division overlooks the fact that all these elements of an economy affect one another. Ironically, it was two Soviet economists, living in a country with a non-market economy, who saw a crucial fact about market economies when they said: “Everything is interconnected in the world of prices, so that the smallest change in one element is passed along the chain to millions of others.”{1008}
For example, when the Federal Reserve System raises the interest rate on borrowed money, in order to reduce the danger of inflation, that can cause home prices to fall, savings to rise, and automobile sales to decline, among many other repercussions spreading in all directions throughout the economy. Following all these repercussions in practice is virtually impossible, and even analyzing them in theory is such a challenge that economists have won Nobel Prizes for doing so. The analysis of these complex interdependencies—whether microeconomic or macroeconomic—is called “general equilibrium” theory. It is what J.A. Schumpeter’s History of Economic Analysis called a recognition of “this all-pervading interdependence” that is the “fundamental fact” of economic life.{1009}
The landmark figure in general equilibrium theory was French economist Léon Walras (1834–1910), whose complex simultaneous equations essentially created this branch of economics in the nineteenth century. Back in the eighteenth century, however, another Frenchman, François Quesnay (1694–1774), was groping toward some notion of general equilibrium with a complex table intersected by lines connecting various economic activities with one another.{1010} Karl Marx, in the second volume of Capital, likewise set forth various equations showing how particular parts of a market economy affected numerous other parts of that economy.{1011} In other words, Walras had predecessors, as most great discoverers do, but he was still the landmark figure in this field.
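The flavor of Walras’ simultaneous equations can be conveyed with a deliberately tiny two-good version, a stylized sketch rather than Walras’ own notation or system. Demand and supply in each market depend on both prices, so neither equation can be solved on its own:

\[
D_1(p_1, p_2) = S_1(p_1, p_2), \qquad D_2(p_1, p_2) = S_2(p_1, p_2).
\]

The two equations must be solved together for the pair of prices (p_1^*, p_2^*), and a disturbance in either market changes both solutions at once. With thousands of goods, the same logic yields thousands of interlocking equations, which is the “all-pervading interdependence” that Schumpeter emphasized and that the Soviet economists quoted above saw running through “the world of prices.”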
While general equilibrium theory is something that can be left for advanced students of economics, it has some practical implications that can be understood by everyone. These implications are especially important because politicians very often set forth a particular economic “problem” which they are going to “solve,” without the slightest attention to how the repercussions of their “solution” will reverberate throughout the economy, with consequences that may dwarf the effects of their “solution.”
For example, laws setting a ceiling on the interest rate that can be charged on particular kinds of loans, or on loans in general, can reduce the amount of loans that are made, and change the mixture of people who can get loans—lower income people being particularly disqualified—as well as affecting the price of corporate bonds and the known reserves of natural resources, {xliii} among other things. Virtually no economic transaction takes place in isolation, however much it may be seen in isolation by those who think in terms of creating particular “solutions” to particular “problems.”
Keynesian Economics
The most prominent new developments in economics in the twentieth century were in the study of the variations in national output from boom times to depressions. The Great Depression of the 1930s and its tragic social consequences around the world had as one of its major and lasting impacts an emphasis on trying to determine how and why such calamities happened and what could be done about them. {xliv} John Maynard Keynes’ 1936 book, The General Theory of Employment, Interest and Money, became the most famous and most influential economics book of the twentieth century. By mid-century, Keynesian economics was the prevailing orthodoxy in the leading economics departments of the world—with the notable exception of the University of Chicago and a few economics departments at other universities largely staffed or dominated by former students of Milton Friedman and others in the “Chicago School” of economists.
To the traditional concern of economics with the allocation of scarce resources which have alternative uses, Keynes added as a major concern those periods in which substantial proportions of a nation’s resources—including both labor and capital—are not being allocated at all. This was certainly true of the time when Keynes’ General Theory was written, the Great Depression of the 1930s, when many businesses produced well below their normal capacity and as many as one-fourth of American workers were unemployed.
While writing his magnum opus, Keynes said in a letter to George Bernard Shaw: “I believe myself to be writing a book on economic theory which will largely revolutionize—not, I suppose, at once but in the course of the next ten years—the way the world thinks about economic problems.”{1012} Both predictions proved to be accurate. However, the contemporary New Deal policies in the United States were based on ad hoc decisions, rather than on anything as systematic as Keynesian economics. But, within the economics profession, Keynes’ theories not only triumphed but became the prevailing orthodoxy.
Keynesian economics offered not only an economic explanation of changes in aggregate output and employment, but also a rationale for government intervention to restore an economy mired in depression. Rather than wait for the market to adjust and restore full employment on its own, Keynesians argued that government spending could produce the same result faster and with fewer painful side-effects. While Keynes and his followers recognized that government spending entailed the risk of inflation, especially when “full employment” became an official policy, it was a risk they found acceptable and manageable, given the alternative of unemployment on the scale seen during the Great Depression.
Later, after Keynes’ death in 1946, empirical research emerged suggesting that policy-makers could in effect choose from a menu of trade-offs between rates of unemployment and rates of inflation, in what was called the “Phillips Curve,” in honor of economist A.W. Phillips of the London School of Economics, who had developed this analysis.
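In its stylized textbook form, the Phillips Curve can be written as a simple inverse relationship; Phillips’ original work related changes in money wages to unemployment, and the price-inflation version shown here is the later, simplified rendering:

\[
\pi \;=\; \alpha \;-\; \beta\,u, \qquad \beta > 0,
\]

where \pi is the rate of inflation, u is the unemployment rate, and \alpha and \beta are constants fitted to the data. Read as a policy menu, lower unemployment could be “bought” with higher inflation by moving along the curve. The simultaneous rise of both inflation and unemployment in the 1970s, discussed below, was exactly the combination that no single stable curve of this form could accommodate.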
Post-Keynesian Economics
The Phillips Curve was perhaps the high-water mark of Keynesian economics. However, the Chicago School began chipping away at the Keynesian theories in general and the Phillips Curve in particular, both analytically and with empirical studies. In general, Chicago School economists found the market more rational and more responsive than the Keynesians had assumed—and the government less so, at least in the sense of promoting the national interest, as distinguished from promoting the careers of politicians. By this time, economics had become so professionalized and so mathematical that the work of its leading scholars was no longer something that most people, or even most scholars outside of economics, could follow. What could be followed, however, was the slow erosion of the Keynesian orthodoxy, especially after the simultaneous rise of inflation and unemployment to high levels during the 1970s undermined the notion of the government making a trade-off between the two, as suggested by the Phillips Curve.
When Professor Milton Friedman of the University of Chicago won a Nobel Prize in economics in 1976, it marked a growing recognition of non-Keynesian and anti-Keynesian economists, such as those of the Chicago School. By the last decade of the twentieth century, a disproportionate share of the Nobel Prizes in economics was going to economists of the Chicago School, whether located on the University of Chicago campus or at other institutions. The Keynesian contribution did not vanish, however, for many of the concepts and insights of John Maynard Keynes had now become part of the stock in trade of economists in all schools of thought. When John Maynard Keynes’ picture appeared on the cover of the December 31, 1965 issue of Time magazine, it was the first time that someone no longer living was honored in this way. There was also an accompanying story inside the magazine:
Time quoted Milton Friedman, our leading non-Keynesian economist, as saying, “We are all Keynesians now.” What Friedman had actually said was: “We are all Keynesians now and nobody is any longer a Keynesian,” meaning that while everyone had absorbed some substantial part of what Keynes taught no one any longer believed it all.{1013}
While it is tempting to think of the history of economics as the history of a succession of great thinkers who advanced the quantity and quality of analysis in this field, seldom did these pioneers create perfected analyses. The gaps, murkiness, errors and shortcomings common to pioneers in many fields were also common in economics. Clarifying, repairing and more rigorously systematizing what the giants of the profession created required the dedicated work of many others, who did not have the genius of the giants, but who saw many individual things more clearly than did the great pioneers.
David Ricardo, for example, was certainly far more of a landmark figure in the history of economics than was his obscure contemporary Samuel Bailey, but there were a number of things that Bailey expressed more clearly in his analysis of Ricardian economics than did Ricardo himself.{1014} Similarly, in the twentieth century, Keynesian economics began to be developed and presented with concepts, definitions, graphs and equations found nowhere in the writings of John Maynard Keynes, as other leading economists extended the analysis of Keynesian economics to the profession in scholarly writings, and its presentation to students in textbooks, using devices that Keynes himself never used or conceived.