Editorial

An Interview with William A. Barnett †

Apostolos Serletis
Department of Economics, University of Calgary, Calgary, AB T2N 1N4, Canada
“Copyright © 2017 by Center for Financial Stability. Reprinted by permission of Center for Financial Stability.”
Econometrics 2017, 5(4), 45; https://doi.org/10.3390/econometrics5040045
Submission received: 30 September 2017 / Revised: 30 September 2017 / Accepted: 30 September 2017 / Published: 17 October 2017
William (Bill) Barnett is an eminent econometrician and macroeconomist. He has made fundamental contributions to the applied neoclassical economic theory of consumer and producer behavior and pioneered a scientific approach to economics, based on state-of-the-art micro- and macro-econometrics.
Bill Barnett has been highly influential in shaping academic research on monetary and financial aggregation, using index number and aggregation theory. He is the inventor of the Divisia monetary aggregates and founder of the modern field of aggregation-theoretic monetary aggregation. Over the years, he has argued that the official simple-sum monetary aggregates, produced by the Federal Reserve and other central banks around the world, are inconsistent with neoclassical microeconomic and aggregation theory. The resulting internal inconsistency of the monetary aggregates with the neoclassical models within which the aggregates are used has become known as the “Barnett critique.”
His work on monetary aggregation is more timely today than ever, in the aftermath of the global financial crisis, with the mainstream (interest-rate-based) approach to monetary policy being ineffective at the zero lower bound. His book, Getting It Wrong: How Faulty Monetary Statistics Undermine the Fed, the Financial System, and the Economy, published by MIT Press, won the American Publishers Award for Professional and Scholarly Excellence for the best book published in the field of economics during 2012.
Bill Barnett has also made fundamental contributions to the associated fields of demand-system and flexible-functional-form modeling. Early in his career, he proved that Theil and Barten’s Rotterdam model could be aggregated over consumers under remarkably weak assumptions, with the addition of a remainder term whose properties he explored. He also derived and applied that model’s test for blockwise weak separability, which is the necessary condition for quantity aggregation. Moreover, he was the first to prove the asymptotic normality and efficiency of the maximum likelihood estimator for the relevant class of models, consisting of closed-form systems of nonlinear equations.
To address issues relating to the economic properties of flexible functional forms derived from second-order Taylor series approximations, Barnett proposed the use of the second-order Laurent series and identified a parsimonious special case, called the minflex Laurent, which retains the flexibility property. He proved that the second-order Laurent series and its minflex parsimonious special case have better economic properties, over a very large region, than the second-order Taylor series flexible functional forms. Also, motivated by Ron Gallant’s insightful analysis of asymptotic global flexibility, using seminonparametric estimators that converge globally to unknown functions, Barnett invented the Asymptotically Ideal Model (AIM), based on the Müntz-Szász series expansion.
Bill Barnett has also shown the way for research beyond the mainstream’s state of the art. His work on numerical solutions for bifurcation boundaries raises questions about robustness of dynamical macroeconometric inferences. In a series of journal articles, he has found Hopf, transcritical, and singularity bifurcation boundaries crossing the parameter estimates’ confidence regions. He has found this phenomenon in all classes of dynamical models in widespread use in macroeconometrics. His conclusion is that dynamical policy inferences should not be based on simulations conducted solely at parameter point estimates, but rather at various points within the confidence regions.
Bill Barnett has published close to 200 articles in professional journals and 32 books as either author or editor. His research has been published in 7 languages. He has received over 43 different awards and honors, including being a Fellow of the American Statistical Association, Fellow of the World Innovation Foundation, Fellow of the IC2 Institute at the University of Texas at Austin, Fellow of the Johns Hopkins Institute for Applied Economics, Honorary Professor at Henan University in China, Charter Fellow of the Society for Economic Measurement, and Charter Fellow of the Journal of Econometrics.
Bill Barnett is Founder and Editor of the Cambridge University Press journal, Macroeconomic Dynamics (see http://econ.tepper.cmu.edu/barnett/MD.html). He is also Founder and President of the rapidly growing Society for Economic Measurement (see http://sem.society.cmu.edu). In 2011, he was appointed Director of the program, Advances in Monetary and Financial Measurement, at the Center for Financial Stability, in New York City. He manages that program building on his research in monetary aggregation. The program he directs can be found at http://www.centerforfinancialstability.org/amfm.php, along with an online library linked to Divisia monetary aggregates data and studies for over 40 countries throughout the world. The Center for Financial Stability provides monthly releases of Divisia monetary aggregates for the United States, and soon will begin doing so for Europe, China, and India. He recently also became Founder and Director of the new Institute for Nonlinear Dynamical Inference in Moscow.
Recently, James J. Heckman and I edited two special issues in Bill Barnett’s honor: the Journal of Econometrics special issue appeared in 2014 and the Econometric Reviews special issue in 2015. Those special issues contain contributions by many of the world’s most eminent economists. A conference in honor of his work in monetary aggregation was held at the Bank of England on 23–24 May 2017.
We agreed to have this interview over dinner at the third annual conference of the Society for Economic Measurement in Thessaloniki, Greece. That dinner took place at Palaios Panteleimonas, a village on Mount Olympus overlooking the Castle of Platamon and the Aegean Sea, about 100 km from Thessaloniki. We were so enthused about the interview proposal that night that we even danced “zeibekiko,” in the spirit of Zorba the Greek.
The interview was conducted by email over several months after we returned to North America. I have edited the script for clarity and continuity and slightly rearranged the questions and answers to fit into the following broad topic areas:
• Work before Economics
• Graduate Study
• Early Research at the Federal Reserve Board
• Monetary and Financial Aggregation
• Demand Systems and Flexible Functional Forms
• Nonlinear and Complex Dynamics
• Founding of Journals, Monograph Series, and Societies
• Reflections
• Advice for Students
• Selected Bibliography
I hope that you get as much out of this interview with Bill as I did. In case you do not know Bill Barnett, I hope that you meet him in this interview.
Keywords: Divisia monetary aggregates; Minflex Laurent model; Generalized Barnett model; Asymptotically Ideal Model (AIM); bifurcation; chaos and nonlinear dynamics
Photo: Apostolos Serletis and William Barnett at dinner on the side of Mt. Olympus, where the plan for this interview began, July 2016.
Photo: At a conference in honor of Roko Aliprantis.

1. Work before Economics

Serletis: I will begin by asking you about your work before you got interested in economics.
Barnett: After I graduated from MIT in engineering, I accepted an R & D position working as a systems development engineer at Rocketdyne Division of North American Aviation in Los Angeles. Rocketdyne produced most of the rocket engines for the American space program. I worked on the development of the F-1 rocket engine, which was the booster engine for the first stage of Apollo. In those days, America thought it was in a race with Russia to send astronauts to the moon. As a result, the opportunities for engineers in America’s heavily funded space program were extraordinary. I am often amused, when I hear some economists called “rocket scientists.” Well, I really was one.
Serletis: How did you get interested in economics?
Barnett: During my senior year at MIT, I was permitted to take Franco Modigliani’s graduate course about the research he was doing with Merton Miller on the cost of capital. He would walk into class, often without notes, and start deriving results on the board with enthusiasm. His results, using economic theory and mathematics, were far beyond the mainstream of corporate finance at the time. It was a large class, including some of the other economics and finance professors and a few ambitious young officers sent by the US military. People in the class would sometimes try to dispute Franco’s results. Franco loved it. With excitement, he would return to the board to defend his results. Although Paul Samuelson was also on the MIT faculty at the time, I did not get to meet and work with him until many years later. There was a required term paper in Franco’s class. He wrote on mine that he wanted to talk with me in his office. When I came to his office, he said he wanted to correspond with me after I graduated from MIT. I was surprised, since he knew I was an engineering student. But we did occasionally correspond after I had become an engineer at Rocketdyne. The experience in Franco’s dynamic class remained in the back of my mind at Rocketdyne. Even the rocket engine tests in the Santa Susana Mountains could not match the excitement of Franco’s class.

2. Graduate Study

Serletis: After working for six years as an engineer at Rocketdyne, you left to study economics and statistics at Carnegie Mellon University, where you earned M.A. and Ph.D. degrees. Why did you choose Carnegie Mellon?
Barnett: Franco Modigliani told me he had done his most important research while he and Merton Miller were on the faculty at Carnegie Mellon University. In addition, Carnegie Mellon had become a “hot spot” in economics and statistics, with Robert Lucas, David Cass, Allan Meltzer, John Ledyard, Herbert Simon, and Richard Cyert in economics and Joseph Kadane, Melvin Hinich, and Morris DeGroot in statistics, along with Ed Prescott and Finn Kydland among the students. Cyert became president of the university.
Rocketdyne paid for the mathematics and engineering courses I took at night at USC and UCLA, while employed full time as an engineer. Rocketdyne also had a policy of permitting one-year educational leaves for every year worked on the space program, which was funded by very generous NASA contracts. I applied for and received two of those leaves prior to entering Carnegie Mellon. One was at the University of California at Berkeley and one was at the University of Chicago, both very turbulent places during the Vietnam War. While at Berkeley and Chicago, I heard more about what was happening at Carnegie Mellon, confirming Franco’s advice.
Although my PhD is from Carnegie Mellon, my ties to the faculties at Berkeley and Chicago, especially David Laidler at Berkeley and Arnold Zellner, Hirofumi Uzawa, and Henri Theil at Chicago, remained strong during and after my PhD studies at Carnegie Mellon. In fact, I dedicated two of my books since then to the memory of Henri Theil and coedited two journal special issues with Arnold Zellner. Although Theil and Zellner were on very bad terms with each other, I considered both to be friends.
Serletis: How did your experiences at Berkeley affect your plans for the future?
Barnett: In profound ways. My year at Berkeley was to acquire an MBA, which I completed in that one year, with emphasis on finance and economics. My objectives when I arrived at Berkeley were to remain in the aerospace industry and advance into engineering management. I had no plans to become a professor. But my year at Berkeley was during the explosive year of the historic “free speech movement,” which began the student protests that swept across the country against the Vietnam War. While I was a graduate student in the Business School at Berkeley, the free speech movement was largely an undergraduate phenomenon. Nevertheless, it was impossible to ignore the demonstrations, the speakers, and the hostility towards them in the media. Anyone who was at Berkeley during the free speech movement could not avoid becoming aware of the tragic mistake that America had made by getting militarily involved in Vietnam, going as far back as America’s misguided implicit support for the return of French colonialism to Vietnam at the end of World War II. Vietnam had fought alongside us as an ally during the Second World War and should not have been recolonized after the end of that war. At the end of that year at Berkeley, upon return to Rocketdyne with my MBA and fast-track status within the corporation, I was a changed person. I was opposed to the Vietnam War, while employed by a corporation that was not only a major player in the civilian space program but also a defense contractor. My employment at Rocketdyne provided me with an occupational deferment from the draft, since North American Aviation was a major defense contractor, as was its Rocketdyne Division. Although I worked only on civilian NASA contracts, the glamour of that sometimes exciting high-tech employment was fading in my mind, as my opposition to the war grew.
Serletis: What about your studies and experiences at the University of Chicago, while on a different leave from Rocketdyne?
Barnett: During my subsequent educational leave from Rocketdyne at Chicago, I was an experienced observer of the antiwar movement, having been located at the center of its formation at Berkeley in 1964–1965. I arrived at the University of Chicago shortly before the notorious 1968 Democratic Party Convention in Chicago. As you may know, there was a large and historic antiwar demonstration in Grant Park across the street from the Hilton Hotel at which the convention was held. I was at that demonstration. Mayor Daley had the demonstrators surrounded by a police line that arrived, marching in military formation, and by a second National Guard line, which arrived in military trucks. The lines, trapping the demonstrators, began and ended at the front of the Hilton Hotel, thereby enclosing not only Grant Park, but also the front of the Hilton Hotel. Having seen many demonstrations and police actions against them at Berkeley, I recognized that what was beginning to happen at Grant Park and at the front of the Hilton was something much more ominous than I had ever seen before. As a result, I left the demonstration in the Park and moved to the entrance of the Hilton Hotel, where some hotel guests were standing on the sidewalk watching. They were well dressed and clearly not demonstrators, but were inside the area surrounded and trapped by the police and National Guard.
One of the demonstrators crossed the street from Grant Park and began walking along the sidewalk in front of the hotel. He was quietly walking alone. A policeman walked over to him and began beating him on the head to the ground with his club—repeatedly. The people at the entrance to the hotel cheered and applauded, encouraging the officer to club the confused and dazed demonstrator. This was about 15 feet from the entrance to the hotel, where I was standing, and was sickening to watch. The policeman was smiling with glee and the people surrounding me were cheering him on and having a wonderful time. I knew what was coming, and I had to get out of there. From my position among the well-dressed cheering section at the hotel entrance, I slowly walked straight towards the police and National Guard lines while looking directly at them. I wanted them to think I was a hotel guest (I was not). They opened the two lines to let me through. Once outside the lines, I ran as fast as I could. When I was a block away, I heard horrifying screams behind me, saw the large cloud of tear gas, and everyone running towards me. That was the infamous Chicago “police riot,” which I fortunately escaped without injury. The hotel guests standing at the entrance to the hotel were not so lucky. I later saw on TV that they were tear gassed, and some of the gas even got into the convention hall.
Back on campus, an eminent senior professor in the Sociology Department, who was against the war, was stabbed in the stomach in his office by someone from off campus. When he recovered, he moved to Canada. There were student demonstrations on campus for various causes. Mayor Daley’s police department included a “red squad,” which sent photographers on campus to photograph those who attended the demonstrations, and would bring the photographs to professors, who were requested to identify the students. Most of the university’s professors refused to cooperate, but some did. One Economics Department PhD student spoke at a demonstration opposed to the university’s investments in apartheid South Africa. I did not know that student and did not attend that demonstration. But a memorandum was distributed to all of the PhD students saying that the faculty of the Economics Department had voted to boycott the student’s proposed dissertation committee, because of what he had said at that demonstration.
I attended Hirofumi Uzawa’s brilliant classes on mathematical economics. He had become militantly opposed to the war. Somehow the classrooms assigned to his classes always seemed to be unavailable, so his students had to meet with him in his office for his classes. He crammed many chairs into his office, so we would be able to sit and listen to his outstanding lectures. He subsequently left for Japan, where his controversial views grew and became legendary.
I thought the professors in the Economics Department at Chicago were extraordinary, and they have influenced my thinking in many permanent ways. But at the end of my leave at Chicago, the intended objective of remaining indefinitely at Rocketdyne no longer had the appeal it once did. I needed a different career direction.
Serletis: Can you tell me about your experiences as a graduate student at Carnegie Mellon?
Barnett: Compared to Berkeley and Chicago, Carnegie Mellon University (CMU) was a peaceful place. Although an ROTC building had previously been burned to the ground, that was long before I arrived. There was no violence at CMU, while I was there. In fact, the degree of tolerance for dissenting views was admirable. For example, Leonard Rapping, who had previously coauthored a famous new-classical paper with Robert Lucas, became an antiwar activist and was permitted to teach a course on “radical political economics.” While I was a student at CMU, Leonard and I joined the Union for Radical Political Economics (URPE) and went to its conferences together. Although I discontinued my membership at URPE when the war ended, Leonard continued as a member for the rest of his career, mostly at the University of Massachusetts, which had become a home for much of the economics profession’s left.
I was greatly impressed by the courses taught by Robert Lucas and John Ledyard, and I recognized the exceptional nature of David Cass, who later became a close friend, and Ed Prescott, who had returned from Penn after receiving his PhD from CMU. However, it was clear that Carnegie Mellon would not be able to retain them. Although I did not take Allan Meltzer’s class, while I was at CMU, I later got to know him and respect him, after I had subsequently moved to the Federal Reserve Board in Washington, DC. Finn Kydland was a classmate, but I did not become aware of his work until many years later. At CMU, my dissertation adviser was the eminent statistician, Paul Shaman, while my ties with Henri Theil at Chicago during that research continued to grow. The peaceful and productive research workshop environment at Carnegie Mellon was exactly what I needed at that time of national turmoil.
Before I had completed my dissertation, my earned leave time from Rocketdyne had expired. I was told that I either had to resign from Rocketdyne or return. The original plan was for me to return to the new research facility being built by Rocketdyne in Orange County, California. I was to work primarily as a statistician on proposed advanced projects for space exploration. But because of the war, the national priorities had changed. Funds that previously were available to NASA for the ambitious civilian space program were being transferred to the Department of Defense. North American Aviation had divisions that produced fighter planes and bombers used in the war. The Rocketdyne advanced research facility was never completed. The handwriting was on the wall. Engineers who had previously been working on the space program were being transferred to military projects funded by the Air Force.
But fortunately, I had a much better option. The Federal Reserve Board had created an elite Special Studies Section focused on research and located in the Watergate Building, about two miles from the Board Building, thereby providing unusual research independence. I was offered a research economist position in that section with full-time salary and permission to spend my first year exclusively completing my PhD research. I jumped at the opportunity and resigned from Rocketdyne. I also was approached by the CIA for a position at its headquarters in Langley, Virginia. I turned it down immediately.
During my first year at the Board, I spent more time at the University of Chicago working with Theil and Zellner than at Carnegie Mellon. In fact, upon completion of my dissertation, I was provided with a Research Associate position at Chicago permitting me to acquire an NSF grant, administered by Chicago, to fund work on a book based on my dissertation. The work on the book could not be done on Board time, so had to be funded by another source. I returned to Carnegie Mellon to defend my dissertation, and then returned to the Board for the next 7 years of my career. Everything had finally converged to a career path that motivated me without reservation.

3. Early Research at the Federal Reserve Board

Serletis: How was it, when you were working at the Federal Reserve Board?
Barnett: I was hired to replace Bill Poole, who had left for the Boston Federal Reserve Bank and then Brown University. He subsequently became President of the St. Louis Federal Reserve Bank. The Special Studies Section was unique in Washington, DC. Having a position in that section was somewhat like having a full time permanent NSF grant. Although the economists in the section sometimes served as in-house consultants to the rest of the Board’s staff, our primary function was to publish research in the profession’s best journals.
To the degree that we had contact with the Federal Reserve’s operations, it was primarily through our contact with a sister section, located next to the Special Studies Section on the same floor of the Watergate Building. That section was called Econometrics and Computer Applications (E & CA). Among its functions was maintenance of the Board’s quarterly econometric model, used to produce policy simulations for the Federal Open Market Committee (FOMC). The model manager in that section was Jerry Enzler, a very fine economist of high integrity and expertise. The policy simulations were collected together to display the policy target paths that would result from various choices of instrument paths. The model was very large, with hundreds of equations. Some economists advocated replacing the “menu” book of simulations with a single recommended policy, produced by applying optimal control theory to the model.
The model was called the FMP model, for Federal Reserve-MIT-Penn, since the origins of the model were with work done by Franco Modigliani at MIT and Albert Ando at the University of Pennsylvania, among others. That model’s simulations subsequently became an object of criticism by advocates of the Lucas Critique. The alternative optimal control approach became an object of criticism by advocates of the Kydland and Prescott (1977) finding of time inconsistency of optimal control policy. Regardless of those controversies, if I wanted information about what was happening in the economy and what to expect in the future, I would ask Jerry Enzler. Working on and struggling with that model’s frequent problems turned Jerry into an exceptionally well informed economist, for whom I had great respect.
While I was on the Board’s staff, Arnold Zellner asked me to edit a two-volume special issue of the Journal of Econometrics on Federal Reserve staff research. In my role as guest special issue editor, my obligations were to Arnold, not to the Board. I sent out a call for submissions to all economists within the Federal Reserve System, including the regional banks. I then was approached by two of the Board’s officers, requesting involvement in the decisions about papers to be included in the volume. Arnold instructed me to refuse any such involvement, and I did so. I also made it clear that all submissions would be refereed to the normal standards of the Journal of Econometrics. As soon as those objectives and procedures were made clear, a large percentage of the submissions were withdrawn, including most of the submissions from economists at the regional Reserve Banks. Following the subsequent refereeing and revisions, I delivered the resulting two volumes of papers surviving review to Arnold. Most of the accepted papers were produced by Special Studies Section economists and some by E & CA economists, not because of any kind of bias, but because economists in those sections submitted the best papers. In those days, many of the economists capable of meeting the standards of the best journals were in those sections of the Board’s staff. I must say that it was not pleasant having to reject papers submitted by some of my own colleagues. One would not talk to me for a year afterwards, but is now a good friend.
We were located on the seventh floor of one of the Watergate Buildings. The sixth floor was vacant. It had previously been the headquarters of the Democratic Party at the time of the notorious Watergate break-in. No one was willing to lease that space, out of fear that there might still have been undetected listening-device “bugs” remaining in the walls. The famous writer, Norman Mailer, had written a conspiracy theory article about the break-in for Playboy Magazine. He theorized that the Watergate burglars were actually bond speculators, who planted bugs into the ceiling of the sixth floor to acquire inside information about interest rate policy from the Federal Reserve staff on the seventh floor. Of course, Mailer was wrong. But if he had been right, the burglars would have been very disappointed and confused by what they would have heard from the sophisticated research staff on the seventh floor.
The Special Studies Section and E & CA were moved to the Martin Building, when it was built next to the Board Building. The Federal Reserve discontinued its lease of Watergate office space. That move was the beginning of the end for the Special Studies Section. Once we were located close to the rest of the Board’s staff, our research independence started to become compromised. I think the Section had become an irritant to some of the other staff members. Even the informal manner in which we dressed seemed to annoy some of the economists in other sections. Economists in the Special Studies Section, who had high visibility in academia, began moving to faculty positions at universities.
This all worked out very well for me. I was offered a position as a full professor at the University of Texas at Austin. The position was in the Economics Department with a courtesy position in the Business School’s Finance Department. I had never been an untenured assistant professor or associate professor at any university. I was hired by the University of Texas into the best position in its Economics Department. Shortly after I arrived, I was awarded an endowed chair in the Economics Department and an endowed research fellowship at the newly created IC2 research institute, founded by the Dean of the Business School, George Kozmetsky. George was an extraordinary person, who had previously been on the faculties at Carnegie Mellon University and Harvard and was listed in Forbes magazine as one of the richest persons in the world, as a founder of Teledyne Corporation. I had four offices: one in the Economics Department, one in the Finance Department, one in the IC2 Institute, and one on a high floor of the Texas Tower. I accepted the Texas offer immediately.
The response by the Board’s staff, or at least by one of its officers, was somewhat odd. A high-ranking officer walked into my office and threatened me. He said that if I ever became known as a critic of the Federal Reserve, the Board’s attorneys would harass me for the rest of my life. I did not work for that officer. Neither the Special Studies Section Chief nor anyone else above him reported to that officer. I had no interest in Federal Reserve “politics” and viewed that unauthorized “exit interview” as little more than a poor reflection on that officer. I assume he would not have been happy about my recently published book, Getting It Wrong: How Faulty Monetary Statistics Undermine the Fed, the Financial System, and the Economy. That book, published by MIT Press, won the American Publishers Award for Professional and Scholarly Excellence (the PROSE Award) for the best book published in the field of economics during 2012.
A few years after I left for Texas, I was informed that the Special Studies Section had been closed down, and the remaining staff members in that section had been transferred to operating sections on the Board’s staff. Nothing comparable to the Special Studies Section now exists in any US governmental agency in Washington, DC. I was very fortunate to have been there, when that unusual section was at its best in its Watergate office facilities. In some ways, the outstanding scientific research commitment of my colleagues in the Special Studies Section, when at its best, was a match for the intellectual excitement at Rocketdyne, when at its best—but without the earth-shaking roar, shock waves, and massive flames of the rocket engine tests. Such opportunities outside of academia tend to be rare and transitory. I was fortunate to have been able to move among them, when the opportunities arose.
Serletis: I am very familiar with your work and know that you have made pioneering contributions to economics and finance. What do you think are your most important contributions during the nine years that you worked at the Federal Reserve Board leading up to your work in monetary economics?
Barnett: During my first year at the Board, I worked exclusively on research relevant to my dissertation, which I completed at the end of that year, as agreed upon in the original offer. The primary focus of that research was to test the hypothesis implicit in the conventional dichotomy between labor economics and consumer demand systems economics. That implicit assumption is blockwise weak separability of goods from leisure in utility functions. This assumption did not seem reasonable to me. For example, it requires that all goods either be substitutes for leisure or complements for leisure. Hence there cannot be both time-saving goods, such as washing machines, and time-using goods, such as recreational goods. But as a committed scientist, I needed to solve other logically prior problems, before I could run that test.
I needed the ability to estimate systems of nonlinear equations. At that time, the asymptotics of joint maximum likelihood estimators had not yet been derived for systems of nonlinear equations. The famous Koopmans, Rubin, and Leipnik (Koopmans et al. 1950) paper applied only to linear FIML; and the classical results in the statistics literature were for sampling from a fixed distribution. I completed the necessary proofs as my first order of research and published the resulting paper, Barnett (1976), in JASA. I got to know Edmond Malinvaud in Paris and Peter Phillips, then in England, by corresponding with them about their important ongoing research, as I was completing mine.
There then was also a deeper unsolved problem about confidence regions. Statisticians had determined that random variables are Borel measurable point-valued mappings relative to a particular sigma field. But oddly, no statisticians or mathematicians had ever identified the class of random sets that produce confidence regions. Clearly, they are measurable set-valued mappings relative to a particular measure space. I proved that the relevant class of mappings is the class of Borel measurable mappings relative to the sigma field generated by the neighborhood system topology in the mapping’s codomain. I published the proof in a mathematics journal. The paper was subsequently reprinted as chapter 21 in Barnett and Binner (2004). I then had all the relevant statistical tools.
Next I needed the specification of a system of consumer demand equations. I had long been fascinated by the work my friend Henri Theil had been doing with the Rotterdam model, which he had originated with Anton Barten. Theil produced a version aggregated over consumers by stochastic convergence aggregation. But in order to produce the simplest possible result, he had made a very strong assumption about statistical independence of income and marginal budget shares. That assumption drew justifiable criticism from Dan McFadden and others, since it implied integrability relative to a very restrictive class of tastes. I removed that independence assumption and derived an extended version of the model based on convergence in probability of a system of stochastic differential equations produced from the Slutsky equations aggregated over consumers. I published the results in the Review of Economic Studies in Barnett (1979).
I then had all the econometric and statistical tools needed to test the conventional dichotomy between labor economics and consumption economics. I ran the test and rejected the weak separability assumption. The resulting paper, Barnett (1979), was published in Econometrica.
During that research, I observed that there was an unsolved problem in another area of labor economics: the literature on Becker’s household production function approach. That literature had conditioned on an unreasonable assumption about lack of joint production in household technology. Because of that assumption, Pollak and Wachter (1975) had published an insightful but pessimistic paper concluding that structural estimation of the household’s tastes and technology was impossible, and only reduced form estimation was possible. Reduced form estimation cannot serve the intended purpose of Becker’s approach to separate tastes from household technological change. I proved that even with joint production, the household’s structural form does exist, is identified, and can be estimated in accordance with the original intent of that approach. But to do so would involve simultaneous estimation of a nonlinear structural model. I published the paper, Barnett (1977), in the JPE. The resulting difficult—but correct—econometric approach has not been empirically implemented by labor economists to the present day.
During my first year at the Board, while completing the research on my dissertation, I was in the E & CA section prior to transferring to the Special Studies Section. The person who had been the Section Chief of E & CA was an exceptionally good administrator with a very amicable personality. As I recall, his name was Tommy Thompson. At the end of the year, he decided to redirect his professional future towards use of those skills in private sector management. He resigned from the Board and accepted a position as a high-level administrator at a large commercial bank. At the end of my first year at the Board, prior to his departure, he met with me to discuss my work. He praised me for having had more success with my research during that year than any of the other economists in E & CA or Special Studies, since I already had acceptances in hand from some of the profession’s best journals. But he also said: “your beard is too long.” I did not know whether he meant that literally or figuratively, but he said it in a friendly manner with a smile on his face. I thought the advice was amusing. Relative to my own sense of values at that time, I took it as a compliment.
After transfer to the Special Studies Section, I continued my research on consumer demand systems modelling. During the 1970s, inflation was accelerating within the US and much of the world. The senior staff at the Federal Reserve Board adopted a strange conspiracy theory blaming mysterious middlemen in the food industry for increasing food prices and thereby creating cost-push inflation. I was asked to produce a system of consumer demand equations for food in 10 categories of agricultural goods, to be adjoined to the FMP quarterly model to assist in blaming food prices for creating the inflation. An agricultural economist on the Board’s staff was to produce the supply-side food market model.
I thought the attempt to scapegoat invisible nameless local food wholesalers was silly. Indeed, my beard probably was “too long,” for me to buy into that theory. But I immediately recognized the opportunity to produce a new model with unique properties. For that reason, I agreed to the project. The Board’s staff wanted the 10 goods clustered into three groups. To me, that translated into blockwise weak separability of utility in three blocks. But no one had ever previously produced a demand system from a blockwise weakly separable utility function. The closest anyone had ever come was the S-Branch model, which was blockwise strongly separable. I knew I would be able to produce an inverse demand function system from a blockwise weakly separable utility function. A model of food demand was the perfect opportunity to create such a model, since agricultural goods supplies, predetermined by previous year farming decisions, tend to be highly inelastic. Hence inverse demand, with quantities exogenous and prices endogenous, was a good choice for that project. I produced the model by generalizing the hypocycloid in mathematics to produce my generalized hypocycloidal model. By the time the research was complete, interest in the conspiracy theory had waned, and the agricultural economist working on the supply side model had left the Board. But I was happy with the outcome: 1977 publication in Econometrica. To my knowledge, the generalized hypocycloidal model is, to the present day, the only available demand system derived from a blockwise weakly separable utility function.
Years later, Arthur Burns, after he had retired from the Federal Reserve Board as Chairman, told me he was himself responsible for the inflation. He said that he had been trained in the economics of the Great Depression with the view that unemployment should be the primary target of policy. He said he had been slower than most economists to recognize that the natural rate of unemployment had increased. As a result, he adopted an excessively expansionary monetary policy intended to decrease unemployment to levels no longer attainable, thereby causing accelerating inflation and “stagflation.”1

4. Monetary and Financial Aggregation

Serletis: How did you become interested in monetary aggregation issues? What influenced you?
Barnett: At this point, I had published extensively in consumer demand modelling, along with the associated areas of aggregation theory, index number theory, functional structure, and systemwide econometrics. Suddenly an extraordinary opportunity came my way: the opportunity to use my expertise in those areas to create a new field of research—the field of monetary aggregation and index number theory. Oddly a large gap existed between the work done by macroeconomists in those areas and the fundamental and much more highly advanced methodologies that had been developed in aggregation theory, index number theory, and micro-founded consumer demand modelling. The work done by monetary economists and macroeconomists in those areas seemed decades behind the state of the art in the related literatures for other goods, services, and assets. In some ways, this gap was analogous to the one I had recognized previously in the field of labor economics, but with much greater policy relevance. I could hardly believe my good fortune, when all of this was dropped into my lap at the Federal Reserve, and indeed I immediately knew how to proceed. But this is another long story, and I am sure you are aware of where this all went and is still continuing to move to this day.
Serletis: How would you describe the origins of the Divisia monetary aggregates?
Barnett: Friedman and Schwartz (1970, pp. 151–52) had written that simple-sum aggregation over monetary assets is a special case, implicitly assuming the monetary assets are perfect substitutes. They concluded that “The more general approach has been suggested frequently but experimented with only occasionally. We conjecture that this approach will get far more attention than it has so far received.” The relevancy of the more general approach increased rapidly during the 1970s. Long ago, when monetary aggregates first began appearing from central banks, those aggregates included only currency and demand deposits, neither of which yielded interest and both of which were legal means of payment. Indeed, that was the special case mentioned by Friedman and Schwartz, and simple sum aggregation was correct for those early aggregates. But as interest-bearing substitutes for money began appearing and evolving, simple sum aggregation was no longer consistent with microeconomic index-number theory and aggregation theory.
As Irving Fisher (Fisher 1922, p. 29) concluded in his famous book, The Making of Index Numbers, “the simple arithmetic average produces one of the very worst of index numbers, and if this book has no other effect than to lead to the total abandonment of the simple arithmetic type of index number, it will have served a useful purpose.” In the early days of monetary aggregation, including only currency and demand deposits, central banks were correct in ignoring Irving Fisher’s conclusion and in remaining the only governmental agencies left in the world still using simple-sum or arithmetic average aggregation. But those days are long gone. Even the narrowest of monetary aggregates today includes assets yielding interest, such as NOW checking accounts.
In index number theory, data on both quantities and prices during two time periods are needed to measure the growth rate of the quantity aggregate. Both quantities and prices are needed, whether to produce price indexes or quantity indexes. When a good is a durable, index number theory requires use of the rental price or user cost price of the good’s services, not the purchase price of the stock. Since money is a durable, its user cost price is needed.
It is interesting to ask why Milton Friedman or one of his students did not succeed in applying the literature on aggregation and index number theory to monetary aggregation, since Friedman and Schwartz had so clearly recognized its relevancy. The primary reason was the lack of availability of a rigorous, formal derivation of the user cost price of monetary assets. Hundreds of papers had appeared speculating about that formula, with no agreement reached among them. The formula was not derived until my papers, Barnett (1978, 1980), appeared in Economics Letters and the Journal of Econometrics, respectively. Without that formula, the literature on index number and aggregation theory could not have been applied to monetary aggregation.
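In the notation that later became standard in this literature (the symbols here are chosen only for exposition), the formula for the real user cost of the services of monetary asset i during period t is
$$
\pi_{it} \;=\; \frac{R_t - r_{it}}{1 + R_t},
$$
where $R_t$ is the rate of return on the benchmark asset and $r_{it}$ is the own rate of return on asset $i$. The numerator is the interest forgone by holding the monetary asset rather than the benchmark, and the denominator discounts that forgone interest, received at the end of the period, back to the beginning of the period.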
François Divisia (Divisia 1925) proved that the continuous time index named after him would exactly track any exact aggregator function without error. That index is derived directly, without approximation, from the first order conditions for constrained utility maximization. Unlike the severely defective simple-sum aggregate, which requires the utility function to be a simple sum, the Divisia index makes no assumptions on substitutability among components or on the form of the aggregator function, other than its existence. While remarkably elegant in theory, the Divisia index assumes continuous time. Since economic data are available only in discrete time, there is a need for a discrete time approximation to the Divisia index. The discrete time approximation accepted in the field of index number theory is the Törnqvist index, which can be viewed as the trapezoidal rule approximation. In his extensive research on consumer demand systems modeling, Theil called that approximation the Divisia index in discrete time, although Diewert calls it the Törnqvist-Theil index.
I follow Theil’s lead in calling it just the Divisia index. Although the Divisia index in continuous time is exact, the discrete time approximation has a remainder term. That remainder term is third order in the changes and is negligible, usually smaller than the roundoff error in the component data. It should be observed that Diewert has defined a class of index numbers, called “superlative” indexes, similarly having third order remainder terms and thereby being equally good approximations to the discrete time Divisia index. The well-known Fisher ideal index is in that class. Much of the work by Diewert and Theil in index number theory had not yet been published at the time the Friedman and Schwartz book appeared.
In practice, the choice among indexes in that class is of little importance, since their growth rates are nearly identical. The discrete time Divisia index has the advantage of derivation from the continuous time Divisia index, providing easier interpretation in theory and relevance to the parallel literature on “statistical index number theory.” In my first Journal of Econometrics paper, Barnett (1980), I defined both the Fisher ideal monetary aggregates and the Divisia monetary aggregates, both using the user cost price of component services in the respective formulas. If one were to compute the aggregate from the Fisher ideal formula and call it the Divisia aggregate, no one would likely ever know, since the differences in the aggregates’ growth rates are within the roundoff error of the component data.
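For concreteness, and in the same expository notation as above, the continuous time Divisia index and its Törnqvist discrete time approximation can be written as
$$
\frac{d \ln M_t}{dt} \;=\; \sum_i s_{it}\,\frac{d \ln m_{it}}{dt},
\qquad
s_{it} \;=\; \frac{\pi_{it}\, m_{it}}{\sum_j \pi_{jt}\, m_{jt}},
$$
$$
\ln M_t - \ln M_{t-1} \;=\; \sum_i \bar{s}_{it}\,\bigl(\ln m_{it} - \ln m_{i,t-1}\bigr),
\qquad
\bar{s}_{it} \;=\; \tfrac{1}{2}\bigl(s_{it} + s_{i,t-1}\bigr),
$$
where $m_{it}$ is the quantity of monetary asset $i$ held during period $t$ and $\pi_{it}$ is its user cost. The averaging of adjacent-period shares in $\bar{s}_{it}$ is the trapezoidal rule approximation mentioned above.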
Serletis: Can you explain the basic concept of Divisia monetary aggregation?
Barnett: There are two flawless ways to understand that aggregation and one other way that requires careful reasoning to avoid misunderstanding.
(1) The first flawless interpretation is that the Divisia monetary aggregates remove the interest rate investment motive for holding money and aggregate over all other services of the monetary components. To remove from the aggregate any services other than the explicit interest rate, those removed services must be measured at the margin and added into the interest rate to produce an implicit interest rate. The investment motive captured by the explicit interest rate itself must be removed. Otherwise, monetary aggregates would have to include all land and capital yielding a financial return.
(2) The other flawless interpretation is in terms of the microeconomic derivation of the Divisia index. Investing the time to understand that derivation avoids the possible misunderstandings arising from the third method, which requires careful interpretation to avoid misunderstanding.
(3) That third method just looks at the formula and tries to interpret it from its appearance. Unfortunately, that approach, overlooking the microeconomic foundations, easily produces misunderstandings, as I now will attempt to explain.
The Divisia index measures the growth rate of the aggregate as the weighted average of the growth rates of the component assets. The weights are the expenditure shares computed using user cost prices. Since the user cost price plays an important role in the index, it is tempting to think that the “weight” on a component quantity is its user cost. The user cost is proportional to the forgone interest from holding the component asset, where the forgone interest is the difference between the rate of return on pure capital (the “benchmark asset”) and the own rate of return on holding the asset. The highest user cost is on currency, having a zero own rate of return, and hence the highest forgone interest from holding that asset. But the user cost price is not the weight on an asset. The user cost price is the marginal utility from holding the asset, not the average or total utility, and not its weight. To get this wrong is to make the famous “diamonds versus water paradox” error. Water has a low price and hence low marginal utility, but high average and total utility.
Since the user cost price of an asset is in the numerator of its share weight, while all other user cost prices are in the denominator of its share weight, it is tempting to think that the share weights are proportional to the user cost prices of the assets. But that also is wrong. Increasing the price of a good does not necessarily increase its share weight. In fact, the direction in which the share will move, if a price is changed, depends upon whether the good’s own price elasticity is greater or less than 1.0. Consider, for example, Cobb-Douglas utility. In that case, the shares are independent of own prices.
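A one-line derivation makes the Cobb-Douglas case explicit. Maximizing
$$
u(m) \;=\; \prod_i m_i^{\alpha_i}, \qquad \sum_i \alpha_i = 1,
$$
subject to the budget constraint $\sum_i \pi_i m_i = y$ yields the demands $m_i = \alpha_i y / \pi_i$, so that the expenditure shares are $s_i = \pi_i m_i / y = \alpha_i$, constants that do not respond to any price change. Cobb-Douglas is exactly the knife-edge case of own price elasticity equal to 1.0, at which a price increase leaves the share unchanged.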
Another source of misunderstanding is misinterpretation of the share weights as level weights. The shares are growth rate weights, not level weights. The aggregate’s level is not a weighted average of its component levels. In fact, the level of the Divisia index is a line integral, having no interpretation as a weighted average. It is the growth rates that have a weighted average interpretation.
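This distinction can be illustrated numerically. The following minimal sketch uses made-up data and variable names chosen purely for illustration (it is not code from the Center for Financial Stability or any central bank): it builds user costs and shares, computes Törnqvist growth rates, and recovers the level of the aggregate by chaining those growth rates, rather than by averaging component levels.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 8, 3  # T periods, N monetary assets (hypothetical)

# Hypothetical real balances for N monetary assets over T periods.
m = 100 * (1 + 0.01 * rng.standard_normal((T, N))).cumprod(axis=0)

# Hypothetical own rates of return and a benchmark rate R > r.
r = np.array([0.00, 0.02, 0.04]) + 0.001 * rng.standard_normal((T, N))
R = 0.06 + 0.001 * rng.standard_normal(T)

# User cost prices: forgone interest, discounted to the start of the period.
pi = (R[:, None] - r) / (1 + R[:, None])

# Expenditure shares evaluated at user cost prices (each row sums to one).
s = pi * m / (pi * m).sum(axis=1, keepdims=True)

# Tornqvist (discrete Divisia) growth rates: averaged-share-weighted
# averages of the component growth rates.
sbar = 0.5 * (s[1:] + s[:-1])
dlogM = (sbar * np.diff(np.log(m), axis=0)).sum(axis=1)

# The LEVEL is recovered by chaining the growth rates (a discretized line
# integral), not by taking a weighted average of the component levels.
M = np.concatenate(([1.0], np.exp(np.cumsum(dlogM))))
print(np.round(M, 4))
```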
Also, sometimes people look at the formula and conclude that changes in interest rates, by changing user cost prices, can cause changes in the Divisia monetary aggregate. This conclusion also is wrong. Recall that the Divisia index in continuous time exactly tracks the quantity aggregator function, which contains only quantities. There are no user cost prices or interest rates in the quantity aggregator function. An analogy would be revealed preference theory, which uses both quantities and prices to reveal the utility function, which depends only upon quantities. Similarly, there is a dual user-cost aggregator function depending only on component user costs, although the Divisia user cost index that tracks it contains both quantities and user cost prices.
In short, the first two interpretations are the most straightforward ones. The third cannot be used correctly without careful reference to the underlying microeconomic theory. Fortunately, people looking at the quantity and price aggregates in the national accounts have become accustomed to interpreting the index numbers relative to their intent and their underlying research, without looking at the Commerce Department’s Fisher ideal quantity and price index number formulas. To look at those formulas without reference to the underlying theory could produce the same kinds of possible misinterpretations described above. In fact, the Commerce Department’s Fisher ideal indexes, which are the square roots of the Laspeyres and Paasche indexes, are even more difficult to interpret than the Divisia index, without reference to the underlying microeconomic theory.
Serletis: You continue to study monetary aggregation issues. You have extended the theory to the case of risk, to the case of multilateral aggregation over multicountry economic unions, and currently to incorporation of credit card transactions services. Can you discuss the importance of the latter extension that you are working on?
Barnett: I am sure you know my initial motivation to work on that subject came from you. You suggested it, while serving as discussant of my Presidential Address at the 2014 conference of the Society for Economic Measurement at the University of Chicago. When I began looking into that subject after the conference, I found published empirical research showing that when credit card use increases, the demand for money goes down, and vice versa. As a result, monetary aggregates that omit credit card services are failing to aggregate over a very important substitute for monetary services. Critics of the use of monetary aggregates have correctly been complaining about this omission for years. In fact, credit cards likely provide more transactions services to the economy than some of the components of existing central bank monetary aggregates, such as nonnegotiable certificates of deposit, which are highly illiquid.
However, advocates of simple-sum monetary aggregates also correctly insist that credit card transactions volumes cannot be added to monetary assets, since credit card balances are liabilities. In accounting conventions, liabilities cannot be added to assets. This paradox produces a “Catch 22” dilemma. Monetary aggregates need to include credit card transactions services, but credit cards cannot be added to monetary assets.
As I have shown, this paradox disappears as soon as it is recognized that economic aggregation theory, unlike accounting, aggregates over service flows, regardless of whether they are produced by assets or liabilities. Whether or not the components are assets or liabilities becomes irrelevant. I have derived the formula for aggregating jointly over monetary asset services and credit card transactions-volume services. The nature of the transactions services provided by credit cards is deferred payment. You cannot go into a store with cash and say you have the money to pay for the goods you want to buy, but refuse to pay until the end of this or next month. To an accountant, the fact that credit card bills are ultimately paid off with money might make credit card balances seem redundant with money, but not to an aggregation theorist measuring service flows. If deferred payment of goods purchased were not a distinct transactions service to the economy, there would be no value added by credit card companies, and hence in equilibrium credit cards would not exist.
Serletis: Over the years it has been shown, by you and also by a lot of other people, that your Divisia monetary aggregates are superior to the simple-sum aggregates. Yet central banks have been conducting monetary policy based on a short-term nominal interest rate. Can you explain their reluctance to switch to monetary policy strategies based on money measures?
Barnett: They are being very good to me by helping with sales of my book, Getting It Wrong. If I had to write a book with the title, Getting It Right, it would sell very few copies and would not have won for me the American Publisher’s Award.
Other than that generous assistance to me, I can only point to the literature on mechanism design, which is outside my area of expertise. Clearly there are substantial differences in the design of central banks throughout the world. For example, at the Bank of England, some academic economists have a vote on central bank policy. At the Federal Reserve, the panel of academic advisors has no vote. I have no inside information about why the Bank of England adopted an official Divisia monetary aggregate years ago and admirably continues to make it available to the public to the present day. We will likely learn more about this at the Bank of England conference being held in my honor on 23–24 May 2017. There also are differences in the mechanism design of the European Central Bank, which provides Divisia monetary aggregates to its Governing Council for its meetings and uses monetary aggregates as long-run anchors of policy.
Central to the literature on mechanism design is the concept of “incentive compatibility.” I have my views about that difficult problem at central banks, based on my years on the Federal Reserve Staff, but I am not an expert on mechanism design, which is a deep area of microeconomic theory. Someone like Leonid Hurwicz would have had more relevant expertise to answer your question. However, the long run is a very, very long time, and I do not put a lot of effort into worrying about the motives for central bank policy in recent years. That is the central banks’ problem, not mine.
Even John Taylor, the originator of the Taylor rule, has written on his personal blog on 8 June 2014 that "Michael Belongia and Peter Ireland report new empirical results with relevance to monetary policy. They show that the Divisia index of money supply…has effects on the economy over and above the effects of the short-term interest rate…I agree with this view, and have for a long time pushed back against the trend of central banks—including the Fed—to ignore money growth." In that blog post, Taylor also wrote: "In situations where the interest rate hits the lower bound or more generally in situations of deflation or hyperinflation, I have argued that central banks need to focus on a policy rule which keeps the growth rate of the money supply steady."
Taylor continued: “In one of his last research papers, Milton Friedman argued that the Taylor rule for the interest rate worked well, because it was a way to keep the growth rate of the money supply constant, another way to make the connection between money growth rates and interest rate rules.” It is interesting that Friedman’s statement was in the past tense. It also is interesting to observe the following statement in Robert Lucas’s well known Econometrica paper on inflation and welfare: “I share the widely-held opinion that M1 is too narrow an aggregate for this period, and I think that the Divisia approach offers much the best prospects for resolving this difficulty.” (Lucas 2000, p. 270)
I am sure the time will eventually come when those advocating central bank policy focused solely on short-term interest rates will be asking why central banks are no longer doing what they advocate. You can bank on that!
In the meantime, the Center for Financial Stability in New York City has continued to show convincingly that the Divisia monetary aggregates are valuable indicators of the state of the economy, as I also have shown in my recent work with Chauvet and Leiva-Leon on monthly nowcasting of nominal GDP.

5. Demand Systems and Flexible Functional Forms

Serletis: During your time at the Federal Reserve Board you also started working on flexible functional forms. How did you get interested in that area?
Barnett: When I was at the University of Chicago, both Henri Theil and Erwin Diewert were on the faculty. I attended their classes and continued as a Research Associate at Chicago after being employed at the Board. As I've explained above, I felt that one of Theil's assumptions regarding his Rotterdam model was too strong. I resolved that problem by removing that assumption in my extension to the model published in the Review of Economic Studies in 1979. I similarly felt there were problems with the flexible functional forms literature being advocated by Diewert. I very much welcomed the idea that flexible functional forms, consisting of quadratic local approximations, had resolved the Uzawa impossibility theorem problem. That problem had undermined attempts to extend the CES model. But the early flexible functional forms, produced by second order Taylor series, brought back worrisome memories of challenging difficulties I had encountered with similar models in an analogous context at Rocketdyne.
One of my projects at Rocketdyne was to estimate an equation that would predict the start time of the F-1 booster rocket engine. That start time was measured in milliseconds. The four explanatory variables were the inlet pressures and the inlet temperatures to the fuel pump and to the oxidizer pump. If you were to plot that start time against any one of the four explanatory variables with the other three variables held constant, the resulting curve looked like an indifference curve or isoquant in economics. In fact, if you were to replace the left side of the equation with “utility” and the right side by four goods quantities, the function would look just like a monotonically increasing, quasiconcave utility function.
But unlike economists, who usually do not have experimental data, I had a vast amount of experimental data from rocket engine tests previously conducted at Edwards Air Force Base. In addition, I could acquire more data from controlled experiments, run at my request, with settings of the variables determined by a Latin square statistical design. My work was assisted by a staff of professional statisticians. The cost of the rocket engine tests was staggering, but the importance of the estimated equation cannot be overemphasized. It was a matter of life or death to the future astronauts.
The experiments were run on a test stand in California. But the launch of the vehicle was to be in Florida with five such engines, each producing 1.5 million pounds of thrust, clustered together in the first stage of the Saturn V vehicle. The environmental conditions inside the vehicle in Florida were different from those on the test stand at Edwards Air Force Base in California. Under NASA contract, we had to be able to predict the start times of each engine in the vehicle, based upon its prior tests at Edwards. The contractually required accuracy of the prediction was demanding. It took me a year to complete the project. The reason for the needed accuracy was the risk of a catastrophic failure called the "pogo effect." If all five engines were to start at the same instant of time, the impulse to the tall thin vehicle could cause structural oscillations (like a pogo stick) and failure of the vehicle structure. The fuel and oxidizer tanks would burst. The fuel and oxidizer would mix and explode. The astronauts could not survive that failure. The engines had to start in a safe sequence to avoid activating resonant frequency oscillations of the vehicle's structure.
Using Rocketdyne's mainframe computer, I ran large numbers of regressions with every kind of specification I could think up, including such exotic models as high order approximations in hyperbolic functions. After a year of attempts, it became clear to me that any unconstrained model capable of getting close to the correct equation would not attain the relevant first and second derivatives everywhere within the needed range of the variables. The biggest problems were the signs of the curvatures, which could oscillate between the correct sign and the wrong sign along the function, even while the fitted function remained close to the correct one. But I could not permit the model to violate the signs of the first and second derivatives I knew to be correct. Even if a high order polynomial might have been able to predict well, NASA would not have accepted an equation that sometimes locally violated the laws of physics. A simple quadratic model was too primitive to get adequately close to the correct equation within the relevant range of the variables. But even with such an elementary model, the first derivatives and curvatures could not both retain the correct signs throughout the relevant region. I ended up estimating a very complicated high-order model with the correct signs of first derivatives and curvatures imposed throughout the relevant region—not an easy task.
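In modern terms, that estimation strategy amounts to least squares with shape restrictions imposed throughout the relevant region. Below is a minimal illustrative sketch in Python, with a deliberately simple quadratic model and synthetic data; every name and number is hypothetical and bears no relation to the actual Rocketdyne specification:

```python
# Hypothetical sketch: least-squares estimation with sign constraints on the
# first and second derivatives over the relevant region. Data and model are
# synthetic stand-ins, not the Rocketdyne start-time equation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.1, 2.0, 60)
y = np.log(1.0 + 2.0 * x) + rng.normal(0.0, 0.02, x.size)  # concave "truth" plus noise

def f(theta, x):
    a, b, c = theta
    return a + b * x + c * x ** 2          # deliberately simple quadratic model

def sse(theta):
    return np.sum((y - f(theta, x)) ** 2)  # least-squares objective

grid = np.linspace(x.min(), x.max(), 25)   # points where shape is enforced
constraints = [
    # monotonicity: f'(x) = b + 2cx >= 0 at every grid point
    {"type": "ineq", "fun": lambda th: th[1] + 2.0 * th[2] * grid},
    # concavity: f''(x) = 2c <= 0 everywhere
    {"type": "ineq", "fun": lambda th: -th[2]},
]
result = minimize(sse, x0=np.array([0.0, 1.0, -0.1]),
                  method="SLSQP", constraints=constraints)
print(result.x)  # estimates respecting the imposed monotonicity and curvature
```

The same device, enforcing the known signs of derivatives and curvatures over a grid covering the relevant region, is the spirit of the regularity-constrained estimation discussed in what follows.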
That experience had disturbing implications for the theoretically unconstrained second order Taylor series models being used as "flexible functional forms" in econometrics. I knew that such models could reject economic theory even when true—and probably would. In addition, it was known that imposing monotonicity and concavity on those functions globally would severely damage their flexibility. For example, the translog would collapse to Cobb-Douglas. It was clear to me that more research was needed on this subject. In fact, the Federal Reserve itself was not willing to use such "flexible form" specifications for analogous reasons, but without the experimental confirmation that had been seared into my mind by my one year struggle to estimate a single hauntingly similar equation.
Serletis: How did you decide to deal with those concerns in economics?
Barnett: The class of flexible functional forms was becoming very popular in modeling tastes and technology, especially the translog and the generalized Leontief. Both of those early models were produced from second order Taylor series expansions. But Taylor series expansions are inherently local, with poor properties away from the center of the approximation. Caves and Christensen (1980) had shown that those models often violated the regularity conditions of microeconomic theory within the likely region of the data. As a result, those models often reject theory, even in Monte Carlo studies in which theory holds globally. In addition, violations of the theoretical regularity conditions negate the assumptions of the duality theorems upon which the models are based. This internal contradiction in the foundations of those models was damaging to that literature. In regions where indifference curves violated the theoretical curvature condition, the resulting tangencies of the budget constraint with those indifference curves implied locally constrained minimization, rather than maximization, of utility.
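To fix ideas, the translog is the second-order Taylor expansion in logarithms, written here in simplified notation for a utility or production function $f$:

$$\ln f(\mathbf{x}) \;=\; \alpha_0 + \sum_i \alpha_i \ln x_i + \frac{1}{2} \sum_i \sum_j \beta_{ij} \ln x_i \ln x_j.$$

Imposing concavity globally forces the quadratic terms to vanish, and with all $\beta_{ij} = 0$ the model collapses to Cobb-Douglas, losing the flexibility that motivated it.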
But I was aware that the Laurent series expansion is not inherently local and has better regional properties than Taylor series. I proposed a flexible functional form based on a second order Laurent series. I also produced a special case that was not only flexible but also parsimonious, having no more parametric freedom than the second order Taylor series models, but with better regularity properties relative to microeconomic theory. I published three papers on that approach, including one in Econometrica (Barnett and Lee 1985) and two in the Journal of Econometrics.
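The contrast is easiest to see in one dimension. As a minimal sketch (the minflex Laurent model itself is a restricted multivariate analogue, whose exact specification is in the cited papers), a Laurent expansion about $z_0$ includes negative as well as positive powers:

$$f(z) \;=\; \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n \;\approx\; \frac{a_{-2}}{(z - z_0)^2} + \frac{a_{-1}}{z - z_0} + a_0 + a_1 (z - z_0) + a_2 (z - z_0)^2.$$

The negative-power terms allow a truncated expansion to track the function over a region away from the expansion point, rather than only in a small neighborhood of it.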
Meanwhile Ron Gallant had published a brilliant paper advocating a series expansion that could approximate tastes and technology globally, not just regionally. His approach, using seminonparametric methodology, was based on the Fourier series. Choosing the Fourier series for initial research on that approach to global approximation made good sense, since many relevant lemmas were known about that series expansion. Gallant used those lemmas in his proofs of global convergence in Sobolev norm. But unfortunately, the basis-functions of that expansion are periodic: sine and cosine functions. With a finite sample size, his approach produced a finite order Fourier series, likely to have periodic properties violating the microeconomic regularity conditions for tastes or technology. In effect, the basis-functions spanned the space of neoclassical functions from outside the set of those functions, thereby treating the theoretically admissible functions in microeconomics as lying within a measure-zero set reached only upon convergence with an infinite number of terms.
I was aware that another series expansion had similar global convergence capabilities, but with the basis-functions spanning from within the theoretically admissible set. That series expansion is the Müntz-Szatz series expansion. This expansion, when estimated seminonparametrically, not only can span the entire space of increasing concave functions, but can do so with its partial sums remaining within that space and thereby not violating the theory. The model can span the neoclassical function space from within and can reach any function globally as sample size increases. As a result, I named that model the Asymptotically Ideal Model (AIM). Of the papers I have published on that ambitious model, perhaps the most interesting was coauthored with John Geweke and Michael Wolfe (1991) in the Journal of Econometrics.
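In sketch form, with the exponent set $\lambda(k) = 2^{-k}$, the AIM partial sums have the structure

$$f(\mathbf{x}) \;\approx\; a_0 + \sum_{k=1}^{K} \sum_i a_{ik}\, x_i^{\lambda(k)} + \sum_{k=1}^{K} \sum_{m=1}^{K} \sum_i \sum_j a_{ijkm}\, x_i^{\lambda(k)} x_j^{\lambda(m)},$$

where nonnegativity of the coefficients is sufficient for every partial sum to be increasing and concave, so the approximation remains inside the neoclassical function space at every truncation order $K$. The exact specification and estimation details are in the cited papers.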
Serletis: There is a large number of flexible functional forms in the literature, including the Almost Ideal Demand System (AIDS) of Deaton and Muellbauer (1980) and the Normalized Quadratic models proposed by Diewert and Wales (1987). Do you have any advice for empirical researchers as to which flexible functional form they should be using?
Barnett: I can only speak for myself on that question, since my preferences are somewhat different from those of many mainstream economists working in that area. Because of the nature of my intellectual origins, I tend to think like a physical scientist. I choose to work at the state of the art of the profession, as do such econometricians as Peter Phillips and Ron Gallant, although to do so can seem unnecessarily difficult to many applied economists. In economics, we usually do not have available data from controlled experiments that can be used to confirm or contradict our results in a definitive manner, and even published replications are rare. In addition, simplifications are unavoidably necessary, since the economy is a far more complex system than any rocket engine or other system commonly analyzed by engineers. Although simplifications are indeed important and necessary in economics, I am less comfortable with some such simplifications than many other economists. The Asymptotically Ideal Model (AIM) mentioned above is far better than the models you have mentioned—but much more difficult to use.
The origins of the Normalized Quadratic are in a paper published by Diewert and Wales (1987), in which they proposed the Generalized McFadden model and the Generalized Barnett model. A year later, they based the Normalized Quadratic on the Generalized McFadden. The Generalized Barnett is based on my Laurent Series model. I don’t know why they changed the name of the Generalized McFadden. Because of its negative exponents, the Generalized Barnett is more difficult to estimate than the Generalized McFadden. I have used both the Generalized Barnett and the Generalized McFadden in my published research and have found both to be useful under different circumstances. Between the two, I prefer the Generalized McFadden only when other aspects of the research are so challenging as to make use of the Generalized Barnett model prohibitively difficult. The reason is provided in Barnett (2002) in the Journal of Econometrics.
Deaton and Muellbauer's AIDS model, in its original form, is nonlinear, derived from the PIGLOG specification of tastes. That nonlinear version is a flexible functional form and is very interesting, because of its connection with Muellbauer's important approach to aggregation over consumers. But few economists use the original nonlinear form. They use a linearized version. The linearization compromises the model's integrability and hence damages the model's claim to be a "flexible functional form" in the usual sense. In fact, the linearized version has more in common with the absolute price version of the Rotterdam model than with the nonlinear AIDS model. In published Monte Carlo studies, I have repeatedly found that the absolute price version of the Rotterdam model should be preferred to the linearized AIDS model.
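Concretely, the nonlinear AIDS budget-share equations are

$$w_i \;=\; \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\!\left(\frac{m}{P}\right), \qquad \ln P \;=\; \alpha_0 + \sum_k \alpha_k \ln p_k + \frac{1}{2} \sum_k \sum_j \gamma_{kj} \ln p_k \ln p_j,$$

where $m$ is total expenditure. The linearized version replaces the translog price index $\ln P$ with Stone's index, $\ln P^{*} = \sum_k w_k \ln p_k$, and it is that replacement which compromises the model's integrability.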
The nonlinear relative price version of the Rotterdam model is better than either the linearized AIDS or the absolute price version of the Rotterdam model, because of the deep insights that the relative price version can produce about tastes and its ability to test for and impose blockwise separability. However, the relative price version of the Rotterdam model is not only more difficult to estimate, but also requires careful separation between its ordinal implications, which are important, and its cardinal implications, which are inadmissible. Unfortunately, Theil’s unwillingness to make that distinction, and his willingness to impute meaning to the cardinal implications, resulted in misunderstandings about that model’s limitations and its unique capabilities.
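For comparison, the absolute price version of the Rotterdam model is

$$w_i \, d\ln x_i \;=\; \theta_i \, d\ln Q + \sum_j \pi_{ij} \, d\ln p_j, \qquad d\ln Q \;=\; \sum_k w_k \, d\ln x_k,$$

where the $\theta_i$ are marginal budget shares and the $\pi_{ij}$ are Slutsky coefficients, on which symmetry and negative semidefiniteness can be imposed. The relative price version instead uses Frisch-deflated prices, and it is that reparameterization which permits the blockwise separability tests mentioned above.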
The two models you have mentioned are reputable, easy to use, and do not interfere with publishability. Having no need to worry about contradiction from controlled experiments, many applied economists view the characteristics of "reputable" and "easy to use" as dominant. I do not think that way. For analogous reasons, real business cycle theorists often calibrate, simulate, and publish models using Cobb-Douglas tastes and technology, having no estimable elasticities at all. I appreciate and respect that literature. But that is not me.
Serletis: I have always wondered why most of the literature on dynamic stochastic general equilibrium (DSGE) modelling uses simple functional forms for the aggregator functions, such as logarithmic utility functions and Cobb-Douglas or CES production functions, as you just mentioned. Is it because of difficulties in handling dynamic stochastic general equilibrium models with flexible functional forms, or is it because flexible functional forms are not relevant in that literature?
Barnett: The ability to model real world economies accurately is far beyond the state of the art. As a result, all areas of applied economics condition upon simplifying assumptions. Those assumptions are usually based on conventions that become accepted within that area of applied research, reflecting its primary focus. No area of applied research is immune to dependence upon such customary assumptions. As recently emphasized by Paul Romer, many of the customary assumptions in DSGE have become targets of criticism, sometimes misdirected. But the use of Cobb-Douglas utility is currently the target of less controversy than the existence of that utility function and thereby the existence of a "representative consumer."
The assumptions necessary for existence of a representative consumer rule out distribution effects of policy. The existence of distribution effects is a primary source of policy differences among political parties. Heterogeneous-agents models seek to address that particular criticism. But the customs of the profession do not rule out DSGE models having a single representative consumer. The existence of a representative firm is not a problem under perfect competition, since there is no budget constraint producing distribution effects. Debreu’s Theory of Value (Debreu 1959) contains a proof that a representative firm, aggregated over all firms, exists under perfect competition with no additional assumptions at all. But New Keynesians do not assume perfect competition. In that literature, aggregation over firms presents serious problems in theory.
The logic behind DSGE’s customary assumptions is the following. If a policy problem exists and can be solved in an idealized, simplified model of the economy, then that problem most likely is relevant to the real world’s much more complicated economies. I agree with that and recognize the contributions of such models. But of course, the converse is not true. The existence of a policy problem in a real-world economy is not necessarily reflected in an idealized, simplified economy. For example, much of the reason for the existence of central banks is largely assumed away in many real business cycle models. Those models often omit much of the monetary transmission mechanism by not including bank technology and thereby value added in the production of financial intermediary deposit services. Entering that value added into a model presents a major and very important measurement problem, assumed away in most, but admirably not all, DSGE models.
Serletis: I have used many flexible functional forms over the years, including your Asymptotically Ideal Model. In terms of the locally flexible functional forms, I have found that your Minflex Laurent model and Diewert and Wales' Normalized Quadratic model are the best models to use. However, when I use both models for comparison purposes, I might get results that are not quantitatively consistent. How do you suggest we deal with this problem?
Barnett: I would suggest a Monte Carlo study, to see which is best at approximating known utility functions used to generate the data, as I have done in my comparisons between the AIDS model and the Rotterdam model. But I would not limit the comparison to only Minflex Laurent and Normalized Quadratic. The Asymptotically Ideal Model would blow the others away. But it is more than just a “locally” flexible functional form, so perhaps not relevant to your question.
Serletis: Your work on flexible functional forms indirectly relates to your work on monetary aggregation. The flexible functional forms that you have proposed, the Minflex Laurent, the Generalized Barnett, and the Asymptotically Ideal Model, are better than the translog model, yet the Divisia index is exact to the linearly homogeneous translog, as shown by Diewert. Have you ever attempted to come up with a statistical index that is exact to one of the flexible functional forms that you invented?
Barnett: No, I see no reason to do so, since all index numbers within the flexible functional forms class track each other to within a tiny third order remainder term. For measurement purposes, the distinction among them is of little importance. But more to the point, Ki-Hong Choi and I have provided a more general approach to locating the class of "flexible functional forms" consistent with third order remainder terms. Our approach, using Galois theory, contains Diewert's operational class as a strict subset. For example, our methodology includes the Sato-Vartia index as a flexible functional form, but Diewert's approach does not. In 2008, Ki-Hong Choi and I published our approach in the Journal of Mathematical Economics.
Serletis: I think that the future of flexible functional forms looks brighter than ever. Do you agree with my assessment? Do you have any suggestions for future research in this area?
Barnett: I am often asked to predict the future, but I rarely do so. In a recent exception to that rule, I did predict that Hillary Clinton would defeat Donald Trump. I should have stuck to my policy of never predicting the future.

6. Nonlinear and Complex Dynamics

Serletis: Let's move now to another area of your research, your work on numerical solutions for bifurcation boundaries in dynamical macroeconometric models. This work is very different from your work on monetary aggregation and flexible functional forms. How did you get interested in this area?
Barnett: When I was on the faculty of the University of Texas at Austin, I got to know Ilya Prigogine in the Physics Department. I was fascinated by his famous book with Stengers, Order Out of Chaos (Prigogine and Stengers 1984), and the emerging research on chaos in the social sciences. Ilya had won the Nobel Prize for his theoretical research on chaos in physics and chemistry. Meanwhile, Harry Swinney was also doing research on chaos in the Physics Department, but his work was experimental in a lab that he directed. I did some empirical research on chaos with Ping Chen, who was associated with Ilya’s center at the University of Texas and at the Free University of Brussels. In Brussels, Ilya directed the Solvay Institute, famous for its historic conferences, including landmark papers by the Curies and Einstein. I presented my results with Ping Chen at a conference Ilya ran in Brussels. The conference included a session at the Palace with the King and Queen attending.
Ping Chen and I were the first to detect chaos in economic data, using tests developed by experimental physicists. But those time series tests provided no way to isolate the source of the chaos to the economy. The source could have been chaos in the weather, already well established by climatologists. The next step would have been to condition on a macroeconomic model and test the hypothesis that the parameters are within the subset of the economy’s parameter space supporting chaos. Jean-Michel Grandmont, who had done important research on chaos, informed me that analytically locating the chaotic subset in a model with more than three parameters was beyond the state of the art of the mathematics profession. That left the possibility of numerical search for that region. But even if the chaos-supporting subset were found numerically, testing the hypothesis posed troubling problems for statisticians, since the likelihood function is neither differentiable nor continuous as it crosses that region. The likelihood function contains singularities within that region. The existence of those singularities violates the regularity conditions for most statistical tests.
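For reference, the property those time series tests were designed to detect is sensitive dependence on initial conditions, conventionally summarized by a positive largest Lyapunov exponent,

$$\lambda \;=\; \lim_{t \to \infty} \frac{1}{t} \ln \frac{\lVert \delta x(t) \rVert}{\lVert \delta x(0) \rVert} \;>\; 0,$$

where $\delta x(t)$ is the separation between two trajectories started from nearby initial conditions.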
I explained to Prigogine that testing for chaos subject to reasonable economic models, while potentially permitting isolation of the chaos to within the economy, would pose enormous mathematical, numerical, and statistical problems. He replied that the parameter space of dynamical models contains a large number of bifurcation subsets, with monotonic stability and chaos being only two of them. He suggested that I investigate bifurcation in general. This suggestion was far more tractable than looking solely for the chaotic subset. In theory, there are an infinite number of possible forms of instability, as well as an infinite number of forms of stability, such as monotonic stability, damped periodic stability, and damped multiperiodic stability. Numerically locating bifurcation subsets in general is far less difficult than testing specifically for chaos alone.
I then began searching numerically for bifurcation boundaries in various well known macroeconometric models. I have so far not been able to find a single reputable dynamic macroeconometric model that does not have bifurcation boundaries within its parameter space. What is more remarkable is that those boundaries often cross the confidence region around the point estimate of the parameters. That phenomenon damages the robustness of dynamical inferences, since more than one kind of dynamics can be produced by the same model with settings of the parameters remaining within the confidence region. My conclusion is that the common procedure of simulating policy models solely with the parameters set at their point estimates is misguided, since it places all of the emphasis on only one of the dynamical solution possibilities that the data cannot reject. The simulations should instead be conducted at various settings throughout the confidence region, to determine all of the possible dynamics consistent with the model and the data.
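In sketch form, the procedure amounts to the following. The two-parameter model, point estimate, and covariance matrix below are purely illustrative and are not taken from any actual macroeconometric model:

```python
# Hypothetical sketch: instead of simulating only at the point estimate,
# sample parameter settings from the estimator's confidence region and
# classify the local dynamics at each draw.
import numpy as np

rng = np.random.default_rng(1)
theta_hat = np.array([0.9, 0.3])          # point estimate (illustrative)
cov = np.array([[0.004, 0.001],
                [0.001, 0.009]])          # estimated covariance (illustrative)

def jacobian(theta):
    a, b = theta
    # Jacobian of a stylized two-equation linear difference system
    return np.array([[a, b],
                     [-b, a]])

def classify(J):
    # spectral radius below one: damped dynamics; above one: explosive
    return "damped" if np.max(np.abs(np.linalg.eigvals(J))) < 1.0 else "explosive"

draws = rng.multivariate_normal(theta_hat, cov, size=2000)
labels = [classify(jacobian(th)) for th in draws]
for kind in sorted(set(labels)):
    print(f"{kind}: {labels.count(kind) / len(labels):.1%}")
# If both classifications occur with nontrivial frequency, a bifurcation
# boundary crosses the confidence region, and simulation at the point
# estimate alone is not robust.
```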
This finding does not imply there is anything wrong with economic models. The existence of bifurcation subsets of the parameter space is well known and understood in systems science and is not a negative reflection on the model. I presented these results at a conference attended by Peter Tinsley, for whom I had worked in the Special Studies Section at the Federal Reserve Board many years earlier. He walked up to me afterwards and with much enthusiasm informed me that he had encountered such phenomena with macroeconometric models at the Board and never understood the source at that time. Now he understood.
Serletis: Did the subject of bifurcation have relevancy for your earlier work as an engineer on the space program?
Barnett: Yes, very much so. Although I worked primarily on the F-1 booster engine, I also did some work on the second stage J-2 engine. That engine was state of the art, using liquid hydrogen as the fuel. Liquid hydrogen is extremely cold. Mixing and burning liquid hydrogen with liquid oxygen in a rocket engine yields very powerful thrust relative to weight, but requires challenging cryogenic engineering. Unpredictably, the rocket engine occasionally blew up on the test stand. Those explosions created crisis situations, with NASA officials and engineers arriving in a state of great alarm.
Rocketdyne employed a very sophisticated mathematical systems theorist, who was asked to investigate the stability of the J-2 engine’s design. His resulting paper was extremely complicated and fully understood only by a few of the firm’s engineers. But his conclusion was clear. The design was fundamentally stable, but small changes in the design’s parameters could bifurcate the engine’s dynamics into an unstable region. This insight was passed on to the workers in the factory. One of those highly skilled machinists then determined that there was a possible very small manufacturing discrepancy, previously viewed as negligible, in a mechanical part of the engine. The catastrophic consequences of that minor discrepancy had not been anticipated. Once that potential small change in a parameter was prevented in the factory, the problem was solved.
There is a moral to the story. Econometricians tend to think about "errors in the variables" in terms of a mapping from one Euclidean space to another. For example, a small change in a quantity can cause a small change in an estimated elasticity of substitution. But engineers and systems theorists tend to think about mappings from a Euclidean space into a dynamical function space. A small change in an initial condition or parameter in the Euclidean space can produce fundamentally different dynamical solution paths in the function space. An objective of quality control in manufacturing and engineering is to avoid bifurcations of the mapping to the dynamical solution space.
In contrast, macroeconomists who judge policy prescriptions by simulations conducted only at point estimates of parameters ignore the compromises in robustness caused by bifurcation boundaries crossing the confidence region of the parameter estimates. Many of the most controversial differences between the physical sciences and economics are caused by this one fundamental difference in emphasis.
Serletis: What about the empirical tests for chaos you mentioned earlier? There was a period when people were publishing interesting papers in top journals, but this does not seem to be an active research area these days.
Barnett: It remains a hot topic in other fields, especially with laboratory data from controlled experiments and data from closed systems. But unfortunately, in macroeconomics we rarely have those kinds of data. Our sample sizes are relatively small and produced from uncontrolled open systems—open to phenomena outside the field of economics, such as the weather. Initially, it was believed that findings of chaos with economic data would be surprising and informative. In fact, the only surprises that could have been found from that literature would have been failures to detect chaos in economic data, since the economy is subject to chaotic shocks from outside the system. As a result, that literature hit a dead end, with findings of chaos potentially disconnected from the economy’s structure.
The possible solution would be to test for structural chaos from within an econometric model, so that the finding could be imputed to the economy. But as mentioned in my prior reply, testing for chaos conditionally upon a macroeconometric model is enormously difficult. Economists who have tried have usually given up and backed out of that research when they recognized how difficult it was.
Finding a Hopf bifurcation boundary is far easier than finding a chaotic bifurcation boundary. Finding a chaotic bifurcation boundary within the economy would be potentially much more important, because of the highly informative nature of the resulting fractal attractor set. But the increase in research complexity seems out of proportion to the potential gain. Economists who previously were looking for chaos are more likely now looking for Hopf, period doubling, transcritical, or singularity bifurcation. In short, it is a matter of the research investment's "cost-benefit analysis." This might change in the future with advances in computer science, mathematics, and statistics. But Grandmont was probably right when he said that the tools to bring down the research cost of structural chaos identification without experimental data are not likely to become available soon.
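For reference, the condition located numerically in a continuous-time model is the standard one: a Hopf bifurcation occurs at a parameter value $\mu_0$ when a complex-conjugate pair of eigenvalues of the Jacobian crosses the imaginary axis transversally,

$$\lambda(\mu) \;=\; \alpha(\mu) \pm i \beta(\mu), \qquad \alpha(\mu_0) = 0, \quad \beta(\mu_0) \neq 0, \quad \left.\frac{d\alpha}{d\mu}\right|_{\mu = \mu_0} \neq 0,$$

at which point a limit cycle is born.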
Serletis: This area of research is outside the modern core of macroeconomics, which includes both the real business cycle approach and the New Keynesian approach. Do you think that our recent experience with the global financial crisis and Great Recession, and the fact that a large number of economists have raised questions about the value of modern macroeconomics, could help stimulate further research in this area?
Barnett: I have great respect for all modern macroeconomic research, and I display no biases when serving in my role as editor of the journal, Macroeconomic Dynamics. But if this area of research is “outside the modern core of macroeconomics,” then that is a sad commentary on the modern core. This research is not about the choice of model or the estimation of the model. It is about accurately and honestly extracting from an estimated model the dynamical information contained in the model and data. This area of research is equally as applicable to all macroeconomic models, whether real business cycle, New Keynesian, Austrian, post-Keynesian, Marxist, monetarist, or DSGE.
The advances in microfoundations for macro in recent decades have been dramatic. But because of the exceptional policy relevance of macroeconomics, many macroeconomists feel obligated to reach definitive conclusions about policy. This kind of pressure can compromise the normal standards of science.
Suppose you were employed by a central bank, and you were asked to determine a macroeconometric model’s solution path for the final targets of policy, conditionally upon a particular instrument path being considered by the bank’s governors. Would you want to reply that the model says “maybe this would happen or maybe that would happen, but the model cannot distinguish between the two, since a hypothesis test cannot reject either of the two possibilities”? That would probably not go over too well. Instead you might decide to simulate the model with the parameters set only at their point estimates, and ignore other points in the estimator’s confidence region.
What macroeconometric models can do well is rule out possibilities that would arise with the parameters outside the confidence region. That is very valuable information. Most of the world’s economic catastrophes were caused by ignoring what macroeconomics can rule out. Examples could include Pol Pot’s Khmer Rouge Cambodian economic disaster and Mao’s catastrophic Cultural Revolution and Great Leap Forward. Khieu Samphan, put in charge of the Cambodian economy by Pol Pot, had a PhD from the University of Paris, but ignored modern macroeconomic theory. If he had not done so, he would have known better than to close down the central bank, abolish money, and institute a barter economy. But the tendency of modern macroeconomists to ignore outcomes that the estimated model cannot rule out is inconsistent with the normal standards of science. No physical scientist or engineer would do that.
There are many reasons for the current controversies about macroeconomics. Some of those controversies reflect more on the limitations of the critics than on the research they criticize. But one of the sources is the profession’s tendency to overstate its capabilities, provided to the public without suitable qualifications. Indeed, the results on bifurcation stratification of confidence regions could help to decrease that problem by strengthening the macroeconomic profession’s ties to the normal standards of science. Physical scientists are careful to qualify their conclusions relative to the current state of their knowledge. Macroeconomists should do the same.
Another source of controversies about recent macroeconomic research is the tendency to seek to explain national income determination while evading the need to measure money, an omission that troubles many people.

7. Founding of Journals, Monograph Series, and Societies

Serletis: You are the Founder and Editor of the Cambridge University Press journal, Macroeconomic Dynamics. Can you share with me your experiences in starting up the journal?
Barnett: There was a conflict between another well-known journal and its society at around the time that I started up Macroeconomic Dynamics in 1996–1997. The journal was the Journal of Economic Dynamics and Control (JEDC). The society was called the Society for Economic Dynamics and Control. I was a member of the society and knew people involved on both sides of the conflict. The society wanted to be able to select that journal’s editorial board, which presumably meant changing existing members. But the Society did not own the journal. Elsevier owned the journal and wanted to retain control of the editorial board’s membership. Consequently, the society approached Academic Press with a proposal to start up a new journal, with the society being authorized to appoint the editorial board. Tom Cooley was to be the managing editor. There was to be heavier emphasis on real business cycle theory and less emphasis on optimal control theory than was the case with the JEDC. Academic Press turned down the proposal.
There were bad feelings about this conflict, both within the society and on the journal’s editorial board. I called Ed Prescott, who was an officer of the society, regarding the society’s concerns, and I also called Steve Turnovsky, one of the JEDC journal’s editors at the time, regarding the journal’s concerns. I explained that I could start up a new journal that would be purely scientific and neutral regarding the differences of opinion between the society and the journal. I explained that I could propose the new journal to Cambridge University Press, with which I had good relations, since I was editor of one of that publisher’s monograph series. I was advised by Ed and by Steve that it would be a good idea, and I should do it as a possible means of solving the problem. I proposed the new journal to Cambridge University Press, which accepted the proposal.
Serletis: Macroeconomic Dynamics is now an established international journal, publishing high quality articles in a wide variety of areas in economics. Are you satisfied with the journal’s growth over the last 20 years?
Barnett: Perhaps a better question would be whether Cambridge University Press is satisfied, since CUP owns the journal. The answer to that question is definitely—yes!
Regarding my own views, I have consistently underestimated the journal's growth. The journal has rapidly increased from publishing an annual total of 576 printed pages, spread over four quarterly issues, to printing 2208 annual pages, spread over eight issues. I have always been cautious about requesting increases in the annual printed pages budget, since the journal's priority is quality. Our average rejection rate is 90%. But the growth in submission rates has exceeded my expectations. As a result, the journal's printed page budget has consistently lagged behind what is needed to keep the backlog down to a comfortable level. The backlog of accepted papers at CUP is now about 2 years long and has remained at 2 years for a few years, despite large annual increases in the printed space budget. It takes about 2 years for an accepted paper delivered to the publisher to appear in print. Publication in the journal's online edition is faster, but the two-year backlog for the in-print edition is still too long.
The problem is that the growth in submissions has repeatedly been higher than my requested increase in the printed pages budget. There are only two ways to decrease the backlog: increase the printed pages budget or increase the rejection rate. But increasing the rejection rate to over 90% would require rejections of papers without adequate justification from the referees’ reports. The problem with my forecasts of needed print space has been the international explosion of research in dynamic macroeconomics. Initially, I had underestimated that growth in Europe. Now the growth in Asia is simply astonishing.
About a year after the startup of Macroeconomic Dynamics, the Society for Economic Dynamics and Control (SEDC) changed its name to the Society for Economic Dynamics (SED) and started up a new Elsevier journal, the Review of Economic Dynamics (RED). Had it not been for the fortuitous RED startup, Macroeconomic Dynamics, as a single journal, would not have been able to handle the explosive growth in submissions without spinning off a second CUP journal. The growth of RED has been very helpful in absorbing some of that growth and keeping the growth of Macroeconomic Dynamics under control.
Serletis: I can only imagine the complex academic politics involved in editing a major journal like Macroeconomic Dynamics. Are you willing to share some of your experiences?
Barnett: I did so in the article, “The Internal Politics of Journal Editing,” which appeared in Michael Szenberg and Lall Ramrattan (eds.), Shared Secrets of Economic Editors: Experiences of Journal Editors, MIT Press, 2014, pp. 163–69. Perhaps it would be best if I did not do that again. Discretion and cautious wording are necessary for editors of highly selective journals.
In addition to having been offered bribes (which I never accept), I have received multiple threats, have had my personal Wikipedia entry hacked by an angry author, have been attacked by a group of angry authors on a blog, and have had to deal with repeated hacks of the online membership file of the Society for Economic Measurement, most likely from an angry author. In one such case, an author had sequentially threatened editors of three well known journals, while including death threats within his repertoire. He was arrested (in Canada), jailed, and then deported.
The percentage of authors who are mentally unbalanced is very small. But with the high submission rate to Macroeconomic Dynamics, occasional encounters with such people are unavoidable. As a result, I try to keep a low profile and as much distance as possible between me and anything that might set off such a person. For that reason, I would prefer not to expand further on what I have already written in the article, "The Internal Politics of Journal Editing."
Serletis: I have found the Interviews Section of Macroeconomic Dynamics very useful and interesting. Have you interviewed all the people that you wanted to interview?
Barnett: There are always young economists moving up into the ranks of the greats and meriting interviews. Regarding those previously invited, I can think of only two who declined. They were Jean-Michel Grandmont and Robert Solow. Solow explained that he does not believe in the “cult of personality,” and wants to be judged solely by his published research. Grandmont’s reason was more puzzling. He complained that I had published a disproportionate number of interviews with Americans. I could not help the fact that a disproportionate number of the most senior famous economists, including many Nobel Prize winners, were Europeans who decided to move to America during and shortly after the Second World War.
Interviews are quotations, and hence cannot be altered by editors of journals. Even the most famous economists normally are not free to say whatever they might want to say in a regular peer reviewed journal article. Since published interviews provide that ability, economists invited for published interviews rarely decline. For example, David Cass, in his interview in Macroeconomic Dynamics, used the four-letter "f" word in one of his replies, regarding a former dean. Normally Cambridge University Press would not have agreed to publish that sentence, but could not remove or reword it over David's objection, since it was a quotation.
As editor and founder, Peter Phillips pioneered high level professional interviews with senior scientists in the Cambridge University Press journal, Econometric Theory (ET). The ET model was followed later by Statistical Science and other journals, including Macroeconomic Dynamics.
Serletis: You collected some of these interviews in the book, Inside the Economist’s Mind: Conversations with Eminent Economists, that you edited with Paul Samuelson and published in 2007 by Wiley-Blackwell. This book must have been received very well, judging from the fact that it has been published in a number of languages. Is this correct?
Barnett: Yes, the book has been very successful and has been translated into seven languages, with authorization by the original publisher, who owns the copyright. But it has likely been translated into more languages than are known to the book's publisher. Some countries are not signatories to the international agreement on copyright. For example, a journalist in Iran wrote to me a few years ago that he planned to translate the book into Farsi. He explained that he did not need to acquire permission from, or pay royalties to, Wiley-Blackwell, since Iran is not a signatory to that international agreement. I would have no way of knowing whether he ever did that, since such unauthorized translations do not appear in Books in Print. Aside from keeping such translations out of Books in Print, there is nothing that publishers can do about such unauthorized translations in countries that are not signatories to the international copyright agreement.
Serletis: You are also the Founder and Editor of the Emerald Press monograph series, International Symposia in Economic Theory and Econometrics. Were your experiences with this series similar to those with Macroeconomic Dynamics?
Barnett: The history of that monograph series is very different. The Berkeley Symposia in Statistics monograph series had grown into annual multi-volume sources of outstanding research and had published many famous papers, such as the Kuhn-Tucker paper. But when Reagan became Governor of California, his cuts to the budget of the University of California resulted in the termination of that monograph series. The sad outcome seemed to me to cause harm to some of the participating professions, including statistics, econometrics, operations research, and economic theory.
As I have mentioned above, I was on good terms with George Kozmetsky, while I was on the faculty of the University of Texas at Austin. I proposed to him that a new monograph series be created, with similar objectives to the famous Berkeley Symposia, to be sponsored by the IC2 Institute at the University of Texas and by his family foundation, the RGK Foundation. He agreed and provided the funds for the annual conferences. They were held at the IC2 Institute. The conferences and the monograph series were very successful. George not only reimbursed the travel expenses of invited speakers, but also paid large honoraria to speakers. In addition, he invited speakers to a dinner at his ranch in the Texas Hill Country. Our invitations to speakers were rarely turned down. George also had an incredible house in Austin and a mansion in Bel Air, California, next door to Walt Disney’s house. No, we were not invited to those residences.
After I left the University of Texas, the conferences' connection with the IC2 Institute and the RGK Foundation decreased, although I still am a Fellow of the IC2 Institute. The conferences are no longer held in Texas; instead, they can be held anywhere in the world where a proposed conference meets the standards of the monograph series. I receive such proposals often, but the number accepted for inclusion in that monograph series is small, since the volumes are peer reviewed in a manner more common for journals than for monograph series. In recent years, the successful proposals have most often come from Europe.
Serletis: You are also the Founder and President of the Society for Economic Measurement (SEM). What motivated you to start the Society?
Barnett: This goes back to my experiences at Rocketdyne as a systems development engineer. In engineering and the physical sciences, investment in measurement is very high. I would not even guess what it must have cost to run the rocket engine tests at Rocketdyne. In economics, the allocation of the profession’s resources to measurement is much lower than in other scientific fields, and economists seem to be willing to use whatever data are provided by governments, even when internally inconsistent with the economic theory that produces the models within which the data are used. This has long bothered me, going back to my days as a student on leave from Rocketdyne. Although I never met Simon Kuznets, I did admire his work when I was a student and used his data in my dissertation research.
With the proliferation of economics societies in many areas, including some rather obscure, narrowly defined areas, I was struck by the fact that there was no society for economic measurement. This deficiency tended to widen the gap between economics and other sciences. I mentioned this concern to some relevant economists, and all were enthusiastic about creating such a society, dedicated to "measurement with theory." In addition, Steve Spear rapidly was able to acquire agreement from Carnegie Mellon University to host the society. Some of the officers of the Society for Advancement of Economic Theory (SAET) agreed to provide advice and information needed to assist in the start-up of the new society, based on their experience with the highly successful SAET. Following the creation of SEM's Executive Committee, everything fell into place rapidly.
Serletis: Are you satisfied with the growth and achievements of the Society for Economic Measurement so far?
Barnett: The society's growth and achievements in North America and Europe have been excellent, with the membership now rapidly approaching 1000 economists. But membership from Asian and South American economists is relatively low. The first four conferences included two in North America and two in Europe, but none in Asia or South America. Since growth of the economics profession in Asia has been remarkable, it now seems time to run a conference in Asia. We are planning that the 2018 conference will be in Xiamen, China.
Although the growth of the Latin American economics profession has been less dramatic than the growth of the Asian economics profession, the society probably should eventually run a conference in South America. At some point in the future, a conference in Australasia (Australia or New Zealand) might also be justified, and even further into the future, perhaps eventually South Africa. But I currently am agreeing to continue as the society’s President only through 2018. That already exceeds the bylaws’ three-year term in office of the president. Future SEM presidents might wish to consider expanding the society’s scope and reach to extend outside of North America, Asia, and Europe.
Serletis: How does the Center for Financial Stability (CFS) fit into your research program on Divisia monetary aggregation, and how did your relationship with CFS in New York City develop?
Barnett: When the financial crisis hit, the St. Louis Federal Reserve froze its Divisia monetary aggregates data. This was a serious problem for my research and the research of others who wanted to investigate the role of monetary policy in the crisis and the subsequent Great Recession. At the time that the Federal Reserve was becoming a less dependable source of Divisia monetary aggregates data, I began working on my book, Getting It Wrong, and wrote an opinion editorial article published in the New York Times. My New York Times article fortunately was read by Steve Hanke at Johns Hopkins University. Although we had not previously known each other, he liked the article and got in touch with me. I sent him the manuscript of my unfinished book, on which he provided very valuable comments, used in revising the book before publication.
Having invested his time in reading and commenting on the book’s manuscript, he grasped its significance fully and contacted Larry Goodman about my work. Larry had recently founded the nonprofit Center for Financial Stability, an exceptionally admirable venture, providing professionally produced data and research to the public at no charge. The CFS began as a trustworthy service—exactly when needed the most by a disillusioned and confused public. At Steve’s suggestion, Larry read my book’s manuscript and provided extensive helpful comments used in further revising the manuscript. Larry and I thereby got to know each other.
When we first met, the CFS was well on its way with infrastructure. The CFS had substance—a spectacular Board, group of experts, and vision. Funded by donations, the CFS is an independent, nonpartisan, nonprofit organization focused on financial markets for officials, investors, and the public. When Larry contacted me about having the CFS develop and supply my database and assist in my research, I jumped at the opportunity to work with people of such high integrity.
He offered to set up a CFS program, Advances in Monetary and Financial Measurement (AMFM), with my serving as Director. The CFS offered to run algorithms developed by me and to provide the results to the public through releases, as well as to maintain the historical data online with exceptional professionalism and expertise. They created a tremendous amount of computer code and developed, under my leadership, reports to engage a wide range of audiences. Additionally, CFS offered an incredible platform. They host visitors at the highest levels in finance, government, and academia from over 187 of the 195 countries in the world. Since then, the role that the CFS, Larry, and Steve have played in this research has continued to grow. The CFS has gone from Larry’s idea to an extraordinary institution in four years, with AMFM playing a central part of the institution and story.

8. Reflections

Serletis: What are you working on these days, in addition to your extension of the Divisia monetary aggregates to incorporate credit card transactions?
Barnett: I work with many PhD students. What they are willing to do influences what I am willing to attempt, if a lot of computing is involved. For example, I am interested in trying to solve the problem of how to model bank behavior econometrically, when bank managers behave in a manner that appears to be risk averse. No bank manager is willing to make loans of unlimited quantities. But under Arrow-Debreu perfect market theory, risk neutrality of managers would be incentive compatible with risk aversion of bank owners. Perhaps this paradox suggests that contingent claims markets are incomplete or that there is asymmetric information. In either case, how to incorporate such complications into an econometric model of financial intermediation is a challenging problem. This might be relevant to understanding the monetary transmission mechanism and value added in banking. A next stage would be to extend to shadow banking.
I am also interested in investigating macroeconometric stochastic bifurcation and nonlinear bifurcation, either of which would involve a great deal of difficult iterative computing. I currently have students beginning to work with me on each of those difficult problems. Whether we will succeed is not yet clear.
Serletis: Most of your work has been about nonlinearity and measurement. Is this because of your intellectual origins as a rocket scientist?
Barnett: I am not sure that the causation is so clear, since economic theory almost never produces linearity. But certainly, my prior life as a rocket scientist has influenced me in those ways.
Serletis: Are the measurement issues in macroeconomics different from those in rocket science?
Barnett: Yes. Measurement in rocket science, as well as in many other areas of engineering, is in continuous time. The machine is heavily instrumented, and variables are measured and recorded as continuous time plots. Discrete time modeling and data are much less common in the physical sciences than in economics. In fact, I’ve always been somewhat uncomfortable about discrete time models in economics. Although markets are open in continuous time throughout the day, discrete time economic theory says that nothing happens in the interior of time intervals, with all transactions happening at boundaries between time intervals. Hence markets open and close at those boundary points, while remaining closed within the interior of the time interval. Since the sequence of boundary points is Lebesgue measure zero on the line, the unavoidable conclusion is that the economy exists “almost nowhere,” in measure theoretic terminology. Bergstrom and Wymer attempted to remedy that problem, but with limited success (Bergstrom and Wymer 1976).
Like almost all other economists, I regularly close my eyes to that unpleasant theoretical implication and often use discrete time models with discrete time data. There is, of course, an internal inconsistency in using data produced by the economy in continuous time throughout the interior of intervals, as if the data had appeared as instantaneous spikes at the boundaries between intervals. Physical scientists don’t make that “mistake.” When they use a discrete time model, they sample the variables at an instant of time at the boundaries between intervals, and derive their discrete time models from the continuous time theory, subject to the sampling convention. But discrete time sampling in the physical sciences produces its own problem, called “aliasing” by time series statisticians.
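A one-line statement of the aliasing problem: sampling at equally spaced instants $t = n\Delta$ cannot distinguish frequencies that differ by an integer multiple of $2\pi/\Delta$, since

$$e^{\,i(\omega + 2\pi k/\Delta)\, n\Delta} \;=\; e^{\,i\omega n\Delta}\, e^{\,i 2\pi k n} \;=\; e^{\,i\omega n\Delta} \qquad \text{for all integers } k \text{ and } n,$$

so high-frequency dynamics in the underlying continuous time process masquerade as low-frequency dynamics in the sampled data.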
Rocketdyne never sampled in discrete time. Their instrumentation recorded continuous time plots of all measured variables. Continuous time modeling and measurement in finance is closer to the physical sciences approach than discrete time modeling and measurement in economics.
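The aliasing problem mentioned above is easy to demonstrate numerically. A minimal Python sketch, assuming only NumPy (the frequencies and variable names are purely illustrative): a process oscillating at 9 Hz, sampled at 10 Hz, is indistinguishable at the sample instants from a 1 Hz process, so the discrete-time record cannot identify the continuous-time dynamics.

```python
import numpy as np

fs = 10.0                      # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)    # two seconds of discrete sample instants

# The "true" continuous-time process oscillates at 9 Hz, well above the
# Nyquist frequency fs / 2 = 5 Hz, so sampling at 10 Hz cannot identify it.
true_signal = np.sin(2 * np.pi * 9 * t)

# At the sample instants it coincides exactly with a low-frequency wave:
# 9 Hz aliases to 9 - fs = -1 Hz, a 1 Hz sinusoid of opposite phase.
alias = np.sin(2 * np.pi * (9 - fs) * t)

assert np.allclose(true_signal, alias)
print("The 9 Hz process and its 1 Hz alias agree at every sample instant.")
```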
Serletis: Most of your publications have been coauthored. Does this also reflect your intellectual origins as a collaborative rocket scientist?
Barnett: Again, that causation is not so clear, although it is probably relevant. A significant percentage of my research at the Federal Reserve Board was single-authored, since I did not have PhD students there, except as Federal Reserve interns.
Serletis: Can you tell me about your experiences with Ph.D. students, since you started at the University of Texas at Austin? Did you have any superstar students?
Barnett: Many of my students have become very successful, but one became particularly famous. Salam Fayyad was my PhD student at the University of Texas at Austin. After retiring from his career at the IMF, he became Minister of Finance and then Prime Minister of the Palestinian Authority (PA). He is best known internationally for his courageous attempts to combat corruption within the PA.
When he revealed the theft of PA funds by Yasser Arafat, many media reporters thought Salam would be assassinated. But he was protected by the PA Police, since he had doubled their take-home salaries by depositing their paychecks directly into their bank accounts, thereby circumventing the prior skimming of half of their salaries by one of the PA’s other Ministers.
Of my more recent students, the ones who have become very successful comprise such a long list that it would be difficult to know whom to mention. But one, who has been rising especially rapidly, is Travis Nesmith. He was one of my students at Washington University and is now an Assistant Director at the Federal Reserve Board in Washington, DC.
Serletis: You have worked in a number of university economics departments (the University of Texas at Austin, Washington University, and now the University of Kansas), at the Federal Reserve Board, and as an engineer at a high-tech aerospace firm. How different were these jobs, in terms of lifestyle?
Barnett: By far, the lifestyle at Rocketdyne was the most dramatically different from the others. I had a Secret Security Clearance and had to be at work no later than precisely 8 a.m. each weekday. If I arrived at 8:01 a.m., the gates around the building were locked. I had to go to a security guard shack, where the guard would call my boss, who had to come to the gate to authorize my entry. The work was exciting, but the stress level was very high whenever something went wrong with an engine test.
We worked under NASA cost-plus-incentive-fee contracts. In terms of the incentive compatibility of the firm’s mechanism design, it was nearly perfect. We knew exactly what we had to do, by when, and at what cost. No one who stays in that industry until retirement age continues working past it; by then, they really need to retire.
Serletis: How do you manage being so productive after so many years?
Barnett: On that question, I will defer to Solow’s statement about the “cult of the personality.” I’ll only say that I do what I believe in, and that kind of motivation is very compelling.

9. Advice for Students

Serletis: Do you have any advice for Ph.D. economics students?
Barnett: Never look back; always look forward, except when you are being interviewed by Apostolos Serletis. Never dance the “zeibekiko” on the side of a mountain in Greece without first having had a couple of glasses of ouzo. If your work is not also self-motivated recreation, then you are doing something wrong.
Macroeconomists have done much research over the years on policy rules versus discretionary policy. Discretionary policy with continuous replanning has been shown to be time inconsistent in some settings (Kydland and Prescott 1977). The same issues arise in career planning. At one time, most people committed to a career trajectory at an early age and stuck to it until retirement. Despite the risk of time inconsistency, resulting in a nonoptimal solution path, discretionary policies, with possibly frequent replanning, are becoming increasingly relevant to career choices, as the world becomes a smaller place and change becomes more rapid.
I have followed a winding career path, with many twists and turns, focusing on different fields and different kinds of employment, including private sector, government, and academia. While my career trajectory might seem time inconsistent to some, I do not hesitate to replan and to adjust to different circumstances and changing intellectual interests. Although I cannot speak Russian, I am currently considering an offer to direct a research institute at a university in Moscow. Who would have thought?
Serletis: This is a good place to end. Thank you for the interview.

10. Selected Bibliography of William A. Barnett

Books
1981
Consumer Demand and Labor Supply: Goods, Monetary Assets, and Time. Amsterdam: North-Holland.
2000
The Theory of Monetary Aggregation. Coedited with Apostolos Serletis. Amsterdam: Elsevier.
2004
Functional Structure and Approximation in Econometrics. Coedited with Jane Binner. Amsterdam: Elsevier.
2007
Inside the Economist’s Mind: Conversations with Eminent Economists. Coedited with Paul A. Samuelson. Hoboken: Blackwell Publishing.
2011
Financial Aggregation and Index Number Theory. With Marcelle Chauvet. Singapore: World Scientific.
2012
Getting It Wrong: How Faulty Monetary Statistics Undermine the Fed, the Financial System, and the Economy. Cambridge: The MIT Press.
Articles
1976
Maximum likelihood and iterated Aitken estimation of non-linear systems of equations. Journal of the American Statistical Association 71: 354–60.
1977
Recursive subaggregation and a generalized hypocycloidal demand model. Econometrica 45: 1117–36.
Pollak and Wachter on the household production function approach. Journal of Political Economy 85: 1073–82.
1978
The user cost of money. Economics Letters 1: 145–49.
1979
Theoretical foundations for the Rotterdam model. Review of Economic Studies 46: 109–30.
The joint allocation of leisure and goods expenditure. Econometrica 47: 539–63.
1980
Economic monetary aggregates: An application of index number and aggregation theory. Journal of Econometrics 14: 11–48.
1983
New indices of money supply and the flexible Laurent demand system. Journal of Business and Economic Statistics 1: 7–23.
1984
The new Divisia monetary aggregates. With Edward Offenbacher and Paul Spindt. Journal of Political Economy 92: 1049–85.
1985
The global properties of the minflex Laurent, generalized Leontief, and translog flexible functional forms. With Yul Lee. Econometrica 53: 1421–37.
The minflex Laurent translog flexible functional form. Journal of Econometrics 30: 33–44.
The three-dimensional global properties of the minflex Laurent, generalized Leontief, and translog flexible functional forms. With Yul Lee and Michael Wolfe. Journal of Econometrics 30: 3–31.
1987
The global properties of the two minflex Laurent flexible functional forms. With Yul Lee and Michael Wolfe. Journal of Econometrics 36: 281–98.
1988
The aggregation-theoretic monetary aggregates are chaotic and have strange attractors: An econometric application of mathematical chaos. With Ping Chen. In Dynamic Econometric Modeling. Edited by W.A. Barnett, E. Berndt, and H. White. Cambridge: Cambridge University Press. Paper presented at the Third International Symposium in Economic Theory and Econometrics.
Semiparametric estimation of the Asymptotically Ideal Model: The AIM demand system. With Pi-Yu Yue. Advances in Econometrics 7: 229–51.
1990
A dispersion-dependency diagnostic test for aggregation error: With applications to monetary economics and income distribution. With Apostolos Serletis. Journal of Econometrics 43: 5–34.
1991
Seminonparametric Bayesian estimation of the Asymptotically Ideal Production model. With John Geweke and Michael Wolfe. Journal of Econometrics 49: 5–50.
1992
Consumer theory and the demand for money. With Douglas Fisher and Apostolos Serletis. Journal of Economic Literature 30: 2086–119.
1995
Exact aggregation under risk. In Social Choice, Welfare, and Ethics. Edited by William A. Barnett, Maurice Salles, Herve Moulin, and Norman Schofield. Cambridge: Cambridge University Press.
Robustness of nonlinearity and chaos tests to measurement error, inference method, and sample size. With A. Ronald Gallant, Melvin Hinich, Mark Jensen, and Jochen A. Jungeilges. Journal of Economic Behavior and Organization 27: 301–20.
1997
Which road leads to stable money demand? Economic Journal 107: 1171–85.
The CAPM risk adjustment for exact aggregation over financial assets. With Yi Liu and Mark Jensen. Macroeconomic Dynamics 1: 485–512.
A single-blind controlled competition among tests for nonlinearity and chaos. With A. Ronald Gallant, Melvin J. Hinich, Jochen A. Jungeilges, Daniel T. Kaplan, and Mark J. Jensen. Journal of Econometrics 82: 157–92.
2000
Martingales, nonlinearity, and chaos. With Apostolos Serletis. Journal of Economic Dynamics and Control 24: 703–24.
2002
Tastes and technology: Curvature is not sufficient for regularity. Journal of Econometrics 108: 199–202.
Stabilization policy as bifurcation selection: Would stabilization policy work if the economy really were unstable? With Yijun He. Macroeconomic Dynamics 6: 713–47.
2005
On user costs of risky monetary assets. With Shu Wu. Annals of Finance 1: 35–50.
2007
Multilateral aggregation-theoretic monetary aggregation over heterogeneous countries. Journal of Econometrics 136: 457–82.
2008
Operational identification of the complete class of superlative index numbers: An application of Galois theory. With Ki-Hong Choi. Journal of Mathematical Economics 44: 603–12.
Consumer preferences and demand systems. With Apostolos Serletis. Journal of Econometrics 147: 210–24.
2010
Empirical assessment of bifurcation regions within New Keynesian models. With Evgeniya A. Duzhak. Economic Theory 45: 99–128.
2011
How better monetary statistics could have signaled the financial crisis. With Marcelle Chauvet. Journal of Econometrics 161: 6–23.
Bifurcation analysis of Zellner’s Marshallian macro model. With Sanjibani Banerjee, Evgeniya Duzhak, and Ramu Gopalan. Journal of Economic Dynamics and Control 35: 1577–85.
2015
Nonlinear and complex dynamics in economics. With Apostolos Serletis and Demitre Serletis. Macroeconomic Dynamics 19: 1749–79.
2016
An analytical and numerical search for bifurcations in open economy New Keynesian models. With Unal Eryilmaz. Macroeconomic Dynamics 20: 482–503.
Real-time nowcasting of nominal GDP with structural breaks. With Marcelle Chauvet and Danilo Leiva-Leon. Journal of Econometrics 191: 312–24.

References

1. Bergstrom, Albert R., and C. R. Wymer. 1976. A model of disequilibrium neoclassical growth and its application to the United Kingdom. In Statistical Inference in Continuous Time Economic Models. Edited by Albert R. Bergstrom. Amsterdam: Elsevier, pp. 267–328.
2. Caves, Douglas W., and Laurits R. Christensen. 1980. Global properties of flexible functional forms. American Economic Review 70: 422–32.
3. Deaton, Angus, and John Muellbauer. 1980. An almost ideal demand system. American Economic Review 70: 312–26.
4. Debreu, Gérard. 1959. Theory of Value. New York: Wiley.
5. Diewert, W. Erwin, and T. J. Wales. 1987. Flexible functional forms and global curvature conditions. Econometrica 55: 43–68.
6. Divisia, François. 1925. L’Indice Monétaire et la Théorie de la Monnaie. Revue d’Économie Politique 39: 980–1008.
7. Fisher, Irving. 1922. The Making of Index Numbers: A Study of Their Varieties, Tests, and Reliability. Boston: Houghton Mifflin.
8. Friedman, Milton, and Anna J. Schwartz. 1970. Monetary Statistics of the United States: Estimates, Sources, Methods and Data. New York: Columbia University Press (for the NBER).
9. Koopmans, Tjalling C., H. Rubin, and R. B. Leipnik. 1950. Measuring the equation systems of dynamic economics. In Statistical Inference in Dynamic Economic Models, Cowles Commission Monograph 10. Edited by Tjalling C. Koopmans. New York: Wiley, pp. 53–237.
10. Kydland, Finn E., and Edward C. Prescott. 1977. Rules rather than discretion: The inconsistency of optimal plans. Journal of Political Economy 85: 473–91.
11. Lucas, Robert E. 2000. Inflation and Welfare. Econometrica 68: 247–74.
12. Pollak, Robert A., and Michael L. Wachter. 1975. The relevance of the household production function and its implications for the allocation of time. Journal of Political Economy 83: 255–77.
13. Prigogine, Ilya, and Isabelle Stengers. 1984. Order Out of Chaos: Man’s New Dialogue with Nature. New York: Bantam.
14. Taylor, John. 1995. The Monetary Transmission Mechanism: An Empirical Framework. Journal of Economic Perspectives 9: 11–26.
1
What he said to me was consistent with what he said in less detail in his lecture in Yugoslavia, “The Anguish of Central Banking,” The 1979 Per Jacobsson Lecture, The Per Jacobsson Foundation, Belgrade, Yugoslavia, September 30, 1979.
2
3
As John Taylor observed about research on monetary aggregation: “in my view, such research is very useful. If there were a measure of the money supply with a reasonably stable or predictable velocity, monetary policy could focus on such a quantity and place less emphasis on the interest rate. With a more stable velocity, money supply targets would have advantages over interest rate-oriented policies. Money supply targets are explicit about the nominal anchor for the price level and thereby give policy a long-run focus. Money targets also imply a quick and automatic response of interest rates to business cycle fluctuations, and they provide an easy way to convey monetary policy goals and actions to the general public.” (Taylor 1995)
