
with Timothy Cogley and Viktor Tsyrennikov
July 2012
In our heterogeneous-beliefs, incomplete-markets models, precautionary and speculative motives coexist. Missing markets for Arrow securities affect the size and avenues for precautionary savings. Survival dynamics suggested by Friedman (1953) and studied by Blume and Easley (2006) depend on whether agents can trade a disaster-state security. When the market for a disaster-state security is closed, precautionary savings flow into risk-free bonds, prompting less-informed investors to accumulate wealth. Because speculation motives are strongest for the disaster-state Arrow security, opening this market brings outcomes close to those for a complete-markets benchmark in which it is instead well-informed investors who accumulate wealth. Speculation is more limited in other cases, and outcomes for wealth dynamics are closer to those in an economy in which only a risk-free bond can be traded.

with George Evans, Seppo Honkapohja, and Noah Williams
January 2012
Agents have two forecasting models, one consistent with the unique rational expectations equilibrium, the other assuming a time-varying parameter structure. When agents use Bayesian updating to choose between the models in a self-referential system, we find that learning dynamics lead to selection of one of the two models. However, there are parameter regions for which the non-rational forecasting model is selected in the long run. A key structural parameter governing outcomes measures the degree of expectations feedback in Muth’s model of price determination.
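The model-choice step described above rests on recursive application of Bayes' law to model probabilities. A minimal sketch, in which the Gaussian predictive densities and all parameter values are illustrative assumptions rather than quantities from the paper:

```python
import numpy as np

# Hypothetical sketch: recursive Bayesian updating of the probability
# assigned to model A versus model B, using each model's one-step-ahead
# predictive density evaluated at the realized observation. The normal
# predictive densities below are illustrative assumptions.

def normal_pdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def update_model_prob(prior_a, y, pred_a, pred_b):
    """Bayes' law: posterior odds = prior odds times the likelihood ratio."""
    like_a = normal_pdf(y, *pred_a)  # model A's predictive density at y
    like_b = normal_pdf(y, *pred_b)  # model B's predictive density at y
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

# Example: observations cluster near 1.0, favoring model A (mean 1.0)
# over model B (mean 0.0), so the posterior probability of A rises.
p = 0.5
for y in [0.9, 1.1, 0.8, 1.2]:
    p = update_model_prob(p, y, pred_a=(1.0, 0.5), pred_b=(0.0, 0.5))
print(p)
```

In a self-referential system the predictive densities would themselves depend on which model agents are using, which is what makes long-run selection of the non-rational model possible.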

with Timothy Cogley and Viktor Tsyrennikov
December 2011
This paper studies market prices of risk in an economy populated by two types of agents with diverse beliefs. The paper studies both a complete-markets economy and a risk-free-bonds-only (Bewley) economy.

with Timothy Cogley and Viktor Tsyrennikov
November 2011
We study an economy in which two types of agents have diverse beliefs about the law of motion for an exogenous endowment. One type knows the true law of motion, and the other learns about it via Bayes’s theorem. Financial markets are incomplete, the only traded asset being a risk-free bond. Borrowing limits are imposed to ensure the existence of an equilibrium. We analyze how financial-market structure affects the distribution of financial wealth and survival of the two agents. When markets are complete, the learning agent loses wealth during the learning transition and eventually exits the economy (Blume and Easley, 2006). In contrast, in a bond-only economy, the learning agent accumulates wealth, and both agents survive asymptotically, with the knowledgeable agent driven to his debt limit. The absence of markets for certain Arrow securities is central to reversing the direction in which wealth is transferred.

with Timothy Cogley and Giorgio E. Primiceri
December 2007
We use Bayesian Markov chain Monte Carlo methods to estimate two models of post-WWII U.S. inflation rates with drifting stochastic volatility and drifting coefficients. One model is univariate, the other a multivariate autoregression. We define the inflation gap as the deviation of inflation from a pure random-walk component of inflation and use both models to study changes over time in the persistence of the inflation gap, measured in terms of short- to medium-term predictability. We present evidence that our measure of inflation-gap persistence increased until Volcker brought mean inflation down in the early 1980s and then fell during the chairmanships of Volcker and Greenspan. Stronger evidence for movements in inflation-gap persistence emerges from the VAR than from the univariate model. We interpret these changes in terms of a simple dynamic New Keynesian model that allows us to distinguish altered monetary policy rules from altered private-sector parameters.

with Timothy Cogley
July 2008
We study prices and allocations in a complete-markets, pure endowment economy in which agents have heterogeneous beliefs. Aggregate consumption growth evolves exogenously according to a two-state Markov process. The economy is populated by two types of agents, one that learns about transition probabilities and another that knows them. We examine how the presence of the better-informed agent influences allocations, the market price of risk, and the rate at which asset prices converge to the values that would be computed under the typical assumption that all agents know the transition probabilities.
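The learning agent's problem above has a textbook conjugate structure: with Beta priors over the rows of a two-state transition matrix, Bayesian updating reduces to counting observed transitions. A minimal sketch, in which the true transition matrix, the Beta(1, 1) priors, and the sample length are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

# Hypothetical sketch: a learner updates beliefs about the transition
# probabilities of a two-state Markov chain for consumption growth.
# With Beta priors stored as counts, the posterior mean of each
# transition probability is count-based. All numbers are illustrative.

rng = np.random.default_rng(0)

# True transition matrix (unknown to the learner); rows = current state.
P_true = np.array([[0.9, 0.1],
                   [0.3, 0.7]])

# Beta(1, 1) priors on each row, stored as pseudo-counts.
counts = np.ones((2, 2))

state = 0
for _ in range(5000):
    nxt = rng.choice(2, p=P_true[state])
    counts[state, nxt] += 1      # the Bayesian update is an increment
    state = nxt

P_hat = counts / counts.sum(axis=1, keepdims=True)  # posterior means
print(P_hat)
```

As the sample grows, `P_hat` converges to `P_true`, which is the mechanism behind asset prices converging to the values an all-knowing-agents model would imply.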

with Noah Williams and Tao Zha
October 2008
This is an extensively revised and shortened version of our 2006 paper. We infer determinants of Latin American hyperinflations and stabilizations in five countries by using the method of maximum likelihood to estimate a hidden Markov model that assigns roles both to fundamentals, in the form of government deficits financed by money creation, and to destabilizing expectations dynamics that can occasionally divorce inflation from fundamentals. Levels and conditional volatilities of monetized deficits drove most hyperinflations and stabilizations, with a notable exception in Peru, where a cosmetic reform of the type emphasized by Marcet and Nicolini (2003) seems to have been at work.

with In-Koo Cho
December 2006
Prepared for the New Palgrave Dictionary of Economics.

with Timothy Cogley and Riccardo Colacito
September 2005
A policy maker knows two models of inflation-unemployment dynamics. One implies an exploitable tradeoff; the other does not. The policy maker's prior probability over the two models is part of his state vector. Bayes' law converts the prior into a posterior at each date and gives the policy maker an incentive to experiment. For a model calibrated to U.S. data through the early 1960s, we isolate the component of government policy that is due to experimentation by comparing the outcomes from two Bellman equations, the first of which `experiments and learns', the second of which `learns but doesn't experiment'. We interpret the second as an `anticipated utility' model and study how well its outcomes approximate those from the `experiment and learn' Bellman equation. The approximation is good. We use the model to study rates at which false models would be disposed of, starting from initial conditions designed to emulate (a) a situation in which the Samuelson-Solow model is true but some prior probability is attached to Lucas’s model, and (b) a situation in which the Lucas model is true but some prior probability attaches to the Samuelson-Solow model. The rates differ in interesting ways.

with Timothy Cogley
August 2007
A representative consumer is endowed with a prior that is pessimistic relative to the true data-generating mechanism. True consumption growth follows a two-state Markov chain with probabilities calibrated by Cecchetti, Lam, and Mark. We obtain a pessimistic prior by using a calculation from robust control. Then we endow agents with Bayes’ law and let time and chance remove their pessimism. From the viewpoint of a rational expectations econometrician, the stochastic discount factor has a multiplicative adjustment, the Radon-Nikodym derivative of the decision maker’s model relative to the true one. We use this framework to study how market prices of risk behave as Bayes’ law causes the `legacy of the Great Depression’ gradually to wear off. We obtain a high equity premium that gradually falls during the post-WWII period.
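The multiplicative adjustment described above can be sketched as follows, with CRRA utility assumed only for concreteness (the notation is ours, not necessarily the paper's):

```latex
% Sketch of the distorted stochastic discount factor seen by a rational
% expectations econometrician. Here \beta is the discount factor,
% \gamma a risk-aversion coefficient, \hat{f} the consumer's subjective
% one-step-ahead density for the Markov state s_{t+1}, and f the true
% density. The CRRA form is an illustrative assumption.
m_{t+1} \;=\; \beta \left( \frac{c_{t+1}}{c_t} \right)^{-\gamma}
\underbrace{\frac{\hat{f}(s_{t+1} \mid s_t)}{f(s_{t+1} \mid s_t)}}_{\text{Radon--Nikodym derivative}}
```

Early in the sample, pessimism makes the likelihood ratio large in bad states, inflating the measured market price of risk; as Bayesian learning pulls $\hat{f}$ toward $f$, the ratio drifts toward one and the premium falls.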

with Timothy Cogley
March 2005
We regard an `anticipated utility’ model like the ones advocated by David Kreps as an approximation to a model with a Bayesian agent, and we study the quality of the approximation in several contexts of interest to a macroeconomist. We display some examples in which the approximation is quite good.

with Noah Williams and Tao Zha
April 2005
We use a Bayesian Markov chain Monte Carlo algorithm jointly to estimate the parameters of a `true' data-generating mechanism and those of a sequence of approximating models that a monetary authority uses to guide its decisions. Gaps between the true expectational Phillips curve and the monetary authority's approximating non-expectational Phillips curve models unleash inflation that a monetary authority that knows the true model would avoid. A sequence of dynamic programming problems implies that the monetary authority's inflation target evolves as its estimated Phillips curve moves. Our estimates attribute the rise and fall of post-WWII inflation in the US to an intricate interaction between the monetary authority's beliefs and economic shocks. Shocks in the 1970s made the monetary authority perceive a tradeoff between inflation and unemployment that ignited big inflation. The monetary authority's beliefs about the Phillips curve then changed in ways that account for Volcker's conquest of US inflation.

with Timothy Cogley
November 2003
An adaptive Fed’s model is a mixture of three models. The Fed uses Bayesian methods to update estimates of three models of the Phillips curve: a Samuelson-Solow model, a Solow-Tobin model, and a Lucas model. Each period, the central bank also updates the probability that it assigns to each of the three models, then determines its first-period decision by solving a `Bayesian linear regulator problem’. Although by the mid-1970s the U.S. data induce the Fed to assign very high probability to the Lucas model, the government refrains from adopting its low-inflation policy recommendation because that policy has very bad consequences under one of the other low- (but not zero-) probability models. The statistical model is thus able to explain the puzzling delay in the Fed’s decision to deflate after learning the natural rate hypothesis.

with Joseph G. Pearlman
April 10, 2004
This paper uses recursive methods to compute an equilibrium of the notorious model in section 8 of Townsend’s 1983 JPE paper `Forecasting the Forecasts of Others’. The equilibrium is of finite (and even low) dimension. Market prices fully reveal traders’ private information, making the equilibrium equivalent to one in which traders pool their information before trading. This means that the problem of forecasting the forecasts of others disappears in equilibrium. There is no need to keep track of higher-order beliefs.

with Noah Williams
July 2004
This paper generalizes the learning model that Cho, Williams, and Sargent (2002) and Sargent (1999) attributed to the government, then calculates rates of convergence to and escape from selfconfirming equilibria.

with Timothy Cogley
August 26, 2002
This paper answers criticisms by Sims and Stock of our 2001 NBER Macro Annual paper. We enrich our specification of a drifting-coefficient VAR to allow stochastic volatility and study whether our earlier evidence for drifting coefficients survives this generalization. It does. We investigate the power of various tests that have been used to test time invariance of the autoregressive coefficients of VARs against alternatives like ours. All except one have low power. These results about power help reconcile our results with those of Sims and of Bernanke and Mihov. We also study evidence that monetary policy rules have drifted.

with Timothy Cogley and Sergei Morozov
September 2003
We use Bayesian methods to estimate a VAR with drifting coefficients and volatilities, then construct fan charts that we compare with those of the Bank of England’s MPC. Our fan charts incorporate several sources of uncertainty, including a form of model uncertainty that is represented by the drifting coefficients and volatilities.

with Timothy Cogley
June 2001
This paper uses Bayesian methods and post-World War II data on inflation, unemployment, and an interest rate to estimate a `drifting coefficients' model. The model is used to construct characterizations of the data that make contact with various points in Lucas's Critique and in Sargent's The Conquest of American Inflation, published by Princeton University Press. The paper constructs various measures of posterior means and variances of inflation and unemployment and studies their relationships. This paper will appear in the 2001 NBER Macroeconomics Annual.

with In-Koo Cho and Noah Williams
May 31, 2001
This paper analytically computes the `escape route' for a special case of the model in my Marshall lectures, The Conquest of American Inflation, published by Princeton University Press. We show that theoretical computations of the mean dynamics and escape dynamics do a good job of explaining simulations like those in The Conquest of American Inflation. The paper uses the mysterious insight of Michael Harrison: `If an unlikely event occurs, it is very likely to occur in the most likely way'.

with Jasmina Arifovic
August 22, 2001
Experiments with human subjects in a Kydland-Prescott Phillips curve economy.

with In-Koo Cho

with William Brock, Ramon Marimon, and John Rust
April 1988