I am an academic economist specializing in industrial organization.
My work leverages theory, empirics and modern computation to better understand the equilibrium implications of policies and proposals involving information revelation, risk sharing and commitment. My projects span a number of policy settings, including public procurement, pharmaceutical pricing and auto-insurance.
I am a postdoctoral fellow at the Stanford Institute for Economic Policy Research (SIEPR) for 2019-2020 and will be joining the Economics group at the Stanford GSB as an assistant professor starting in the summer of 2020.
Job Market Paper
with Valentin Bolotnyy
The U.S. government spends about $165B per year on highways and bridges, or about 1% of GDP. Much of this spending flows through "scaling" procurement auctions, in which private construction firms submit unit price bids for each piece of material required to complete a project. The winner is the firm with the lowest total cost, evaluated at government estimates of the quantity of each material needed, but, critically, the winner is paid based on the quantities actually used.
This creates an incentive for firms to skew their bids, bidding high when they believe the government is underestimating an item's quantity and vice versa, and it raises concerns about rent extraction among policymakers. For risk-averse bidders, however, scaling auctions provide a distinctive way to generate surplus: they enable firms to limit their risk exposure by placing lower unit bids on items with greater uncertainty.
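The score-versus-payment wedge behind this incentive can be seen in a stylized numeric example (all unit bids and quantities below are made up for illustration): the auction score is computed at the government's quantity estimates, while payment is computed at realized quantities.

```python
# Stylized example of bid skewing in a scaling auction (hypothetical
# numbers). The score uses the government's quantity estimates;
# payment uses the quantities actually realized.
est  = {"steel": 100, "paint": 50}     # government quantity estimates
real = {"steel": 130, "paint": 50}     # realized quantities: steel was underestimated

flat = {"steel": 10.0, "paint": 10.0}  # uniform unit bids
skew = {"steel": 12.0, "paint":  6.0}  # bid up the underestimated item

def total(bid, qty):
    """Total cost of a unit-bid vector at a given quantity vector."""
    return sum(bid[k] * qty[k] for k in qty)

# Both bids receive the same score, so they fare equally in the auction...
print(total(flat, est), total(skew, est))    # 1500.0 1500.0
# ...but the skewed bid is paid more once realized quantities come in.
print(total(flat, real), total(skew, real))  # 1800.0 1860.0
```

With risk-averse bidders, the same logic runs in reverse for uncertain items: shifting bid weight away from high-variance quantities trades expected revenue for lower risk exposure.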
To assess this effect empirically, we develop a structural model of scaling auctions with risk-averse bidders. Using data on bridge maintenance projects undertaken by the Massachusetts Department of Transportation (MassDOT), we present evidence that bidding behavior is consistent with optimal skewing under risk aversion. We then estimate bidders' risk aversion, the risk in each auction, and the distribution of bidders' private costs. Finally, we simulate equilibrium item-level bids under counterfactual settings to estimate the fraction of MassDOT spending that is attributable to risk and to evaluate alternative mechanisms under consideration by MassDOT. We find that scaling auctions provide substantial savings to MassDOT relative to lump-sum auctions, and we suggest several policies that might improve on the status quo.
The United States spends twice as much per person on pharmaceuticals as European countries, in large part because prices are higher in the US. This has led US policymakers to consider price-control legislation. This paper assesses the effects of a hypothetical US reference pricing policy that would cap prices in US markets at those offered in Canada.
We estimate a structural model of demand and supply for pharmaceuticals in the US and Canada, in which Canadian prices are set through a negotiation process between pharmaceutical companies and the Canadian government. We then simulate the impacts of the counterfactual international reference pricing rule, allowing firms to internalize the cross-country impacts of their prices both when setting prices in the US and when negotiating prices in Canada.
We find that such a policy would slightly decrease US prices and substantially increase Canadian prices, with magnitudes that depend on the particular structure of the policy. Overall, we find modest consumer welfare gains in the US but substantial consumer welfare losses in Canada. Moreover, pharmaceutical profits increase on net, suggesting that reference pricing of this form would constitute a net transfer from consumers to firms.
with Yizhou Jin
This paper develops an empirical framework for direct transactions of consumer data. We use it to study the design and impact of auto-insurance monitoring programs, in which insurers incentivize consumers to opt into having their driving behavior monitored for a short period of time.
We acquire proprietary datasets from a major U.S. auto insurer that offers a monitoring program. These data are matched with the price menus of the firm's main competitors. We develop a model of consumers' demand for insurance and for monitoring, as well as of the cost to insure them. Key parameters are estimated using rich variation in insurance claims, prices, the contract space, and monitoring status. We then conduct counterfactual simulations using a dynamic pricing model that endogenizes the firm's information set.
We find three main results:
(i) Data collection changes consumer behavior. Drivers become 30% safer when monitored, which boosts total surplus and alters the informativeness of the data.
(ii) Demand for monitoring interacts with the product market. Safer drivers are more likely to opt in, but monitoring take-up is low, due both to consumers' innate distaste for being monitored and to attractive outside options from other insurers. Nonetheless, introducing monitoring raises consumer welfare by 3% of premiums per year.
(iii) Proprietary data facilitate higher markups but protect the firm's ex-ante incentive to produce the data. A counterfactual equilibrium in which the firm must share monitoring data with competitors harms both profit and consumer welfare: the firm offers weaker upfront incentives for monitoring opt-in, so fewer drivers are monitored.
A concern central to the economics of privacy is that firms may exploit consumer data to engage in greater price discrimination. A common response to these concerns is that consumers should have sovereignty over their own data and should choose whether firms may access it. But since the market may draw inferences whenever a consumer withholds information about her preferences, the strategic implications of consumer data sovereignty are unclear.
This paper investigates whether such measures improve consumer welfare in both monopolistic and competitive environments. We consider the interaction between a consumer, whose preferences are not known to the market, and a market that makes price offers to that consumer based on verifiable disclosures about her type. We show that the consumer can optimally use verifiable information about her preferences to create exclusive partial pools that guarantee gains relative to perfect price discrimination.
Fast Bayesian Inference on Large-Scale Random Utility Logit Models
Random coefficients logit is a benchmark model for discrete choice, widely used in marketing and industrial organization. In the "conjoint analysis" conducted in experimental marketing, it has historically been estimated using a Metropolis-within-Gibbs sampler or with simulated likelihoods (Train 2009). In economic problems where only aggregate data are available, it is estimated via GMM using the BLP algorithm of Berry, Levinsohn, and Pakes (1995).
We propose a latent variable formulation of the model, as in Yang, Chen, and Allenby (2003), for both individual choice and aggregate data, which can be estimated efficiently using Hamiltonian Monte Carlo (HMC). This offers several benefits over the current standards. Relative to Metropolis-within-Gibbs, HMC handles enormous parameter spaces efficiently, allowing much larger (and even context-dependent choice) models to be fit in hours rather than days or weeks. The proposed approach models aggregate sales rather than shares, so, unlike BLP, it allows for measurement error due to differences in market size. Priors also regularize the loss surface, yielding estimates that are robust where GMM objectives would be susceptible to local minima.
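For intuition on the underlying model, the sketch below (plain numpy, not the paper's implementation) computes mixed logit choice probabilities by averaging conditional logit probabilities over draws of the random coefficients; the attributes and hyperparameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: J products with K observed attributes; tastes are
# heterogeneous across consumers, beta_i ~ Normal(mu, diag(sigma^2)).
J, K, S = 4, 3, 5000                      # S = Monte Carlo draws
X = rng.normal(size=(J, K))               # hypothetical product attributes
mu = np.array([1.0, -0.5, 0.25])          # mean tastes (made up)
sigma = np.array([0.5, 0.3, 0.2])         # taste heterogeneity (made up)

def mixed_logit_probs(mu, sigma, X, S, rng):
    """Average the conditional logit over draws of the random coefficients."""
    beta = mu + sigma * rng.normal(size=(S, len(mu)))  # (S, K) taste draws
    v = beta @ X.T                                      # (S, J) utilities
    v -= v.max(axis=1, keepdims=True)                   # numerical stability
    p = np.exp(v)
    p /= p.sum(axis=1, keepdims=True)                   # logit given beta
    return p.mean(axis=0)                               # integrate out beta

probs = mixed_logit_probs(mu, sigma, X, S, rng)
print(probs.round(3))                     # J choice shares summing to 1
```

HMC replaces this brute-force integration by sampling the consumer-level latent utilities and population parameters jointly from the posterior, which is what makes very large taste-parameter spaces tractable.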
with Muhamet Yildiz
RAND Journal of Economics, Vol. 50, No. 2 (Summer 2019)
Empirical evidence shows that equally informed, experienced negotiators may refuse to settle because they fundamentally disagree about each side's probability of success.
We study the dynamics of agreement to settle in pretrial negotiations when the negotiating parties are both optimistic and new information may arrive at any point.
We characterize the conditions under which the negotiators do or do not reach an agreement in every period of negotiation, and we discuss the implications for policy design, such as the timing of discovery and jury selection, and whether to allow the winning party to recover its legal costs from the losing party.
How well can an informed central planner like Waze do at routing drivers over paths with uncertain wait times using an incentive-compatible protocol?
We find that the mediation ratio is at most 8/7 in the case of two links with affine cost functions, and it remains strictly smaller than the 4/3 price of anarchy for any fixed number of links m, although it approaches the price of anarchy as m grows. For general (monotone) cost functions, the mediation ratio is at most m, a significant improvement over the unbounded price of anarchy.
Every year, the Harvard economics department celebrates the end of the year with a winter holiday party, featuring a "skit" by every cohort of graduate students, as well as the faculty and staff of the department.
My cohort has established a tradition of creating musical skits with parody lyrics to popular songs, chronicling our graduate student experience.
I Just Found a Dataset
G1: December 2013
G2: December 2014
Where is the [G3] Love?
G3: December 2015
[Economists] Rise Up
G4: December 2016
G5: December 2017