I am an economist and Principal Researcher at Microsoft. My recent research has focused on demand estimation, the economics of cloud computing, and the use of Large Language Models for conducting market research.
Email: jamesbrand@microsoft.com, jamesbrandecon@gmail.com.
While at Microsoft, I’ve been able to work on internal and external research projects with fantastic PhD interns: Andres Mena (Brown), Avner Kreps (Northwestern), Rebekah Dix (MIT), Chinmay Lohani (Penn Econ), and Yihao Yuan (Wharton).
We often have openings for summer interns, so feel free to reach out if you’d like to work together.
-
In this paper I show that consumers in food stores and supermarkets/hypermarkets became significantly less price sensitive between 2006 and 2017. At the median, across thousands of stores and products in nine large categories, estimated own-price elasticities declined by 25% over this period. I argue that these changes are likely due in part to improved supply chain management, which has led stores to offer a larger variety of goods that better match consumers’ individual preferences. I show that newer products are indeed more “niche” in this sense, and that other potential sources of rising differentiation, including increases in quality and changes in consumer wealth, play a smaller role. Markups implied by a monopolistic pricing rule suggest that the observed rise in differentiation was large enough to generate significant increases in firms’ markups absent any changes in pricing behavior or competition.
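As a point of reference for the final claim (my own illustration, not material from the paper), the standard single-product monopoly pricing rule links markups directly to the own-price elasticity, so a decline in the magnitude of elasticities mechanically raises implied markups:

```latex
% Standard monopoly (Lerner) pricing rule; notation is illustrative, not the paper's.
% p = price, c = marginal cost, \varepsilon = own-price elasticity (with |\varepsilon| > 1).
\frac{p - c}{p} = \frac{1}{|\varepsilon|}
\qquad\Longleftrightarrow\qquad
p = \frac{|\varepsilon|}{|\varepsilon| - 1}\, c.
% Example: if |\varepsilon| falls from 4 to 3 (a 25% decline), the implied
% multiplicative markup over marginal cost rises from 4/3 to 3/2.
```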
Estimating Productivity and Markups Under Imperfect Competition (Revision Requested, Journal of Econometrics)
-
This paper revisits the standard production function model and proposes an alternative identification and estimation procedure. Specifically, I argue that some of the assumptions of the standard production function model are inconsistent with the increasingly popular use of production function methods in the estimation of markups. I then show that the seminal nonclassical measurement error result in Hu and Schennach (2008) can be used to nonparametrically identify the production function under alternative assumptions which do not require specifying the demand firms face or any knowledge of firms’ input demand functions. I apply the intuition of this result to develop a GMM estimation procedure for the most practically relevant production function models, and explore the performance of the resulting estimates relative to workhorse methods in simulations.
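For context (my notation, not taken from the paper), the “standard production function model” referenced above is typically written in logs with Hicks-neutral productivity that follows a first-order Markov process:

```latex
% Canonical production function setup in logs (illustrative notation).
% y, k, l, m = log output, capital, labor, and materials of firm i in period t;
% \omega = persistent productivity; \epsilon = ex-post shock / measurement error.
y_{it} = f(k_{it}, l_{it}, m_{it}; \beta) + \omega_{it} + \epsilon_{it},
\qquad
\omega_{it} = g(\omega_{i,t-1}) + \eta_{it}.
```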
Works in Progress
Dynamic Frictions and Cloud Computing: A Study of Misallocation and Business Dynamism (with Mert Demirer and Rebekah Dix)
Other work
Contributed to Microsoft’s New Future of Work Report, 2023, which details many of the ways in which LLMs are changing, and will change, the way people work, including the work of researchers in various fields.
Working Papers
-
Recent advances in the development of large language models (LLMs) bring both disruptive opportunities and underlying risks to survey research. LLMs' capabilities for content generation and summarization tasks have already led to fast-paced innovation across social science research communities, including survey and market research, both academically and in practice. In this research note, we outline opportunities for LLMs to assist in survey creation, testing, analysis, and reporting. Backed by both practical examples and academic literature, we identify areas for research and development, distinguishing between challenges related to survey methods and those related to the tools used to deploy surveys, a distinction necessary for the field to benefit from potential opportunities while minimizing potential risks. Further, we emphasize how different advances affect the degree of agency for the researcher. Overall, we are cautiously optimistic that LLM-based tools will augment, rather than replace, the researcher in the long run, and will allow the survey research industry to scale.
-
Large language models (LLMs) have rapidly gained popularity as labor-augmenting tools for programming, writing, and many other processes that benefit from quick text generation. In this paper we explore the uses and benefits of LLMs for researchers and practitioners who aim to understand consumer preferences. We focus on the distributional nature of LLM responses and query the Generative Pre-trained Transformer 3.5 Turbo (GPT-3.5 Turbo) model to generate dozens of responses to each survey question. We offer two sets of results to illustrate and assess our approach. First, we show that estimates of willingness-to-pay for products and features derived from GPT responses are realistic and comparable to estimates from human studies. Second, we demonstrate a practical method for market researchers to enhance GPT's responses by incorporating previous survey data from similar contexts via fine-tuning. This method improves the alignment of GPT's responses with human responses for existing and, importantly, new product features. We do not find similar improvements in alignment for new product categories or for differences between customer segments.
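For readers curious about the mechanics, the following is a minimal sketch of the kind of repeated querying described above, written against the OpenAI Python SDK. The prompt wording, persona, price point, and answer parsing are illustrative assumptions of mine, not the survey instrument used in the paper.

```python
# Minimal sketch: elicit a distribution of survey-style answers from GPT-3.5 Turbo.
# Prompt, persona, and price are illustrative assumptions, not the paper's instrument.
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "You are a randomly selected US consumer shopping for toothpaste. "
    "Would you buy a tube of your preferred brand at a price of $4.99? "
    "Answer with exactly one word: Yes or No."
)

def sample_responses(question: str, n: int = 30) -> list[str]:
    """Query the model n times and return the raw one-word answers."""
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        n=n,              # n independent completions per request
        temperature=1.0,  # keep sampling noise: the distribution is the object of interest
        max_tokens=3,
    )
    return [choice.message.content.strip() for choice in completion.choices]

if __name__ == "__main__":
    answers = sample_responses(QUESTION)
    counts = Counter(a.lower().rstrip(".") for a in answers)
    share_yes = counts.get("yes", 0) / len(answers)
    print(f"Simulated purchase share at $4.99: {share_yes:.2f} ({dict(counts)})")
```

Repeating this across a grid of prices traces out a simulated demand curve from which a willingness-to-pay estimate can be backed out; the paper's actual elicitation and estimation procedures may of course differ.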
-
This paper presents a quasi-Bayes approach to estimating nonparametric demand systems for differentiated products. We transform the GMM objective function developed by Compiani (2022) into a quasi-likelihood, specify priors that penalize violations of micro-founded economic constraints, and develop novel Bayesian inference procedures. We use simulations and retail scanner data from 12 consumer packaged goods categories to show that our quasi-Bayes approach improves both the accuracy of estimated elasticities and the validity of estimated demand functions. Together, our results demonstrate the value of (i) disciplining flexible nonparametric estimators with judicious economic constraints, and (ii) using Bayesian methods to accommodate such constraints. Finally, we introduce a new Julia package (NPDemand.jl) that implements both GMM and quasi-Bayes approaches to estimation.
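For intuition about the first step, one common way to turn a GMM objective into a quasi-likelihood follows the Laplace-type (quasi-Bayes) estimator literature; the notation below is mine, and the paper's exact construction may differ:

```latex
% Quasi-posterior built from a GMM criterion (Laplace-type / quasi-Bayes sketch).
% \hat{g}_n(\theta) = sample moment conditions, \hat{W}_n = weighting matrix,
% \pi(\theta) = prior (here, one penalizing violations of the economic constraints).
Q_n(\theta) = \tfrac{n}{2}\, \hat{g}_n(\theta)^{\prime}\, \hat{W}_n\, \hat{g}_n(\theta),
\qquad
p_n(\theta \mid \text{data}) \;\propto\; \exp\{-\,Q_n(\theta)\}\,\pi(\theta).
```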
Firm Productivity and Learning in the Digital Economy: Evidence from Cloud Computing (with Mert Demirer, Connor Finucane, and Avner Kreps)
-
Computing technologies have become critical inputs to production in the modern firm. However, there is little large-scale evidence on how efficiently firms use these technologies. In this paper, we study firm productivity and learning in cloud computing by leveraging CPU utilization data from over one billion virtual machines used by nearly 100,000 firms. We find large and persistent heterogeneity in compute productivity both across and within firms, similar to canonical results in the literature. More productive firms respond better to demand fluctuations, show higher attentiveness to resource utilization, and use a wider variety of specialized machines. Notably, productivity is dynamic as firms learn to be more productive over time. New cloud adopters improve their productivity by 33% in their first year and reach the productivity level of experienced firms within four years. In our counterfactual calculations, we estimate that raising all firms to the 80th percentile of productivity would reduce aggregate electricity usage by 17%.