I am an economist and Principal Researcher at Microsoft. My recent research has focused on demand estimation, the economics of cloud computing, and the use of Large Language Models for conducting market research.
Email: jamesbrand@microsoft.com, jamesbrandecon@gmail.com.
While at Microsoft, I’ve been able to work on internal and external research projects with fantastic PhD interns: Andres Mena (Brown), Avner Kreps (Northwestern), Rebekah Dix (MIT), Chinmay Lohani (Penn Econ), and Yihao Yuan (Wharton).
We often have openings for summer interns, so feel free to reach out if you’d like to work together.
-
In this paper I show that consumers in food stores and supermarkets/hypermarkets became significantly less price sensitive between 2006 and 2017. At the median, across thousands of stores and products in nine large categories, estimated own-price elasticities declined by 25% over this period. I argue that these changes are likely due in part to improved supply chain management, which has led stores to offer a larger variety of goods that better match consumers’ individual preferences. I show that newer products are indeed more “niche” in this sense, and that other potential sources of rising differentiation, including increases in quality and changes in consumer wealth, play a smaller role. Markups implied by a monopolistic pricing rule suggest that the observed rise in differentiation was large enough to generate significant increases in firms’ markups absent any changes in pricing behavior or competition.
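As a back-of-the-envelope illustration of the last sentence (my sketch, not taken from the paper): under a single-product monopoly pricing rule the Lerner index equals the inverse of the absolute own-price elasticity, so a 25% decline in price sensitivity mechanically raises the implied markup even if pricing conduct and competition are unchanged. The elasticity values below are placeholders, not the paper’s estimates.

```python
# Back-of-the-envelope only: under single-product monopoly pricing the Lerner
# index satisfies (p - mc) / p = 1 / |own-price elasticity|. The elasticities
# below are made up (NOT the paper's estimates); they just show how a 25%
# decline in price sensitivity maps into higher implied markups.

def lerner_markup(own_price_elasticity: float) -> float:
    """Implied (p - mc) / p under a single-product monopoly pricing rule."""
    return 1.0 / abs(own_price_elasticity)

elasticity_2006 = -4.0                    # hypothetical starting point
elasticity_2017 = 0.75 * elasticity_2006  # 25% less price sensitive

print(round(lerner_markup(elasticity_2006), 3))  # 0.25
print(round(lerner_markup(elasticity_2017), 3))  # ~0.333
```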
Estimating Productivity and Markups Under Imperfect Competition (Revision Requested, Journal of Econometrics)
-
This paper revisits the standard production function model and proposes an alternative identification and estimation procedure. Specifically, I argue that some of the assumptions of the standard production function model are inconsistent with the increasingly popular use of production function methods in the estimation of markups. I then show that the seminal nonclassical measurement error result in Hu and Schennach (2008) can be used to nonparametrically identify the production function under alternative assumptions which do not require specifying the demand firms face or any knowledge of firms’ input demand functions. I apply the intuition of this result to develop a GMM estimation procedure for the most practically relevant production function models, and explore the performance of the resulting estimates relative to workhorse methods in simulations.
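For readers less familiar with this literature, the sketch below shows the generic structure of a workhorse GMM production function estimator of the kind the paper benchmarks against: back out the productivity implied by candidate output elasticities, extract the AR(1) innovation, and require it to be uncorrelated with instruments. It is illustrative only, uses simulated placeholder data, and is not the identification argument or estimator proposed in the paper.

```python
# Workhorse-style GMM sketch for a log Cobb-Douglas production function:
# y_t = bl*l_t + bk*k_t + omega_t + e_t, with omega_t = rho*omega_{t-1} + xi_t.
# Moment conditions: E[xi_t * z_t] = 0, z_t = (1, l_{t-1}, k_t).
# Illustrative only; not the procedure proposed in the paper.
import numpy as np
from scipy.optimize import minimize

def gmm_objective(theta, y, l, k, y_lag, l_lag, k_lag):
    bl, bk = theta
    omega = y - bl * l - bk * k                          # implied productivity, period t
    omega_lag = y_lag - bl * l_lag - bk * k_lag          # implied productivity, period t-1
    rho = (omega_lag @ omega) / (omega_lag @ omega_lag)  # AR(1) coefficient, concentrated out
    xi = omega - rho * omega_lag                         # productivity innovation
    z = np.column_stack([np.ones_like(xi), l_lag, k])    # instruments
    g = z.T @ xi / len(xi)                               # sample moment vector
    return float(g @ g)                                  # identity-weighted objective

# Simulated placeholder data: one row per firm, two periods each.
rng = np.random.default_rng(0)
n = 5000
l_lag, k_lag = rng.normal(size=n), rng.normal(size=n)
l, k = 0.8 * l_lag + rng.normal(size=n), 0.9 * k_lag + rng.normal(size=n)
omega_lag = rng.normal(size=n)
omega = 0.7 * omega_lag + rng.normal(scale=0.3, size=n)
y_lag = 0.6 * l_lag + 0.3 * k_lag + omega_lag + rng.normal(scale=0.1, size=n)
y = 0.6 * l + 0.3 * k + omega + rng.normal(scale=0.1, size=n)

res = minimize(gmm_objective, x0=np.array([0.5, 0.5]),
               args=(y, l, k, y_lag, l_lag, k_lag), method="Nelder-Mead")
print(res.x)  # should be close to the true (0.6, 0.3)
```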
Works in Progress
Dynamic Frictions and Cloud Computing: A Study of Misallocation and Business Dynamism (with Mert Demirer and Rebekah Dix)
A Quasi-Bayes Approach to Nonparametric Demand Estimation (with Adam Smith)
Other Work
Contributed to Microsoft’s New Future of Work Report, 2023, which details many of the ways LLMs are changing, and will continue to change, the way people work, including the work of researchers in various fields.
Working Papers
Firm Productivity and Learning in the Digital Economy: Evidence from Cloud Computing (with Mert Demirer, Connor Finucane, and Avner Kreps)
Opportunities and Risks of LLMs in Survey Research (with David Rothschild, Hope Schroeder, and Jenny Wang)
-
Recent advances in the development of large language models (LLMs) bring both disruptive opportunities and underlying risks to survey research. LLMs' capabilities for content generation and summarization tasks have already led to fast-paced innovation across social science research communities, including survey and market research, both academically and in practice. In this research note, we outline opportunities for LLMs to assist in survey creation, testing, analysis, and reporting. Backed by both practical examples and academic literature, we identify areas for research and development, distinguishing between challenges related to survey methods and the tools used to deploy surveys, a distinction necessary for the field to benefit from potential opportunities while minimizing potential risks. Further, we emphasize how different advances affect the degree of agency for the researcher. Overall, we are cautiously optimistic that LLM-based tools will augment, as opposed to replace, the researcher in the long run, and will allow the survey research industry to scale.
-
Large language models (LLMs) have rapidly gained popularity as labor-augmenting tools for programming, writing, and many other processes that benefit from quick text generation. In this paper we explore the uses and benefits of LLMs for researchers and practitioners who aim to understand consumer preferences. We focus on the distributional nature of LLM responses and query the Generative Pre-trained Transformer 3.5 Turbo (GPT-3.5 Turbo) model to generate dozens of responses to each survey question. We offer two sets of results to illustrate and assess our approach. First, we show that estimates of willingness-to-pay for products and features derived from GPT responses are realistic and comparable to estimates from human studies. Second, we demonstrate a practical method for market researchers to enhance GPT's responses by incorporating previous survey data from similar contexts via fine-tuning. This method improves the alignment of GPT's responses with human responses for existing and, importantly, new product features. We do not find similar improvements in alignment for new product categories or for differences between customer segments.
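To make the “distributional” idea concrete, here is a minimal sketch of the general approach: query the chat model many times per survey question and treat the responses as a distribution over simulated consumers. This is not the paper’s code; the prompt, product, and price grid are placeholders, and the paper’s willingness-to-pay estimates come from a more careful survey design than this toy purchase-intent question.

```python
# Minimal sketch (placeholder prompt and prices, not the paper's survey design):
# ask the same purchase question many times and read the answers as a
# distribution over simulated consumers.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simulated_choices(price: float, n_responses: int = 30) -> Counter:
    """Ask the model the same purchase question n_responses times."""
    prompt = (
        "You are a randomly selected US consumer shopping for toothpaste. "
        f"A 6 oz tube of a well-known brand costs ${price:.2f}. "
        "Would you buy it? Answer with exactly one word: Yes or No."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        n=n_responses,     # draw many responses to the same question
        temperature=1.0,   # keep sampling noise: the distribution is the object of interest
        max_tokens=2,
    )
    return Counter(c.message.content.strip().rstrip(".").lower() for c in resp.choices)

# Sweep prices and read off the simulated purchase rate; the price at which
# the rate crosses 50% is a crude summary of willingness to pay.
for price in [2.0, 3.0, 4.0, 5.0]:
    counts = simulated_choices(price)
    print(price, round(counts.get("yes", 0) / sum(counts.values()), 2))
```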