Nested Pseudo-GMM Estimation of Demand for Differentiated Products
We propose a fast algorithm for computing the GMM estimator in the BLP demand model (Berry, Levinsohn, and Pakes, 1995). Inspired by nested pseudo-likelihood methods for dynamic discrete choice models, our approach avoids repeatedly solving the inverse demand system by swapping the order of the GMM optimization and the fixed-point computation. We show that, by fixing consumer-level outside-option probabilities, BLP’s market-share to mean-utility inversion becomes closed-form and, crucially, separable across products, yielding a nested pseudo-GMM algorithm with analytic gradients. The resulting estimator scales dramatically better with the number of products and is naturally suited for parallel and multithreaded implementation. In the inner loop, outside-option probabilities are treated as fixed objects while a pseudo-GMM criterion is minimized with respect to the structural parameters, substantially reducing computational cost. Monte Carlo simulations and an empirical application show that our method is significantly faster than the fastest existing alternatives, with efficiency gains that grow more than proportionally in the number of products.
💡 Research Summary
The paper introduces a novel algorithm for estimating the Berry‑Levinsohn‑Pakes (BLP) random‑coefficients demand model using the Generalized Method of Moments (GMM). The traditional nested‑fixed‑point (NFXP) approach solves a high‑dimensional fixed‑point problem for the mean utilities (δ) at every candidate parameter vector, which becomes computationally prohibitive as the number of products grows. Inspired by nested pseudo‑likelihood methods for dynamic discrete‑choice models, the authors propose a “nested pseudo‑GMM” (NP‑GMM) estimator that reverses the order of the fixed‑point iteration and the GMM optimization.
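To make the NFXP bottleneck concrete, here is a minimal sketch of the inner fixed-point step that the traditional approach re-solves at every candidate θ: the BLP contraction δ ← δ + log s_obs − log s(δ). For illustration it uses plain-logit shares in a single market; the function names and the use of NumPy are our own choices, not the paper's code.

```python
import numpy as np

def logit_shares(delta):
    # Plain-logit market shares for one market; the outside good has mean utility 0.
    e = np.exp(delta)
    return e / (1.0 + e.sum())

def blp_contraction(s_obs, tol=1e-12, max_iter=1000):
    # BLP (1995) contraction mapping: delta <- delta + log(s_obs) - log(s(delta)).
    # NFXP runs this to convergence at EVERY trial parameter vector, which is
    # what becomes costly as the number of products grows.
    delta = np.log(s_obs) - np.log(1.0 - s_obs.sum())  # logit starting value
    for _ in range(max_iter):
        step = np.log(s_obs) - np.log(logit_shares(delta))
        delta = delta + step
        if np.max(np.abs(step)) < tol:
            break
    return delta
```

With random coefficients, s(δ) would be an integral over consumer types and each iteration would require simulating shares; the pure-logit version above converges immediately because the starting value already solves it exactly.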
The key insight is to treat the vector of consumer‑level outside‑option choice probabilities (π₀ᵢ) as incidental statistics. Conditional on a fixed π₀, the market‑share‑to‑mean‑utility inversion admits a closed‑form expression that is separable across products: δⱼ = log sⱼ − log (1 − ∑ₖ sₖ). Consequently, the inner loop no longer requires iterative inversion; it simply updates π₀ and computes δ analytically. The outer loop then minimizes a pseudo‑GMM objective G̃(θ) = m̃(θ)′W m̃(θ), where m̃(θ) = Z′ξ̃(θ) interacts the instruments Z with the structural residuals ξ̃(θ) implied by the closed‑form inversion.
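The closed-form, product-separable inversion above can be sketched in a few lines. This is a simplified illustration under the assumption that the outside-good share s₀ = 1 − ∑ₖ sₖ is held fixed within the inner loop, as the summary describes; the function name is hypothetical.

```python
import numpy as np

def delta_closed_form(s_obs):
    # Separable inversion: delta_j = log s_j - log s_0, with the outside-good
    # share s_0 = 1 - sum_k s_k treated as a fixed object in the inner loop.
    # Each delta_j depends only on s_j and the common scalar s_0, so the map
    # vectorizes trivially and parallelizes across products.
    s0 = 1.0 - s_obs.sum()
    return np.log(s_obs) - np.log(s0)
```

Because no iterative root-finding is needed, the cost of one inner-loop pass is linear in the number of products, which is the source of the scaling gains the paper reports.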