Adam McCloskey

Associate Professor

University of Colorado, Boulder

Department of Economics

Curriculum Vitae

Publications

Critical Values Robust to P-hacking (with Pascal Michaillat), forthcoming at the Review of Economics and Statistics.

Short and Simple Confidence Intervals when the Directions of Some Effects are Known (with Philipp Ketz), Review of Economics and Statistics, 107 (2025), 820-834.

Matlab Code, Stata Code (available from the SSC archive: type "ssc install ssci")

Hybrid Confidence Intervals for Informative Uniform Asymptotic Inference After Model Selection, Biometrika, 111 (2024), 109-127.

Stata package implementation by Kirill Kushnarev (Trinity College Dublin)

Inference on Winners (with Isaiah Andrews and Toru Kitagawa), Quarterly Journal of Economics, 139 (2024), 305-358.

Stata Code, R Code

2019 Version (referenced in "Inference After Estimation of Breaks")

Inference for Losers (with Isaiah Andrews, Dillon Bowen and Toru Kitagawa), American Economic Association Papers and Proceedings, 112 (2022), 635-640.

Inference After Estimation of Breaks (with Isaiah Andrews and Toru Kitagawa), Journal of Econometrics, 224 (2021), 39-59.

Asymptotically Uniform Tests After Consistent Model Selection in the Linear Regression Model, Journal of Business and Economic Statistics, 38 (2020), 810-825.

Estimation and Inference with a (Nearly) Singular Jacobian (with Sukjin Han), Quantitative Economics, 10 (2019), 1019-1068.

Bonferroni-Based Size-Correction for Nonstandard Testing Problems, Journal of Econometrics, 200 (2017), 17-35.

Parameter Estimation Robust to Low-Frequency Contamination (with Jonathan B. Hill), Journal of Business and Economic Statistics, 35 (2017), 598-610.

Memory Parameter Estimation in the Presence of Level Shifts and Deterministic Trends (with Pierre Perron), Econometric Theory, 29 (2013), 1196-1237.

Estimation of the Long-Memory Stochastic Volatility Model Parameters that is Robust to Level Shifts and Deterministic Trends, Journal of Time Series Analysis, 34 (2013), 285-301.

Working Papers

Uniform Critical Values for Likelihood Ratio Tests in Boundary Problems (with Giuseppe Cavaliere, Rasmus Pedersen and Anders Rahbek)

It is well known that the limit distributions of likelihood ratio statistics are discontinuous in the presence of nuisance parameters at the boundary of the parameter space, which leads to size distortions when standard critical values are used for testing. In this paper, we propose a new and simple way of constructing critical values that yields uniformly correct asymptotic size, regardless of whether nuisance parameters are at, near or far from the boundary of the parameter space. Importantly, the proposed critical values are trivial to compute and at the same time provide powerful tests in most settings. In comparison to existing size-correction methods, the new approach exploits the monotonicity of the two components of the limiting distribution of the likelihood ratio statistic, in conjunction with rectangular confidence sets for the nuisance parameters, to gain computational tractability. Uniform validity is established for likelihood ratio tests based on the new critical values, and we provide illustrations of their construction in two key examples: (i) testing a coefficient of interest in the classical linear regression model with non-negativity constraints on control coefficients, and (ii) testing for the presence of exogenous variables in autoregressive conditional heteroskedasticity (ARCH) models with exogenous regressors. Simulations confirm that the tests have desirable size and power properties. A brief empirical illustration demonstrates the usefulness of our proposed test in relation to testing for spillovers and ARCH(-X) effects.
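As background for readers less familiar with boundary problems, the following is a standard textbook illustration of the discontinuity referred to above (the classic one-parameter case, not the nuisance-parameter setting treated in the paper):

```latex
% Textbook illustration only (not taken from the paper): testing
%   H_0: \theta = 0   versus   H_1: \theta > 0
% when the maintained parameter space imposes \theta \ge 0.
% With the true value on the boundary, the likelihood ratio statistic has a
% chi-bar-squared mixture limit rather than the usual \chi^2_1 limit:
\[
  LR_n \;\xrightarrow{\;d\;}\; \tfrac{1}{2}\,\chi^2_0 \;+\; \tfrac{1}{2}\,\chi^2_1
  \qquad \text{under } H_0,
\]
% where \chi^2_0 denotes a point mass at zero. An interior (unconstrained)
% parameter would instead give the standard \chi^2_1 limit; this change in
% the limit is the discontinuity that standard critical values ignore.
```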

Identification and Estimation of Causal Effects in High-Frequency Event Studies (with Alessandro Casini)

Replication Code

We consider identification, estimation and inference in high-frequency event study regressions, which have been used widely in the recent macroeconomics, financial economics and political economy literatures. The high-frequency event study method regresses changes in an outcome variable on a measure of unexpected changes in a policy variable in a narrow time window around an event or a policy announcement (e.g., a 30-minute window around an FOMC announcement). We show that, contrary to popular belief, the narrow size of the window is not sufficient for identification. Rather, the population regression coefficient identifies a causal estimand when (i) the effect of the policy shock on the outcome does not depend on the other variables (separability) and (ii) the surprise component of the news or event dominates all other variables that are present in the event window (relative exogeneity). Technically, the latter condition requires the ratio between the variance of the policy shock and that of the other variables to be infinite in the event window. Under these conditions, we establish the causal meaning of the event study estimand corresponding to the regression coefficient and super-consistency of the event study estimator with rate of convergence faster than the parametric rate. We show the asymptotic normality of the estimator and propose bias-corrected inference. We also provide bounds on the worst-case bias and use them to quantify its impact on the worst-case coverage properties of confidence intervals, as well as to construct a bias-aware critical value. Notably, this standard linear regression estimator is robust to general forms of nonlinearity. We apply our results to Nakamura and Steinsson’s (2018a) analysis of the real economic effects of monetary policy, providing a simple empirical procedure to analyze the extent to which the standard event study estimator adequately estimates causal effects of interest. 
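As a purely illustrative reference point, here is a minimal sketch (not the authors' code) of the baseline high-frequency event-study regression described above, using hypothetical variable names; it does not implement the paper's identification analysis, bias corrections or bias-aware inference.

```python
import numpy as np

def event_study_ols(surprise, outcome_change):
    """OLS of outcome changes on policy surprises across announcement windows.

    surprise: unexpected policy changes measured inside each narrow event
        window (e.g., futures-implied rate surprises around announcements).
    outcome_change: changes in the outcome variable over the same windows.
    Returns the slope estimate and a heteroskedasticity-robust (HC0)
    standard error.
    """
    x = np.asarray(surprise, dtype=float)
    y = np.asarray(outcome_change, dtype=float)
    X = np.column_stack([np.ones_like(x), x])       # intercept + surprise
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS coefficients
    resid = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ np.diag(resid ** 2) @ X            # HC0 "meat" matrix
    vcov = XtX_inv @ meat @ XtX_inv
    return beta[1], np.sqrt(vcov[1, 1])

# Hypothetical usage with simulated surprises and outcome changes:
rng = np.random.default_rng(0)
s = rng.normal(size=200)                     # policy surprises per window
dy = 0.8 * s + 0.1 * rng.normal(size=200)    # outcome changes per window
b, se = event_study_ols(s, dy)
print(f"slope = {b:.3f}, robust s.e. = {se:.3f}")
```

In practice the inputs would be built from tick- or minute-level data restricted to the announcement windows; the point of the paper is that the narrow window alone does not guarantee that this slope has a causal interpretation.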

Inference for Interval-Identified Parameters Selected from an Estimated Set (with Sukjin Han)

Interval identification of parameters such as average treatment effects, average partial effects and welfare is particularly common when using observational data and experimental data with imperfect compliance, due to the endogeneity of individuals' treatment uptake. In this setting, a treatment or policy typically becomes an object of interest to the researcher either when it is selected from the estimated set of best performers or when it arises from a data-dependent selection rule. In this paper, we develop new inference tools for interval-identified parameters chosen via these forms of selection. We develop three types of confidence intervals for data-dependent and interval-identified parameters, discuss how they apply to several examples of interest and prove their uniform asymptotic validity under weak assumptions.

Retired Papers

On the Computation of Size-Correct Power-Directed Tests with Null Hypotheses Characterized by Inequalities

Heavy Tail Robust Frequency Domain Estimation (with Jonathan B. Hill)

Supplemental Material

Semiparametric Testing for Changes in Memory of Otherwise Stationary Time Series

Contact Information

Adam McCloskey

Department of Economics

University of Colorado at Boulder

256 UCB

Boulder, CO 80309

Phone: (303) 735-7908 Fax: (303) 492-8960 E-mail: adam.mccloskey@colorado.edu
