
Interdisciplinary Seminar in Quantitative Methods Archive 2019

About the workshops

The goal of the Interdisciplinary Seminar in Quantitative Methods is to provide an interdisciplinary environment where researchers can present and discuss cutting-edge research in quantitative methodology. The talks are aimed at a broad audience, with emphasis on conceptual rather than technical issues. The research presented is varied, ranging from new methodological developments to applied empirical papers that use methodology in an innovative way. We welcome speakers and audiences from all disciplines and fields, including the social, natural, biomedical, and behavioral sciences.

Organizer: Kevin Quinn
For additional information, please contact Kevin Quinn at: [email protected].

2019-2020 Series

Balancing covariates in randomized experiments using the Gram-Schmidt walk

Wednesday, Feb. 19, 2020 | noon – 1:30 pm
Fredrik Sävje (Yale)


Abstract

The paper introduces a class of experimental designs that allows experimenters to control the robustness and efficiency of their experiments. The designs build on a recently introduced algorithm in discrepancy theory, the Gram-Schmidt walk. We provide a tight analysis of this algorithm, allowing us to prove important properties of the designs it produces. These designs aim to simultaneously balance all linear functions of the covariates, and the variance of an estimator of the average treatment effect is shown to be bounded by a quantity proportional to the loss function of a ridge regression of the potential outcomes on the covariates. No regression is actually conducted, and one may see the procedure as regression adjustment by design. The class of designs is parameterized so as to give experimenters control over the worst-case performance of the treatment effect estimator. Greater covariate balance is attained by accepting a design that is less robust in terms of worst-case variance. We argue that the trade-off between robustness and efficiency is an inherent aspect of experimental design. Finally, we provide non-asymptotic tail bounds for the treatment effect estimator under the class of designs we describe.
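As a rough illustration of the mechanics, here is a toy Python sketch of the design assembled from the description above. It is illustrative only: the pivot rules and tolerances are simplified, and it is not the authors' tested implementation.

```python
# Toy numpy sketch of the Gram-Schmidt walk design described above.
# Illustrative only; details (pivot rules, tolerances) are simplified.
import numpy as np

def gsw_design(X, phi=0.5, seed=None):
    """Assign n units to treatment (+1) or control (-1), balancing the
    covariate rows of X. phi in (0, 1] trades robustness (phi near 1
    behaves like independent coin flips) against covariate balance
    (phi near 0 balances harder)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    xi = np.linalg.norm(X, axis=1).max()
    # Augmented input vectors: the identity block caps worst-case
    # variance, the covariate block drives balance.
    B = np.vstack([np.sqrt(phi) * np.eye(n),
                   np.sqrt(1.0 - phi) * X.T / xi])
    z = np.zeros(n)                    # fractional assignment in [-1, 1]^n
    eps = 1e-9
    pivot = int(rng.integers(n))
    while True:
        alive = np.flatnonzero(np.abs(z) < 1 - eps)
        if alive.size == 0:
            break
        if abs(z[pivot]) >= 1 - eps:   # pivot frozen: draw a fresh one
            pivot = int(rng.choice(alive))
        rest = alive[alive != pivot]
        u = np.zeros(n)
        u[pivot] = 1.0
        if rest.size:
            # Step direction: u_pivot = 1, the other alive coordinates
            # chosen to minimize ||B u|| (a least-squares problem).
            v, *_ = np.linalg.lstsq(B[:, rest], -B[:, pivot], rcond=None)
            u[rest] = v
        ua, za = u[alive], z[alive]
        m = np.abs(ua) > eps
        # Largest step sizes in each direction that keep z in the cube.
        up = np.min(np.where(ua[m] > 0, (1 - za[m]) / ua[m], -(1 + za[m]) / ua[m]))
        dn = np.min(np.where(ua[m] > 0, (1 + za[m]) / ua[m], -(1 - za[m]) / ua[m]))
        step = up if rng.random() < dn / (up + dn) else -dn  # mean-zero step
        z = np.clip(z + step * u, -1.0, 1.0)
    return np.where(z > 0, 1, -1)

assignments = gsw_design(np.random.default_rng(1).normal(size=(12, 3)), phi=0.5)
print(assignments)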


The Effects of Firms’ Lobbying on Resource Misallocation

Wednesday, Feb. 12, 2020 | noon – 1:30 pm
In Song Kim (MIT)

Abstract

We study the causal effect of firms’ lobbying activities on the misallocation of resources through the distortion of firm size. To address the endogeneity between firms’ lobbying expenditures and their size, we propose a new instrument. Specifically, we measure firms’ political connections based on the geographic proximity between their headquarters locations and politicians’ districts in the U.S., and trace the value of these networks over time by exploiting politicians’ assignment to congressional committees. We find that a 10 percent increase in lobbying expenditure leads to a 3 percent gain in revenue. To investigate the macroeconomic consequences of these effects, we develop a heterogeneous-firm model with endogenous lobbying. Using a novel dataset that we construct, we document new stylized facts about lobbying behavior and use them, together with the instrument, to estimate the model. Our counterfactual analysis shows that the return to firms’ lobbying activities comes at the cost of a 22 percent decrease in aggregate productivity in the U.S.
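The identification logic can be illustrated with a minimal two-stage least squares sketch in Python. All variable names and the simulated data below are hypothetical placeholders for the authors' lobbying and committee-assignment measures.

```python
# Minimal 2SLS sketch of the instrumental-variable strategy described
# above. Variable names are hypothetical placeholders; the real analysis
# uses the authors' lobbying and congressional-committee data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
proximity_value = rng.normal(size=n)   # instrument: value of the firm's
                                       # geographic political network
u = rng.normal(size=n)                 # unobserved firm quality (confounder)
log_lobbying = 0.8 * proximity_value + u + rng.normal(size=n)  # endogenous
log_revenue = 0.3 * log_lobbying + 2.0 * u + rng.normal(size=n)

# Stage 1: project the endogenous regressor onto the instrument.
stage1 = sm.OLS(log_lobbying, sm.add_constant(proximity_value)).fit()

# Stage 2: regress the outcome on the first-stage fitted values.
stage2 = sm.OLS(log_revenue, sm.add_constant(stage1.fittedvalues)).fit()
print(stage2.params)  # slope ~ 0.3: a 10% rise in lobbying -> ~3% revenue
```

In practice one would use a packaged IV estimator rather than two manual OLS passes, since the second-stage standard errors above are not valid as printed.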

The Blessings of Multiple Causes

Friday, Feb. 7, 2020 | 10:00 – 11:00 am
David Blei (Columbia University)
Note: Presentation in 340 West Hall | Jointly sponsored by ISQM and the U-M Statistics Department

Abstract

Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods require that we observe all confounders, variables that affect both the causal variables and the outcome variables. But whether we have observed all confounders is a famously untestable assumption. We describe the deconfounder, a way to do causal inference with weaker assumptions than the classical methods require.

How does the deconfounder work? While traditional causal methods measure the effect of a single cause on an outcome, many modern scientific studies involve multiple causes, different variables whose effects are simultaneously of interest. The deconfounder uses the correlation among multiple causes as evidence for unobserved confounders, combining unsupervised machine learning and predictive model checking to perform causal inference. We demonstrate the deconfounder on real-world data and simulation studies, and describe the theoretical requirements for the deconfounder to provide unbiased causal estimates.
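A minimal sketch of that workflow in Python, assuming a simple factor-analysis model of the causes (the paper uses richer factor models and formal predictive checks, which are elided here):

```python
# Toy sketch of the deconfounder workflow: fit a factor model to the
# multiple causes, treat the inferred factors as a substitute confounder,
# and adjust for it in the outcome model. A real analysis would also run
# the paper's predictive model check before trusting the factor model.
import numpy as np
from sklearn.decomposition import FactorAnalysis
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_causes = 2_000, 10
z = rng.normal(size=(n, 1))                   # unobserved confounder
causes = z @ rng.normal(size=(1, n_causes)) + rng.normal(size=(n, n_causes))
y = causes @ np.full(n_causes, 0.5) + 3.0 * z[:, 0] + rng.normal(size=n)

# Step 1: factor model over the causes yields a substitute confounder.
z_hat = FactorAnalysis(n_components=1, random_state=0).fit_transform(causes)

# Step 2: outcome model adjusting for the substitute confounder.
naive = sm.OLS(y, sm.add_constant(causes)).fit()
deconf = sm.OLS(y, sm.add_constant(np.hstack([causes, z_hat]))).fit()
print("naive cause effects:       ", naive.params[1:].round(2))
print("deconfounded cause effects:", deconf.params[1:n_causes + 1].round(2))
```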

Paper available at: https://www.tandfonline.com/doi/full/10.1080/01621459.2019.1686987

Discovery of Influential Text in Experiments Using Deep Neural Networks

Wednesday, Jan. 22, 2020 | noon – 1:30 pm
Molly Roberts (UCSD)

Abstract

We propose a method for discovering and testing influential concepts in media experiments. We apply a network with a recurrent and a convolutional layer to experiments where unstructured text is given as treatment. Following existing models, our RCNN includes an intermediate layer specifically designed to make neural network classifications more interpretable. However, we use this layer for a different purpose: to identify coherent phrases and sentences that are highly predictive of a human decision. We develop methods to interpret the filter layers of these RCNNs to facilitate the discovery of concepts within the text that are likely to have had the largest impact on the outcome. We validate our method by replicating a conjoint experiment on immigration using unstructured text in place of conjoint treatments. In addition, we apply the method to climate change communication, where we discover which phrases exert the most influence on learning and forming opinions about climate change. Last, we use our model to discover phrases that are predictive of censorship of Chinese social media posts.
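A minimal PyTorch sketch of this style of architecture: a convolutional layer over token embeddings whose filter activations can later be inspected for influential n-grams, feeding a recurrent layer and a classifier. Layer sizes and the inspection step are illustrative assumptions, not the authors' exact model.

```python
# Toy sketch of a recurrent-convolutional text model whose convolutional
# filters are interpretable: the n-grams that maximally activate a filter
# are candidates for influential phrases. Sizes are illustrative only.
import torch
import torch.nn as nn

class RCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb=64, n_filters=32, width=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, n_filters, kernel_size=width)
        self.gru = nn.GRU(n_filters, 64, batch_first=True)
        self.out = nn.Linear(64, 1)            # predicted human decision

    def forward(self, tokens):                 # tokens: (batch, seq)
        x = self.embed(tokens).transpose(1, 2)    # (batch, emb, seq)
        acts = torch.relu(self.conv(x))           # (batch, filters, seq-w+1)
        h, _ = self.gru(acts.transpose(1, 2))
        return self.out(h[:, -1]), acts           # keep activations for
                                                  # interpretation

model = RCNN()
tokens = torch.randint(0, 5000, (8, 40))       # a fake mini-batch
pred, acts = model(tokens)
# Interpretation step: for each filter, the window position with the
# highest activation marks a width-5 token span that is a candidate
# influential phrase.
top_pos = acts.argmax(dim=2)                   # (batch, filters)
print(pred.shape, top_pos.shape)
```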

The Emergence of Stable Political Choices from Incomplete Political Preferences

Tuesday, Jan. 14, 2020 | 2:00 – 3:30 pm
Ben Lauderdale (University College London)

Abstract

Survey research finds that citizens often give temporally unstable responses when asked their positions on policy issues, indicating a lack of ‘real’ attitudes on many issues. For some, this casts doubt on prominent conceptions of democracy that involve citizens making political choices based on policy considerations. In this paper, we show that despite average instability in issue opinions, voters can nevertheless make meaningful, stable multidimensional political choices based on issue considerations. We draw on a new three-wave survey of the UK public that includes repeated measurements of issue-specific opinions and of the political choices respondents make when confronted with hypothetical candidates taking positions on those issues. We show that candidate choices made after 6 months and 12 months have nearly as strong relationships to self-reported issue positions as do the candidate choices made in the same wave as those self-reports, and that choice stability is high when respondents choose between candidates who take clear and contrasting positions on the issues that respondents tend to care more about. Our findings demonstrate the mechanics underlying long-hypothesized theories of ‘issue publics’: stable political choices can arise from individuals making choices on the basis of the issues that they care about, even when most people lack real attitudes on many issues.

Conversational behaviors and their consequences

Tuesday, Nov. 19, 2019 | 2:00 – 3:30 pm
Justine Zhang (Cornell)

Abstract

Social interactions are enriched and shaped by conversations. Over the course of a conversation, people exhibit a rich array of linguistic behaviors as they engage with one another. These behaviors, in turn, could point to consequential outcomes in domains such as online discussions and mental health counseling.

In this talk, I will describe two complementary efforts in studying conversations, aimed at capturing their internal richness and their downstream effects. First, I will present an unsupervised methodology for discovering the various rhetorical intentions of participants in a conversation. I will illustrate how this methodology can be used to analyze British parliamentary question periods and to forecast the emergence of antisocial behavior in Wikipedia editing discussions.

Next, I will consider the problem of drawing causal links between conversational behaviors and outcomes. I will use arguments from causal inference to formalize and address key challenges that arise, and illustrate these ideas in the setting of crisis counseling conversations.

This talk includes joint work with Cristian Danescu-Niculescu-Mizil, Arthur Spirling, and Sendhil Mullainathan.

Rehabilitating the Regression: Honest and Valid Causal Inference through Machine Learning

Tuesday, Nov. 5, 2019 | 2:00 – 3:30 pm
Marc Ratkovic (Princeton)

Abstract

Linear regression suffers from several well-known flaws. First, specification choices by the researcher can affect inference. Second, regression is primarily a correlative tool and generally does not estimate an average causal effect. We introduce a method that overcomes both shortcomings. First, the method combines a machine learning approach to control for background covariates with a regression to estimate the coefficient on the treatment variable of theoretical interest. Second, the method models the treatment variable as well as the outcome, allowing for the recovery of a causal effect. The method applies regardless of whether the treatment variable is binary, continuous, or a count. We prove that the method’s estimate is consistent for the average causal effect and that its standard errors are asymptotically valid and semiparametrically efficient. A simulation study and an application to real-world datasets illustrate the method’s utility.
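This recipe, modeling both the treatment and the outcome with machine learning and then regressing residual on residual, is in the spirit of partialling-out / double machine learning estimators. Here is a minimal Python sketch under that reading; it is not necessarily the speaker's exact estimator.

```python
# Sketch of a residual-on-residual estimator in the spirit described
# above: ML models absorb the covariates' influence on outcome and
# treatment, and a final regression on the treatment residuals recovers
# the average causal effect. Generic double-ML recipe, illustrative only.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 3_000
X = rng.normal(size=(n, 5))                               # covariates
d = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(size=n)   # treatment
y = 2.0 * d + np.cos(X[:, 2]) + rng.normal(size=n)        # outcome

# Cross-fitted ML predictions keep the residuals "honest".
d_hat = cross_val_predict(GradientBoostingRegressor(), X, d, cv=5)
y_hat = cross_val_predict(GradientBoostingRegressor(), X, y, cv=5)

# Final stage: OLS of outcome residuals on treatment residuals.
fit = sm.OLS(y - y_hat, sm.add_constant(d - d_hat)).fit()
print(fit.params[1])   # close to the true effect of 2.0
```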

Tuesday, Oct. 22, 2019 | 2:00 – 3:30 pm
Jake Bowers (Illinois)