Query Result Set
MONTGOMERY, JACOB M (2) answer(s).
 
1
ID: 116463
Ensemble predictions of the 2012 US presidential election / Montgomery, Jacob M; Hollenbach, Florian M; Ward, Michael D   Journal Article
Publication: 2012.
Summary/Abstract: For more than two decades, political scientists have created statistical models aimed at generating out-of-sample predictions of presidential elections. In 2004 and 2008, PS: Political Science and Politics published symposia of the various forecasting models prior to Election Day. This exercise serves to validate models based on accuracy by garnering additional support for those that most accurately foretell the ultimate election outcome. Implicitly, these symposia assert that accurate models best capture the essential contexts and determinants of elections. In part, therefore, this exercise aims to develop the "best" model of the underlying data generating process. Scholars comparatively evaluate their models by setting their predictions against electoral results while also giving some attention to the models' inherent plausibility, parsimony, and beauty.
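
As a rough illustration of what an ensemble prediction involves, the sketch below combines several invented component forecasts of the two-party vote share into a single weighted average, with weights standing in for each model's past out-of-sample accuracy. Every name and number is an assumption made for this example; the article's own ensemble method is more involved and is not reproduced here.

    # Hypothetical illustration: combine several out-of-sample forecasts of the
    # Democratic two-party vote share into one ensemble prediction.  The component
    # forecasts and weights are invented for the example, not taken from the article.

    component_forecasts = {        # predicted two-party vote share (invented numbers)
        "economic_model": 0.507,
        "polls_model": 0.516,
        "incumbency_model": 0.498,
    }

    # Weights standing in for each model's past out-of-sample accuracy (also invented);
    # they sum to one so the ensemble is a proper weighted average.
    weights = {
        "economic_model": 0.25,
        "polls_model": 0.45,
        "incumbency_model": 0.30,
    }

    ensemble = sum(weights[m] * component_forecasts[m] for m in component_forecasts)
    print(f"Ensemble forecast of two-party vote share: {ensemble:.3f}")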
2
ID: 156707
Pairwise comparison framework for fast, flexible, and reliable human coding of political texts / Carlson, David; Montgomery, Jacob M   Journal Article
Summary/Abstract: Scholars are increasingly utilizing online workforces to encode latent political concepts embedded in written or spoken records. In this letter, we build on past efforts by developing and validating a crowdsourced pairwise comparison framework for encoding political texts that combines the human ability to understand natural language with the ability of computers to aggregate data into reliable measures while ameliorating concerns about the biases and unreliability of non-expert human coders. We validate the method with advertisements for U.S. Senate candidates and with State Department reports on human rights. The framework we present is very general, and we provide free software to help applied researchers interact easily with online workforces to extract meaningful measures from texts.
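
As context for the framework described in the record above, the sketch below shows one standard way to aggregate pairwise comparisons into a latent scale: a simple Bradley-Terry estimator fit by iterative updates to toy comparison data. The data, the estimator, and every identifier in the code are illustrative assumptions, not the authors' model or their free software.

    # Hypothetical illustration: turn pairwise judgments of texts ("which of these
    # two documents rates higher on the latent concept?") into a one-dimensional
    # scale using a simple Bradley-Terry fit.  Data and estimator are stand-ins.
    import math

    # Each tuple (i, j) records one judgment that text i rated higher than text j
    # (toy data; every text wins and loses at least once so the estimates are finite).
    comparisons = [(0, 1), (0, 2), (1, 0), (1, 2), (2, 1), (0, 1)]
    n_texts = 3

    # Iterative (minorization-maximization) update for Bradley-Terry ability scores.
    scores = [1.0] * n_texts
    for _ in range(200):
        wins = [0] * n_texts
        denom = [0.0] * n_texts
        for i, j in comparisons:
            wins[i] += 1
            denom[i] += 1.0 / (scores[i] + scores[j])
            denom[j] += 1.0 / (scores[i] + scores[j])
        scores = [wins[k] / denom[k] if denom[k] > 0 else scores[k] for k in range(n_texts)]
        total = sum(scores)
        scores = [s / total for s in scores]   # normalize for identifiability

    # Report each text's estimated position on the latent scale (log-ability).
    for k, s in enumerate(scores):
        print(f"text {k}: score = {math.log(s):.2f}")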