Item Details
Journal Article
ID: 193670
Title Proper: Embedding Regression
Other Title Information: Models for Context-Specific Description and Inference
Language: ENG
Author: RODRIGUEZ, PEDRO L
Summary / Abstract (Note): Social scientists commonly seek to make statements about how word use varies over circumstances—including time, partisan identity, or some other document-level covariate. For example, researchers might wish to know how Republicans and Democrats diverge in their understanding of the term “immigration.” Building on the success of pretrained language models, we introduce the à la carte on text (conText) embedding regression model for this purpose. This fast and simple method produces valid vector representations of how words are used—and thus what words “mean”—in different contexts. We show that it outperforms slower, more complicated alternatives and works well even with very few documents. The model also allows for hypothesis testing and statements about statistical significance. We demonstrate that it can be used for a broad range of important tasks, including understanding US polarization, historical legislative development, and sentiment detection. We provide open-source software for fitting the model.
In (Analytical Note): American Political Science Review Vol. 117, No. 4; Nov 2023: p. 1255-1274
Journal Source: American Political Science Review Vol. 117, No. 4
Key Words: Embedding Regression; Models for Context-Specific Description and Inference
Media / Other Links: Full Text
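
Illustrative note: the abstract above names the à la carte on text (conText) embedding regression model. The short Python sketch below is only a rough, hypothetical illustration of that general idea, not the authors' open-source software. It assumes the usual à la carte recipe: represent each instance of a target word by averaging pretrained embeddings of its context words, apply a linear transform A, then regress those instance-level embeddings on a document covariate. The toy vocabulary, the identity matrix standing in for A, the covariate coding, and the helper alc_embedding are all made-up stand-ins.

# Hypothetical sketch of the a la carte embedding-regression idea (not the conText package).
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: pretrained 50-d word vectors and a learned transform matrix A.
vocab = ["immigration", "border", "jobs", "asylum", "security", "the", "of"]
emb = {w: rng.normal(size=50) for w in vocab}   # placeholder for GloVe-style vectors
A = np.eye(50)                                  # placeholder for the learned transform

# Toy documents mentioning "immigration", each tagged with a binary covariate
# (e.g., 1 = Republican speaker, 0 = Democratic speaker).
docs = [
    (["border", "security", "immigration", "the"], 1),
    (["security", "border", "immigration", "of"], 1),
    (["asylum", "jobs", "immigration", "of"], 0),
]

def alc_embedding(context_words):
    """Average the context word vectors, then apply the transform A."""
    return A @ np.mean([emb[w] for w in context_words], axis=0)

# One embedding per instance of the target word, plus a design matrix
# with an intercept and the covariate.
X, Y = [], []
for words, covariate in docs:
    context = [w for w in words if w != "immigration"]
    Y.append(alc_embedding(context))
    X.append([1.0, float(covariate)])

X, Y = np.array(X), np.array(Y)

# Multivariate least squares: each row of beta is a vector-valued coefficient.
# The norm of the covariate row summarizes how much usage of "immigration"
# shifts with the covariate; this toy sketch omits the significance testing
# described in the abstract.
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("norm of covariate coefficient:", np.linalg.norm(beta[1]))

On real data the embeddings, the transform, and the hypothesis tests would come from a fitted model rather than these placeholders.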