1. ID: 088555
Publication: Princeton, Princeton University Press, 1994.
Description: 247p.
Standard Number: 9780691034713
Copies: C:1/I:1, R:0, Q:0
Circulation:
Accession# | Call# | Current Location | Status | Policy | Location | IssuedTo | DueOn
054229 | 300.72/KIN 054229 | Main | Issued | General | | RA71 | 23-Jun-2024

2. ID: 121124
Publication: 2013.
Summary/Abstract: We offer the first large-scale, multiple-source analysis of the outcome of what may be the most extensive effort to selectively censor human expression ever implemented. To do this, we have devised a system to locate, download, and analyze the content of millions of social media posts originating from nearly 1,400 different social media services all over China before the Chinese government is able to find, evaluate, and censor (i.e., remove from the Internet) the subset they deem objectionable. Using modern computer-assisted text analytic methods that we adapt to and validate in the Chinese language, we compare the substantive content of posts censored to those not censored over time in each of 85 topic areas. Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, we show that the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content. Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future and, as such, seems to clearly expose government intent.

3. ID: 126355
Publication: 2013.
Summary/Abstract: We marshal discoveries about human behavior and learning from social science research and show how these can be used to improve teaching and learning. The discoveries are easily stated as three social science generalizations: (1) social connections motivate, (2) teaching teaches the teacher, and (3) instant feedback improves learning. We show how to apply these generalizations via innovations in modern information technology inside, outside, and across university classrooms. We also give concrete examples of these ideas from innovations we have experimented with in our own teaching.

4. ID: 175298
Summary/Abstract: The mission of the social sciences is to understand and ameliorate society's greatest challenges. The data held by private companies, collected for different purposes, hold vast potential to further this mission. Yet, because of consumer privacy, trade secrets, proprietary content, and political sensitivities, these datasets are often inaccessible to scholars. We propose a novel organizational model to address these problems. We also report on the first partnership under this model, to study the incendiary issues surrounding the impact of social media on elections and democracy: Facebook provides (privacy-preserving) data access; eight ideologically and substantively diverse charitable foundations provide initial funding; an organization of academics we created, Social Science One, leads the project; and the Institute for Quantitative Social Science at Harvard and the Social Science Research Council provide logistical help.

5. ID: 131555
Publication: 2014.
Summary/Abstract: The social sciences are undergoing a dramatic transformation from studying problems to solving them; from making do with a small number of sparse data sets to analyzing increasing quantities of diverse, highly informative data; from isolated scholars toiling away on their own to larger scale, collaborative, interdisciplinary, lab-style research teams; and from a purely academic pursuit focused inward to having a major impact on public policy, commerce and industry, other academic fields, and some of the major problems that affect individuals and societies. In the midst of all this productive chaos, we have been building the Institute for Quantitative Social Science at Harvard, a new type of center intended to help foster and respond to these broader developments. We offer here some suggestions from our experiences for the increasing number of other universities that have begun to build similar institutions and for how we might work together to advance social science more generally.

6. ID: 171341
Summary/Abstract: We clarify the theoretical foundations of partisan fairness standards for district-based democratic electoral systems, including essential assumptions and definitions not previously recognized, formalized, or in some cases even discussed. We also offer extensive empirical evidence for assumptions with observable implications. We cover partisan symmetry, the most commonly accepted fairness standard, and other perspectives. Throughout, we follow a fundamental principle of statistical inference too often ignored in this literature: defining the quantity of interest separately so its measures can be proven wrong, evaluated, and improved. This enables us to prove which of the many newly proposed fairness measures are statistically appropriate and which are biased, limited, or not measures of the theoretical quantity they seek to estimate at all. Because real-world redistricting and gerrymandering involve complicated politics with numerous participants and conflicting goals, measures biased for partisan fairness sometimes still provide useful descriptions of other aspects of electoral systems.

7. ID: 076515
Publication: 2007.
Summary/Abstract: Inferences about counterfactuals are essential for prediction, answering "what if" questions, and estimating causal effects. However, when the counterfactuals posed are too far from the data at hand, conclusions drawn from well-specified statistical analyses become based on speculation and convenient but indefensible model assumptions rather than empirical evidence. Unfortunately, standard statistical approaches assume the veracity of the model rather than revealing the degree of model-dependence, so this problem can be hard to detect. We develop easy-to-apply methods to evaluate counterfactuals that do not require sensitivity testing over specified classes of models. If an analysis fails the tests we offer, then we know that substantive results are sensitive to at least some modeling choices that are not based on empirical evidence. We use these methods to evaluate the extensive scholarly literatures on the effects of changes in the degree of democracy in a country (on any dependent variable) and separate analyses of the effects of UN peacebuilding efforts. We find evidence that many scholars are inadvertently drawing conclusions based more on modeling hypotheses than on evidence in the data. For some research questions, history contains insufficient information to be our guide. Free software that accompanies this paper implements all our suggestions.
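One common diagnostic for "counterfactuals too far from the data" is a convex-hull membership check: a counterfactual covariate profile inside the convex hull of the observed data is an interpolation, while one outside it is an extrapolation that forces greater model dependence. Below is a minimal illustrative sketch of that idea (not the paper's accompanying software); the function name and toy data are invented for this example, and the hull test is posed as a linear-programming feasibility problem.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(point, data):
    """True if `point` lies in the convex hull of the rows of `data`.

    Feasibility LP: find weights w >= 0 with sum(w) == 1 and
    data.T @ w == point. If such weights exist, the point is a
    convex combination of observed rows (an interpolation).
    """
    n = data.shape[0]
    # Stack the convex-combination constraints with the sum-to-one row.
    A_eq = np.vstack([data.T, np.ones(n)])
    b_eq = np.append(np.asarray(point, dtype=float), 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.success

# Toy observed covariates: the corners of the unit square.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(in_convex_hull([0.5, 0.5], X))  # interpolation: inside the hull
print(in_convex_hull([2.0, 2.0], X))  # extrapolation: outside the hull
```

The feasibility formulation scales to moderate dimensions where explicit hull construction becomes impractical, which is why it is a standard way to pose this test.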