Testing the Validity of the Crosswise Model: A Study on Attitudes Towards Muslims

David Johann, German Centre for Higher Education Research and Science Studies, Berlin
Kathrin Thomas, City, University of London

This paper investigates the concurrent validity of the Crosswise Model for “high-incidence behaviour” by examining respondents’ self-reported attitudes towards Muslims. We assess concurrent validity by comparing the performance of the Crosswise Model with a Direct Question format. The Crosswise Model was designed to ensure anonymity and confidentiality, thereby reducing the Social Desirability Bias that arises from survey respondents’ tendency to present themselves in a favourable light. The article suggests that measures obtained using either question format are fairly similar. However, when estimating models and comparing the impact of common predictors of negative attitudes …
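For readers unfamiliar with the design, the standard Crosswise Model pairs the sensitive question with a non-sensitive question of known prevalence p (e.g., “Is your mother’s birthday in January, February, or March?”) and asks only whether the two answers are the same or different, so no individual answer is revealed. Below is a minimal sketch of the usual moment estimator for the sensitive-trait prevalence; the function name and the example figures are illustrative, not taken from the paper above.

```python
def crosswise_estimate(n_same, n_total, p):
    """Estimate sensitive-trait prevalence under the Crosswise Model.

    n_same  -- respondents reporting 'both answers the same'
    n_total -- total respondents
    p       -- known prevalence of the non-sensitive item (must not be 0.5)

    Returns (prevalence estimate, its estimated variance).
    """
    if not 0.0 < p < 1.0 or p == 0.5:
        raise ValueError("p must lie in (0, 1) and differ from 0.5")
    # Observed share answering 'same': lambda = pi*p + (1 - pi)*(1 - p)
    lam = n_same / n_total
    # Solving for pi gives the moment estimator below
    pi_hat = (lam + p - 1.0) / (2.0 * p - 1.0)
    # Binomial variance of lam, scaled by the design factor (2p - 1)^2
    var_hat = lam * (1.0 - lam) / (n_total * (2.0 * p - 1.0) ** 2)
    return pi_hat, var_hat


# Illustrative numbers only: 400 of 1,000 respondents answer 'same'
# with a non-sensitive item of known prevalence p = 0.25.
pi_hat, var_hat = crosswise_estimate(400, 1000, 0.25)
```

Note the precision trade-off visible in the variance term: the closer p is to 0.5, the stronger the privacy protection but the larger the variance of the estimate, which is one reason Crosswise estimates can diverge from Direct Question results even without Social Desirability Bias.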


Nonsampling errors and their implication for estimates of current cancer treatment using the Medical Expenditure Panel Survey

Jeffrey M. Gonzalez, PhD, Office of Survey Methods Research, U.S. Bureau of Labor Statistics
Lisa B. Mirel, MS, Office of Analysis and Epidemiology, National Center for Health Statistics, Centers for Disease Control and Prevention
Nina Verevkina, PhD, Department of Health Policy & Administration, The Pennsylvania State University

Survey nonsampling errors refer to the components of total survey error (TSE) that result from failures in data collection and processing procedures. Evaluating nonsampling errors can lead to a better understanding of their sources, which, in turn, can inform survey inference and assist in the design of future surveys. Data collected via supplemental questionnaires can support such evaluations because they may provide additional information on survey nonrespondents and/or repeated measurements of the same concept on the same sampling unit. We used a supplemental questionnaire administered to cancer survivors to explore potential nonsampling errors, focusing …


Question Order Experiments in the German-European Context

Henning Silber, GESIS - Leibniz-Institute for the Social Sciences, Germany
Jan Karem Höhne, University of Göttingen, Germany
Stephan Schlosser, University of Göttingen, Germany

In this paper, we investigate the context stability of questions on political issues in cross-national surveys. For this purpose, we conducted three replication studies (N1 = 213; N2 = 677; N3 = 1,489) based on eight split-ballot design experiments with undergraduate and graduate students to test for question order effects. The questions, which were taken from the Eurobarometer (2013), concerned perceived performance and identification. Respondents were randomly assigned to one of two experimental groups, which received the questions in either the original or the reversed order. In all three studies, respondents answered the questions about Germany and the …


The effect of interviewers’ motivation and attitudes on respondents’ consent to contact secondary respondents in a multi-actor design

Jette Schröder, GESIS – Leibniz Institute for the Social Sciences, Germany
Claudia Schmiedeberg, University of Munich (LMU), Germany
Laura Castiglioni, University of Munich (LMU), Germany

In surveys using a multi-actor design, data are collected not only from sampled ‘primary’ respondents, but also from related persons such as partners, colleagues, or friends. For this purpose, primary respondents are asked for their consent to survey such ‘secondary’ respondents. The existence of interviewer effects on unit nonresponse of sampled respondents is well documented, and research increasingly focuses on interviewer attributes in the nonresponse process. However, research on interviewer effects on unit nonresponse of secondary respondents, more specifically on primary respondents’ consent to the inclusion of secondary respondents in the survey, is sparse. We use the German Family Panel (pairfam) …


A Case Study of Error in Survey Reports of Move Month Using the U.S. Postal Service Change of Address Records

Mary H. Mulry, U.S. Census Bureau, Washington, DC
Elizabeth M. Nichols, U.S. Census Bureau, Washington, DC
Jennifer Hunter Childs, U.S. Census Bureau, Washington, DC

Correctly recalling where someone lived as of a particular date is critical to the accuracy of the once-a-decade U.S. decennial census. The data collection period for the 2010 Census spanned several months, from February to August, with some evaluation operations occurring up to 7 months after that. The assumption was that respondents could accurately remember moves and move dates on and around April 1st up to 11 months afterwards. We show how statistical analyses can be used to investigate the validity of this assumption by comparing self-reports and proxy-reports of the month of a move in …


Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution 4.0 International License.