The article analysis is a “dissection” of an empirical study. You should identify all the important design elements as well as relate the author’s findings to relevant theories or other studies. To help you read the article, you can use the “How to Read a Journal” and/or the “Critiquing Research” handout. As a summary of the article and a way to facilitate discussing the article in class, you will need to answer the following questions:

NOTES: If the article reports on multiple experiments, focus your answers to the following questions on the last one. Pay close attention to any figures and tables.

1) Identify the following:

a) Independent variable(s): Specify their names in APA format (i.e., capitalized) and the specific levels of each. (There could be one or more independent variables.)

b) Dependent variable(s): Specify in a name or short phrase and note how it was measured. If there are more than two, select the two most important.

c) What was the primary experimental hypothesis?

d) Important concepts: State the main ideas and concepts underlying the research. Modern articles show these in the form of keywords. If it is an older article, you will need to find this information in the abstract and/or Introduction section. If you are stuck, look up the article in PsycINFO and see how it was cataloged via the keywords.

2) Identify at least one operational definition: Just as with the Operational Definitions homework assignment, pick either an IV or a DV and specify how it was manipulated or measured.

3) What are important theories/empirical findings related to the study?

a) Theories: What theory or theories informed the work?

b) Empirical findings: Most of these will be found in the Results section. State the main findings in your own words.

4) Specify the experimental design in the same way that you previously did in the Identifying Variables and Experimental Design homework. Use these templates:

a) For one IV: “This was a [within-subjects; between groups] ___________ experiment with IV Name (Level 1, Level 2, etc.) with _____________ as the primary dependent variable.” [Choose one of the bracketed options.]

b) For multiple IVs: “This was a 2 (Variable 1 Name: Level 1, Level 2…) x 3 (Variable 2 Name: Level 1, Level 2, Level 3…) [within-subjects; between groups] ___________ experiment with _____________ as the primary dependent variable.” THERE CAN BE MORE THAN ONE INDEPENDENT VARIABLE WITH DIFFERENT LEVELS.
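As an illustration of what the multi-IV template describes, the conditions of a factorial design are simply the crossing of each IV's levels. Here is a minimal sketch using the 2 x 3 design from the attached Landy and Sigall article (essay quality crossed with writer attractiveness); the level labels are taken from that article:

```python
from itertools import product

# 2 x 3 factorial design: every level of one IV is crossed with every
# level of the other, so the number of conditions is 2 * 3 = 6.
essay_quality = ["good", "poor"]                             # IV 1: 2 levels
attractiveness = ["attractive", "control", "unattractive"]   # IV 2: 3 levels

conditions = list(product(essay_quality, attractiveness))
for quality, looks in conditions:
    print(f"Essay Quality = {quality}, Writer Attractiveness = {looks}")
print(len(conditions))  # 6
```

Counting the conditions this way is also a quick sanity check on a design statement: a "2 x 3" design must yield exactly six cells.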

5) Identify at least one control variable or condition: These are factors that the experimenter held constant but that are not independent variables. For instance, an experimenter might have used only males, or only students within a certain age range; controlled the ambient level of noise, light, etc.; or used a certain instrument calibrated a certain way. They may have used remote recording devices that turned on and off for specified periods of time at random intervals, recorded only those responses that occurred within a pre-defined time frame, or kept data only from participants who responded above a certain threshold of accuracy.

6) Identify a confounding variable (if any): Confounds are difficult to identify because they are, almost by definition, hidden, in the sense that they are either directly connected to, or correlated with, an independent variable. The classic example is the highly reliable correlation between ice cream sales and murder rates. These are truly correlated, but the ice cream sales variable is confounded with temperature. Hot weather makes people cranky and contributes to higher murder rates, but if you did not recognize this, you might be tempted to infer that there is an actual connection between ice cream sales and homicides. This is fairly common with correlational studies; in an experiment, confounds can be harder to find, because experiments are more controlled and take place in specified settings.

7) Address how the confound above could be corrected.

8) What statistical analyses were conducted? Check Results for this!

9) Did the results support the hypothesis?

10) What critique did the author(s) offer of their own study, or how did they suggest the study could be extended?

‘Beauty Is Talent: Task Evaluation as a Function of the Performer’s Physical Attractiveness.’ Article in Journal of Personality and Social Psychology, February 1974. DOI: 10.1037/h0036018
Journal of Personality and Social Psychology
1974, Vol. 29, No. 3, 299-304
BEAUTY IS TALENT:
TASK EVALUATION AS A FUNCTION OF THE PERFORMER’S
PHYSICAL ATTRACTIVENESS
DAVID LANDY¹ AND HAROLD SIGALL²
University of Rochester
Male college subjects read an essay that supposedly had been written by a
college freshman co-ed. They then evaluated the quality of the essay and the
ability of its writer on several dimensions. By means of a photo attached to
the essay, one third of the subjects were led to believe that the writer was
physically attractive and one third that she was unattractive. The remaining
subjects read the essay without any information about the writer’s appearance. In addition, one half of the subjects read a version of the essay that
was well written while the other subjects read a version that was poorly
written. Significant main effects for essay quality and writer attractiveness were
predicted and obtained. The subjects who read the good essay evaluated the
writer and her work more favorably than the subjects who read the poor
essay. The subjects also evaluated the writer and her work most favorably
when she was attractive, least when she was unattractive, and intermediately
when her appearance was unknown. The impact of the writer’s attractiveness
on the evaluation of her and her work was most pronounced when the “objective” quality of her work was relatively poor.
There is an increasing amount of research
data attesting to the relative importance of
physical attractiveness as a determinant and
moderator of a wide variety of interpersonal
processes: heterosexual liking (Berscheid,
Dion, Walster, & Walster, 1971; Walster,
Aronson, Abrahams, & Rottman, 1966),
person perception (Sigall & Landy, 1973),
persuasion (Mills & Aronson, 1965), peer
popularity in young children (Dion & Berscheid, 1972), reactions to evaluations (Sigall
& Aronson, 1969; Sigall, Page, & Brown,
1971), attributions of personal characteristics
and future success (Clifford & Walster, in
press; Dion, Berscheid, & Walster, 1972;
Miller, 1970), and adult judgments of children’s transgressions (Dion, 1972).
Several of these experiments have demonstrated that individuals tend to form impressions and make judgments about people on
the basis of their physical attractiveness. For
example, Miller (1970) found that subjects
made more favorable attributions to good-
looking people than to unattractive people,
and Dion et al. (1972) showed that college
students of both sexes expected physically
attractive men and women to possess more
socially desirable traits (sensitivity, strength,
modesty, etc.) than unattractive people. In
addition, in Dion et al.’s study, students expected attractive people to have more good
things in store for them in the future—more
prestigious occupations and happier marriages—than unattractive people. Dion and
Berscheid (1972) found a similar tendency
in nursery school children in that attractive
children, as judged by adults, were more popular with their peers and were seen to manifest
less socially undesirable interpersonal behavior
than their unattractive counterparts.
Clifford and Walster (in press) demonstrated that teachers expected physically attractive children to have greater academic
potential and better social relationships with
their peers than unattractive children. In this
experiment, fifth-grade teachers examined a
standardized report card containing identical objective information about a fifth-grade pupil. The photograph of either an attractive or unattractive boy or girl was pasted in a corner of the report card. The teachers’ assessment of the pupil’s IQ, his parents’ attitudes toward school, his future level of educational attainment, and his relationships with his peers were higher or more favorable when the pupil was attractive than when he was unattractive.

¹ Presently unaffiliated.

² The authors would like to thank James Gallant for his assistance. Requests for reprints should be sent to Harold Sigall, who is now at the Department of Psychology, University of Maryland, College Park, Maryland 20742.
In the experiments described above, it was
shown that individuals attribute more positive
characteristics to and expect better performances from attractive people than unattractive
people. Do individuals evaluate the actual
performances of attractive people more positively than those of unattractive people?
There is some evidence relevant to this question. In a correlational study of manipulative
strategies, Singer (1964) examined the relationship between the physical attractiveness
of female college students, based on ratings
of photographs by faculty members, and the
students’ grade point average. A significant
positive correlation (r = .40) was reported for
(firstborn) females. However, given the possibility that there is a positive relationship between physical attractiveness and intelligence,
attractive students may not merely have been
the beneficiaries of perceptual bias. It is possible that attractive individuals are genetically
endowed with greater ability or that they have
environmental histories which produce greater
intellectual capabilities, and that therefore
they are better students.
Thus while there is experimental evidence
that individuals expect attractive people to
perform better, that is, have greater potential,
and correlational evidence that individuals
rate the work of attractive people more favorably, a direct causal relationship between
physical attractiveness and performance
evaluation has not yet been demonstrated.
The present study was designed to experimentally determine the effect of physical attractiveness on performance evaluation while
controlling the quality of the task being evaluated and the evaluator’s exposure to the performer. We expected that physical attractiveness would strongly influence the evaluation
of an individual’s performance on a given task
even though the task was logically unrelated
to appearance. The more attractive the performer, the more positive we expected the
subjects’ evaluations of her work to be. In
addition to the attractiveness of the performer, we manipulated the quality of the
work being evaluated. While we expected the
predicted relationship to be manifest for both
high- and low-quality work, we wanted to
explore the possibility that the impact of
physical attractiveness on performance evaluation would vary with the (“objective”)
quality of the performance.
METHOD
Design
The subjects read a short essay and then evaluated
its overall quality and gave their impressions of its
writer. One half of the subjects read an essay that
was well written while the other subjects read an
essay that was poorly written. In addition, one third
of the subjects were led to believe that the essay
had been written by a physically attractive college
co-ed, another third of the subjects were led to believe it had been written by an unattractive co-ed,
and the remaining third—the control subjects—read
either the well-written or the poorly written essay
without being exposed to the attractiveness manipulation. This procedure yielded a 2 X 3 factorial design with two levels of essay quality (good and
poor) and three levels of physical attractiveness
(attractive, unattractive, and control).
Subjects
The subjects were 60 male undergraduates at the
University of Rochester who were recruited from an
introductory psychology course.
Essay Quality
Two standard untitled essays about the role and
effects of television in society were prepared. Each
essay was about 700 words in length and discussed
similar issues, such as the effects of televised violence
on children. However, one essay was well written,
grammatically correct, organized, and clear in its presentation of ideas. The other essay was poorly written,
contained numerous cliches and errors in usage, was
disorganized, and simplistic in its presentation of
ideas. Pretesting with a sample of 30 undergraduates
demonstrated that the “good” essay was indeed
viewed as a better essay than the “poor” essay. One
half of the pretest sample read the good essay, and
the rest read the poor essay. They then rated the
essay’s “general quality” on a 9-point scale labeled
poor (1) and good (9) at the end points. The mean
rating given to the good essay was 6.47 (SD = 1.36),
while the mean rating given the poor essay was 4.40
(SD = 1.76). These means were significantly different (F = 10.40, df = 1/28, p < .005).

Physical Attractiveness of the Writer

Two college yearbook photographs of female students were selected for use in the experiment. One of the photographs was of a physically attractive woman, and one was of a physically unattractive woman. The selection of photographs was based on the judgments of six male graduate students who were asked to rank order, in terms of physical attractiveness, a set of 15 facial photos of women from a recent college yearbook. The photo that had the highest mean ranking and the photo that had the lowest mean ranking were selected. So that six subjects could be tested at once, two copies of each photo were mounted on separate file cards on which a fictitious name, Marilyn Thomas, was typed along with a bogus standard description of the girl pictured. This information was the same on each card and described the writer as a freshman college student from Ohio whose father was a businessman, whose mother was a housewife, and who had two brothers and a sister. Her hobbies were identified as horseback riding and reading. All of the cards, including two cards that did not have photos mounted on them, contained the same background information. Each of these cards was attached to a copy of the essays by means of a paper clip. This was done in such a way that in order for the essay to be read, the card would have to be moved, presumably resulting in the readers’ attending to the information provided, that is, the photograph (on those cards having one) and the standard background information about the writer.

Procedure

Two to six subjects were scheduled to report for the experiment during any given experimental session. When the subjects arrived at the waiting room, they were greeted by the experimenter who explained that the experiment dealt with “social judgment, that is, how people make assessments of others.”
He told the subjects that he wanted them each to read and judge one of a number of essays that had been submitted in a freshman English class. The experimenter further explained that the instructor of this class had asked his students to prepare as an exercise an essay for submission to a contest being run by a local television station. Thus, each essay was turned in to the instructor along with some descriptive background about its author. These essays were then used by the experimenter to conduct the research in social judgment. Following these introductory remarks, the experimenter ushered each subject into a separate cubicle where he randomly assigned him to one of the experimental conditions. He then handed each individual subject one of the sets of materials—a writer background information card attached to an essay—corresponding to the condition to which he had been assigned. There were 10 subjects in each of the six experimental conditions.

Dependent Measure

When each subject had finished reading the essay, the experimenter asked him to evaluate the essay by filling out a “judgment form.” This consisted of eight rating scales. The subjects evaluated the essay they had read on four dimensions by circling the appropriate number on each of the 9-point rating scales labeled at the end points, which followed each dimension—creativity: low (1)-high (9); ideas: dull (1)-interesting (9); style: poor (1)-good (9); and general quality: poor (1)-good (9). The subjects rated their impressions of the writer of the essay they had read on the following four dimensions—intelligence: low (1)-high (9); sensitivity: low (1)-high (9); talent: low (1)-high (9); and overall ability: low (1)-high (9). When each of the subjects had completed filling out the judgment form, the experimenter asked them to recongregate in the waiting room. He then interviewed them. After assuring himself that the subjects were not suspicious, he informed them about the true nature of the experiment.
RESULTS AND DISCUSSION

Two measures of performance evaluation were derived from the subjects’ responses on the judgment form. The first was simply their rating of the general quality of the essay as given by their reactions to a single rating scale. The second was an essay evaluation score consisting of the sum of the ratings of the essay on three dimensions: creativity, ideas, and style.³

Table 1 presents the means, standard deviations, and analysis of variance of ratings of the general quality of the essay. Table 2 presents the means, standard deviations, and analysis of variance of the essay evaluation scores. From an examination of these tables, it is clear that the less physically attractive the writer, the lower were the subjects’ evaluations of her work. The main effect for writer attractiveness was significant on both the general quality measure (F = 6.26, df = 2/54, p < .01) and the essay evaluation measure (F = 5.34, df = 2/54, p < .01). The highly significant main effect for essay quality on both measures of performance evaluation indicates that, as intended, one essay was perceived to be of higher quality than the other. This essentially constituted a check on the manipulation of essay quality. The overall (2 × 3) interaction between essay quality and writer attractiveness was not significant on either measure of performance evaluation.

³ The intercorrelations of the subjects’ ratings on these three dimensions ranged from .34 (creativity and style) to .65 (creativity and ideas). The mean ratings on these separate dimensions for subjects in each of the experimental conditions followed the same pattern as the means of the essay evaluation scores.
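The F ratios reported above can be reproduced in form with a short one-way ANOVA sketch in plain Python. The two groups of ratings below are invented placeholders (not the authors' data), so only the computation, not the resulting numbers, mirrors the article:

```python
# One-way ANOVA F for independent groups, as in the essay-quality pretest
# (two groups of 15, giving df = 1/28). Ratings are invented placeholders.
good_ratings = [7, 6, 8, 5, 7, 6, 7, 8, 6, 5, 7, 6, 8, 7, 6]
poor_ratings = [4, 5, 3, 6, 4, 5, 4, 3, 5, 6, 4, 5, 3, 4, 5]

def one_way_f(*groups):
    """F = MS_between / MS_within for k independent groups."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-groups: n_g * (group mean - grand mean)^2, df = k - 1
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    df_between = len(groups) - 1
    # Within-groups: squared deviations from each group's own mean, df = N - k
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f = one_way_f(good_ratings, poor_ratings)
print(round(f, 2))  # F with df = (1, 28) for these placeholder ratings
```

The same function handles the article's three-level attractiveness factor: passing three groups of 10 would yield an F with df = 2/27 for a one-way comparison (the article's 2/54 error df comes from the full two-factor design).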
TABLE 1
RATINGS OF THE GENERAL QUALITY OF THE ESSAY FOR SUBJECTS IN EACH OF THE EXPERIMENTAL CONDITIONS

Means and standard deviations (a)

                              Writer's physical attractiveness
Essay quality   Attractive           Control              Unattractive         Total
Good            M = 6.7, SD = 1.57   M = 6.6, SD = 1.35   M = 5.9, SD = 1.60   6.4
Poor            M = 5.2, SD = 1.55   M = 4.7, SD = 1.95   M = 2.7, SD = 1.34   4.2
Total           6.0                  5.5                  4.3

Analysis of variance
Source                      df   MS       F
Essay quality (A)           1    72.600   29.43**
Writer attractiveness (B)   2    15.450   6.26*
A x B                       2    3.950    1.60
Error                       54   2.467

(a) The higher the number, the better the quality; n = 10 in each condition.
* p < .01. ** p < .001.

TABLE 2
ESSAY EVALUATION SCORES FOR SUBJECTS IN EACH OF THE EXPERIMENTAL CONDITIONS

Means and standard deviations (a)

                              Writer's physical attractiveness
Essay quality   Attractive            Control               Unattractive          Total
Good            M = 17.9, SD = 4.82   M = 17.9, SD = 3.60   M = 15.5, SD = 4.70   17.1
Poor            M = 14.9, SD = 3.31   M = 13.4, SD = 5.99   M = 8.7, SD = 3.68    12.3
Total           16.4                  15.6                  12.1

Analysis of variance
Source                        df   MS   F
Essay quality (A)
Writer's attractiveness (B)
A x B
Error
[MS and F values for Table 2 truncated in this copy.]

Eerland et al.
Downloaded from pps.sagepub.com by guest on January 30, 2016

95% to participate. We needed 130 participants for each version of the experiment. After a first round of collecting data, we ended up with 131 participants for the perfective aspect condition and 132 for the imperfective aspect condition. Because we decided beforehand to exclude all non-native speakers of English, we excluded data from 1 participant in the perfective aspect condition and data from 7 participants in the imperfective aspect condition. Also, there were 4 participants who completed both versions of the experiment. For those participants, data of their second participation (based on the time log of their participation) were excluded. All these participants performed the perfective aspect version first. In total, we excluded data of 11 participants in the imperfective aspect condition. Then, we collected data from 9 additional participants in the imperfective aspect condition.
Among these additional participants were 2 subjects that had already performed the task. Data of these participants were excluded and we collected data from 2 additional subjects. We ended up with 130 participants in both conditions. The final sample included 75 males (28.85%) and 185 females (71.15%). Age ranged from 18 to 76 (M = 39.00, SD = 12.69).

Lab studies

Jack D. Arnal, McDaniel College
OSF: https://osf.io/gdbrf/

Participants were recruited from the approved departmental participant pool. Those interested signed up for the experiment through the department’s participant pool management software (Sona Systems) or via a Google Docs schedule. Instructions provided on both Sona and the Google Docs schedule invited participants to take part in a study about decision making. All other aspects of the study followed the prescribed protocol, including use of the provided Qualtrics script. Students who participated received partial course credit for their participation. The goal was to have a minimum of 30 participants per condition, with initial analyses provided to the overall lead investigator of the replication project by November 15. Although the preregistered plan was to stop data collection on November 1, data collection was not terminated until November 14 (because sign-ups were slower than expected), with a total of 73 participants. The data from 6 participants were excluded from analyses because the participants reported as non-native speakers of English. The resulting sample sizes were 35 for the perfective aspect condition and 32 for the imperfective aspect condition. Of the 67 individuals included in the analyses, 17 reported as male (25.37%) and 50 reported as female (74.63%). The ages of participants ranged from 18 to 36 (M = 19.69, SD = 2.82).

Stephanie A. Berger, College of Mount Saint Vincent
OSF: https://osf.io/bcdfm/

Students completed the study in our psychology lab and earned extra credit in psychology courses for participating.
A majority of our students are bilingual, so we excluded data from students whose second language was English and who estimated using English less than 90% of the time, based on a short survey. Their data were eliminated from the file based on the IP address of the lab computer and the date and time they completed the study. Our goal was to have 40 native or primary English speakers in each condition. After running the first 108 participants, only 48 (44%) met the language requirement (n = 23 perfective, n = 25 imperfective). We continued recruiting in multiples of 8, collecting data from a total of N = 164 students. Of the 164 total participants, 82 were eliminated because of the language requirement, 3 because of equipment problems, and 4 because they completed the study twice (data from their first time in the study were included in the analysis). The final sample included n = 35 in the perfective aspect condition (9% male; age: M = 19.26, SD = 1.34) and n = 40 in the imperfective aspect condition (15% male; age: M = 19.13, SD = 1.38). We did not meet our goal of 40 participants in each condition, but competition for our small subject pool prevented us from recruiting additional participants.

Angela R. Birt, Mount Saint Vincent University
Philip Aucoin, Mount Saint Vincent University
OSF: https://osf.io/gducj/

A total of 70 students from Mount Saint Vincent University in Halifax, Nova Scotia, participated in the study. They were recruited from undergraduate courses at the university, were tested in groups of 1–4 using Qualtrics software, and were paid in exchange for their participation. We excluded 4 participants from analyses: 3 were excluded because English was not their native language and 1 was excluded due to being younger than 18 years of age. Two participants were initially excluded due to what was originally considered as missing data, but they were reincluded as this was not the case.
This resulted in a total sample size of 66 participants who met the inclusion criteria: for the perfective condition, n = 33, 75.80% female, age: M = 20.06, SD = 2.05; for the imperfective condition, n = 33, 81.80% female, age: M = 21.30, SD = 5.06. Other than one of the primary student research assistants not participating in carrying out the RRR from the beginning, all procedures followed the approved protocol and did not deviate from our preregistered plan.

Anita Eerland, Utrecht University
Andrew M. Sherrill, Northern Illinois University
Joseph P. Magliano, Northern Illinois University
Rolf A. Zwaan, Erasmus University Rotterdam
OSF: https://osf.io/z7kfe/

Participants were 100- and 300-level undergraduate psychology students at Northern Illinois University (NIU). Participants were recruited through in-class announcements and an online bulletin board (Sona Systems). Each participant took approximately 10 min to complete all procedures and was compensated with course credit. The lab space included eight individual rooms, though no more than 5 participants completed the study at any given time. Each room had a Dell desktop on which study materials were administered via Qualtrics. Informed consent and debriefing were conducted with each participant individually. Participants completed the study in the room alone and with the door closed. True random assignment was executed by flipping a coin before each participant entered the lab. The experimenter remained blind to study conditions by obfuscating the labels of study materials. In total, 126 participants were recruited. Following preregistered exclusionary criteria, 18 participants were excluded for not being native English speakers and 8 participants were excluded for being over 25 years old.
Data collection continued on an individual basis (within session blocks of up to 5 participants) until preregistered target sample sizes were achieved (50 per condition), accounting for exclusionary criteria. When 50 participants met inclusion criteria for the perfective condition, 46 participants met inclusion criteria for the imperfective condition. To balance conditions, 5 additional participants were recruited and assigned to the imperfective condition, with 1 excluded for being a non-native English speaker. In the final sample (N = 100; 50 per condition), 70 participants were female and 30 participants were male. The average age was 19.95 years (SD = 1.46).

Todd R. Ferretti, Wilfrid Laurier University
OSF: https://osf.io/5uf6s/

Participants were recruited from the Department of Psychology undergraduate testing pool by signing up online (Sona Systems). Participants were also recruited through posters placed around Wilfrid Laurier University. There were 86 participants in total. Thirty-eight of them were undergraduates who signed up through the departmental testing pool and received course credit for their participation. The first 11 undergraduate participants recruited through recruitment posters received $11 for their participation. However, due to the slow pace of recruitment, compensation was modified so that participants received $16 for participation in the study. As a result, a further 37 participants received $16 for their participation. One participant was removed for not meeting the criterion that participants had to be native English speakers. The data analysis was conducted on the remaining 85 participants, which included 43 in the imperfective condition and 42 in the perfective condition. The average age of the 55 female participants (65%) and 30 male participants (35%) was 20.09 years old (SD = 2.81). The lab used consists of three separate rooms, each containing a Mac desktop computer.
Participants performed the study individually on the Mac computers in these rooms. The task took approximately 15 min to complete, including the time to read and sign the consent form.

Michael M. Knepp, University of Mount Union
OSF: https://osf.io/hxaq4/

Participants were undergraduate students recruited from psychology courses at the university. The Sona Systems research management system was used to recruit subjects and to ensure anonymity of the data. Within the Sona system, students were given a link to the study after sign-up, and credit was automatically granted by the system following completion of the questionnaires. Students received .5 Sona credits for completing the online study. Random assignment to groups was done within the Qualtrics survey, and each subject had an equal chance of being selected for either of the two conditions. Ninety students took part in our online-only version of the replication. Four students were excluded from the final analysis as they did not indicate English as their primary language. Within the 86-student sample, both groups had an equal gender ratio (13 men, 30 women) for a total of 26 men (30.2%) and 60 women (69.8%). There was no difference in age between the imperfective (M = 19.16, SD = 1.09) and perfective groups (M = 19.33, SD = 1.41, p > .10).
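Sample-composition figures like those above are easy to double-check. A quick sketch using the Mount Union numbers (86 included participants, 26 men, 60 women):

```python
# Verify the reported gender percentages for the 86-participant sample.
n_total = 86
n_men, n_women = 26, 60

pct_men = round(100 * n_men / n_total, 1)
pct_women = round(100 * n_women / n_total, 1)
print(pct_men, pct_women)  # 30.2 69.8, matching the reported values
```

The same two lines applied to any lab's counts (e.g., 17 of 67 male in the McDaniel sample) reproduce the percentages each report states.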
Christopher A. Kurby, Grand Valley State University
Mackenzie R. Kibbe, Grand Valley State University
OSF: https://osf.io/xiedk/
Participants were introductory-level undergraduate psychology students at Grand Valley State University. Participants were
recruited through an online bulletin board (i.e., Sona Systems).
Each participant took approximately 10 min to complete all
procedures and was compensated with credit to satisfy course
requirements. In total, 136 participants were recruited. Fourteen
participants were excluded for not being native English speakers, and two participants were excluded because of missing
data. In the final sample (N = 120), 89 participants were female
and 31 participants were male. The average age was 18.62 years
(SD = 1.58).
Data was collected by one undergraduate student. The psychology lab room had eight Dell desktops on which the surveys
were administered via Qualtrics. A language history questionnaire was completed with paper and pencil. Debriefing was
conducted with the participants as a group. Participants completed the study in the same room on tables with separators
between them and with the door closed. At no point did participants interact with each other during the study. Participants
were randomly assigned to condition using block randomization of 10-participant blocks to ensure an equal number of
participants per cell. Sixty participants were assigned to each
experimental condition (perfective and imperfective). The
experimenter was blind to study conditions.
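A minimal sketch of the 10-participant block randomization described above: each block contains every condition equally often and is shuffled internally, so the sample stays balanced throughout data collection. The function name, seed, and the 120-participant target are illustrative:

```python
import random

def block_randomize(n_participants, conditions, block_size):
    """Assign participants to conditions in shuffled blocks so that each
    block (and hence the whole sample) is balanced across conditions."""
    assert block_size % len(conditions) == 0
    per_condition = block_size // len(conditions)
    assignments = []
    while len(assignments) < n_participants:
        block = list(conditions) * per_condition  # e.g., 5 of each for size 10
        random.shuffle(block)                     # randomize order within block
        assignments.extend(block)
    return assignments[:n_participants]

random.seed(1)  # illustrative seed for a reproducible plan
plan = block_randomize(120, ["perfective", "imperfective"], block_size=10)
print(plan.count("perfective"), plan.count("imperfective"))  # 60 60
```

Unlike the pure coin-flip assignment used in the NIU lab, block randomization guarantees equal cell sizes without recruiting extra participants to rebalance at the end.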
Joseph M. Melcher, St. Cloud State University
OSF: https://osf.io/3g8bh/
We recruited from the St. Cloud State University (St. Cloud,
MN, USA) Department of Psychology participant pool, which
consists of students taking a psychology course whose instructor offers extra credit for participating in studies. The pool is
administered with Sona Systems, an online system through
which students can browse and sign up for available studies.
It also allows invitations to be sent on the basis of participant
characteristic filters (e.g., self-reported native language is English). Our lab was aiming for 60 participants (30 per condition).
Between September 8 and December 3, 2014, 69 students participated. All participants responded to all of the questions in
the Qualtrics script. The data from 3 participants were excluded
because they self-reported a native language other than English
as part of the demographic survey contained in the Qualtrics
script. No other participants were excluded. This left 33 participants per condition. Participants received course extra credit
based upon the allotted 30 min. Participants were run in groups
of 1–3, each on a computer in separate rooms. The sample
characteristics are consistent with our subject pool characteristics and the sample from the original Hart and Albarracín study:
There were 18 males (27%) and 48 females (73%). Ages ranged
from 18 to 50 (median = 20.0; M = 22.8; SD = 7.1).
Stephen W. Michael, Mercer University
OSF: https://osf.io/8y6bf/
Ninety undergraduates were recruited from introductory psychology courses at a private university in the Southeast in
exchange for course credit. Sign-up sheets for a study on decision making were posted on a bulletin board in the psychology
building. Consistent with preregistration exclusionary criteria,
the only stated qualifications for participation were that the
individual be 18 years of age and speak English as their primary language. The testing area was a computer lab in the
psychology department where participants, in groups of 1–5,
were unable to see others' computer screens. They also wore
headphones throughout the study. Study materials (Qualtrics
scripts) were administered on Dell desktop computers. Pseudorandom assignment was used whereby 45 participants were
randomly assigned to each of Conditions 1 and 2. Research assistants
were blind to the corresponding verb aspect conditions. Seven
participants were excluded from analyses after indicating a language other than English as their primary language, leaving a
final sample of 83 participants. Although preregistration plans
were to recruit a minimum of 40 students in each condition,
unequal exclusions across conditions resulted in 39 participants
in the perfective condition and 44 in the imperfective condition. However, because preregistration for this lab specified a
stopping point at 90 participants, no more participants were
recruited. This final sample was 68.5% female with an average
age of 18.71 (SD = 1.03).
Christopher Poirier, Stonehill College
Nicole Capezza, Stonehill College
Candace Crocker, Stonehill College
OSF: https://osf.io/px6n2/
We recruited participants from the psychology department’s
participant pool at Stonehill College. The participants were
enrolled in one or more of the following courses: General Psychology, Developmental Psychology, Social Psychology, and/
or Introduction to Statistics. They participated in the study as
part of one option for course credit. We used a prescreening
process in Sona Systems to recruit only participants who met
the specified inclusion criteria (e.g., Is English your first and primary language?). A total of 80 participants completed the study;
however, 1 participant was excluded because she did not follow instructions (i.e., the participant did not read the case study
before advancing to the next part). The final sample consisted
of 39 participants in the imperfective condition and 40 participants in the perfective condition. Candace Crocker served
as the experimenter for every session, and she was blind to
condition assignment. Our procedures followed the approved
protocol and did not deviate from our preregistered plan.
Jason M. Prenoveau, Loyola University Maryland
Marianna Carlucci, Loyola University Maryland
OSF: https://osf.io/trxbd/
Participants were undergraduates recruited from the Psychology Research Pool. Participants received either course credit
or extra credit for their participation in the study. Participants
signed up to participate in the study at a given time using the
Psychology Research Pool online recruitment tool.
When participants arrived at the laboratory, they were
greeted by one of four research assistants. All four research
assistants were trained in administering the study protocol and
had run at least two pilot participants (whose data were not
used for final analyses). All subjects completed the protocol in
the same room. The room had five computers that were separated
using dividers so that their screens were not visually accessible
to individuals sitting adjacent to one another. Subjects were
run both individually and in small groups, depending on how
many signed up for a given time slot; per the replication protocol requirements, these small groups did not exceed 5 participants. Subjects were instructed not to speak to one another
during testing.
A random number generator (random.org) was used to
assign participants to conditions; research assistants were blind
to the conditions to which participants were assigned. The target sample size was 50 participants per cell who met the demographic
criteria (i.e., native-English speakers between the ages of 18
and 25) and participants were run until this target was met.
In the perfective condition, 5 of the first 55 participants
were excluded because 2 were outside of the specified age
range and 3 did not identify English as their primary language.
Because data were collected for 74 participants in the perfective
condition, 19 additional participants were excluded because
they exceeded the target sample size of 50 for this condition.
In the imperfective condition, 1 of the first 51 participants was
excluded because they were outside of the specified age range.
Because data were collected for 53 participants in the imperfective condition, 2 additional participants were excluded because
they exceeded the target sample size of 50 for this condition.
The perfective condition had an average age of 19.0 (SD =
1.2) and was 80% female. The imperfective condition had an
average age of 19.0 (SD = 1.0) and was 74% female.
Issues arose during data collection that were not fully covered in the pre-data-collection methods plan. First, there were 4
participants who had problems with Qualtrics and were not able
to complete the study. Data from these participants were not
recorded by Qualtrics and therefore these participants were not
included in the analyses. Second, there was 1 participant who
told the research assistant that they just clicked through and did
not read the questions, and 2 others who, research assistants
noted, spoke during the experiment. However, because of the
methods used in the present study (i.e., multiple participants
run at the same time and no participant identification number
given to participants), there was no way to determine which
data corresponded to these participants. Thus, their data could
not be excluded from possible inclusion in the analyses.
Acknowledgments
The authors would like to thank William Hart for his cooperation throughout the process and for his constructive feedback
on the protocol for the replication studies. We thank Geoff
Cumming for early feedback on the meta-analysis and Daniel
Simons, Lidia Arends, and Samantha Bouwmeester for their
input on performing the meta-(mediation) analyses. Also, we
would like to thank Jelte Wicherts for feedback on a previous
version of this article. Finally, we thank the following people
(alphabetical order) for their help in conducting the experiments: Blake Alexi, Ashley Buck, Joseph Catalano, Molly Cioffi,
Kaitlyn Fritz, Jessica Hagin, Jeffrey Hong, Yin Kong, Megan
Lund, Kaitlyn Moriarty, Danielle Power, Carley Rampy, Caitlin
Romano, and Miamoua Vang.
Declaration of Conflicting Interests
The authors declared that they had no conflicts of interest with
respect to their authorship or the publication of this article.
Funding
Funding for participant payments was provided to individual
labs by the Association for Psychological Science via a grant
from the Center for Open Science.
Notes
1. The effect sizes we calculated ourselves differ slightly
from those reported in the original paper.
2. The mention of “in-lab replication studies” includes the one
online study run by Knepp.
3. τ is a measure of the total heterogeneity. I² is an estimate of the
proportion of the heterogeneity that goes beyond what would
be expected by chance. It is the total heterogeneity divided
by the total variability. H² is the total variability divided by the
sampling variability. The closer it is to 1, the more the variability
across effect size estimates is consistent with sampling variability rather than meaningful heterogeneity. Q is a null-hypothesis
test of whether there is meaningful heterogeneity.
4. T is an estimate of τ.
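The quantities defined in Notes 3 and 4 can be illustrated numerically. The sketch below is not from the article; it computes Q, I², H², and T for a toy set of effect sizes using the standard DerSimonian–Laird estimator of the between-study variance, and the function name and inputs are assumptions.

```python
# Illustrative sketch (not the authors' analysis code): heterogeneity
# statistics for a random-effects meta-analysis, as described in Notes 3-4.
import math

def heterogeneity(effects, variances):
    """Return Q, I^2, H^2, and T for a list of effect sizes and their
    sampling variances, via the DerSimonian-Laird estimator."""
    w = [1.0 / v for v in variances]                   # fixed-effect weights
    mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the pooled estimate.
    q = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)       # between-study variance estimate
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0  # heterogeneity beyond chance
    h2 = q / df                          # total / sampling variability
    return q, i2, h2, math.sqrt(tau2)    # T is the estimate of tau
```

For three equal-variance effect sizes of 0.1, 0.5, and 0.9 with sampling variance 0.04, this gives Q = 8.0, I² = .75, and H² = 4.0, indicating substantial heterogeneity.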
References
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator
variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of
Personality and Social Psychology, 51, 1173–1182.
Carreiras, M., Carriedo, N., Alonso, M. A., & Fernandez, A.
(1997). The role of verb tense and verb aspect in the
foregrounding of information during reading. Memory &
Cognition, 25, 438–446.
Comrie, B. (1985). Tense. Cambridge, England: Cambridge
University Press.
Cumming, G. (2012). Understanding the new statistics: Effect
sizes, confidence intervals, and meta-analysis. New York,
NY: Routledge.
Ferretti, T. R., Kutas, M., & McRae, K. (2007). Verb aspect and
the activation of event knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33,
182–196.
Ferretti, T. R., Rohde, H., Kehler, A., & Crutchley, M. (2009).
Verb aspect, event structure, and coreferential processing.
Journal of Memory and Language, 61, 191–205.
Hart, W., & Albarracín, D. (2011). Learning about what others
were doing: Verb aspect and attributions of mundane and
criminal intent for past actions. Psychological Science, 22,
261–266.
Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical mediation analysis in the new millennium. Communication
Monographs, 76, 408–420.
Johnson-Laird, P. N. (1983). Mental models. Cambridge, MA:
Harvard University Press.
Kozak, M., Marsh, A. A., & Wegner, D. M. (2006). What do
I think you’re doing? Action identification and mind attribution. Journal of Personality and Social Psychology, 90,
543–555.
Madden, C. J., & Ferretti, T. R. (2009). Verb aspect and the
mental representation of situations. The Expression of Time,
3, 217–231.
Madden, C. J., & Zwaan, R. A. (2003). How does verb aspect
constrain event representations? Memory & Cognition, 31,
663–672.
Magliano, J. P., & Schleich, M. C. (2000). Verb aspect and situation models. Discourse Processes, 29, 83–112.
Morrow, D. G., Greenspan, S. L., & Bower, G. H. (1987).
Accessibility and situation models in narrative comprehension. Journal of Memory and Language, 26, 165–187.
Mozuraitis, M., Chambers, C. G., & Daneman, M. (2013).
Younger and older adults’ use of verb aspect and world
knowledge in the online interpretation of discourse.
Discourse Processes, 50, 1–22.
Shrout, P. E., & Bolger, N. (2002). Mediation in experimental
and nonexperimental studies: New procedures and recommendations. Psychological Methods, 7, 422–445.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2012, October
14). A 21 word solution. Retrieved from
http://ssrn.com/abstract=2160588
Van Dijk, T. A., & Kintsch, W. (1983). Strategies of discourse
comprehension. New York, NY: Academic Press.
Vendler, Z. (1957). Verbs and times. The Philosophical Review,
66, 143–160.
Young, L., & Waytz, A. (2013). Mind attribution is for morality.
In S. Baron-Cohen, D. J. Cohen, & H. Tager-Flusberg (Eds.),
Understanding other minds: Perspectives from developmental social neuroscience (pp. 93–103). Oxford, England:
Oxford University Press.
Zwaan, R. A., & Radvansky, G. A. (1998). Situation models in
language and memory. Psychological Bulletin, 123, 162–
185.