add health made a statement about kanazawa’s use of their data over @scientific american (scroll down, it’s at the bottom of the blog post), which includes a bit about the interviewers and their evaluations of subjects’ attractiveness:
“Interviewer ratings of respondent attractiveness represent a subjective ‘societal’ perception of the respondent’s attractiveness. We included these items because there is a long line of research evidence that indicates that perceived attractiveness is related to important health and social outcomes, including access to health care, health education and instruction, job search, promotions, academic achievement, and social success in friendship and marriage. For example, males who are rated more highly attractive tend to have higher wages, shorter periods of unemployment, and greater success in the job market*. In Add Health, we measure respondents’ self-perceptions and in the case of interviewer ratings, others’ perceptions. Despite one’s own perception of one’s intelligence, identity and appearance, often societal perceptions matter as well, and matter in ways that research needs to understand to inform policies to prevent discrimination, unequal access to resources, and social inequality.
“Because the interviewer’s perception is subjective, researchers need to account for the characteristics and life experiences of the interviewer in interpreting their ratings. A wealth of research on perceived attractiveness (that is, as perceived by others, not oneself) has shown that such ratings vary according to the characteristics of the rater. For example, a male interviewer might rate a female’s attractiveness according to different criteria than a female interviewer rating the same female’s attractiveness. Other interviewer characteristics that are important to take into account are age, race, ethnicity, education, geographic location, and life experiences, in general. Notably, several characteristics of the interviewers are available in the restricted use Add Health dataset at Waves 3 and 4. It is these data (e.g., interviewer age, sex, race, ethnicity, education) that might more usefully inform an analysis undertaken to investigate the role of other-perceived versus self-perceived attractiveness on some outcome of interest (employment, health, etc)….”
so, to settle the question of “who were the interviewers,” somebody just needs to go get the data from add health and blog it. since it’s “restricted use” data, presumably a non-accredited nobody like yours truly prolly wouldn’t get access. but maybe some actual scientist** (perhaps one who is also a blogger!) will step up to the plate.
the neuroskeptic looked at the possible bias of the interviewers from another angle, a very creative one if i may say so. he tried to find out whether any other researchers had found an anti-black bias on the part of the add health interviewers:
“The obvious problem is that maybe the interviewers were biased against black women, and rated them lower for that reason. Kanazawa didn’t consider this in his post, which is unquestionably an oversight, but he did go on to speculate as to the biological reasons why they might be less attractive.
“However, looking at the original Add Health data, can we check whether this bias was at play or not?
“Short answer: I found no evidence either way.
“Long answer: I first looked over the Add Health website but it doesn’t seem to mention anything about who the interviewers were. It doesn’t mention their own ethnicity, which would be helpful, although even if they were all black themselves, they might have internalized racism, so that wouldn’t be conclusive. They were trained, but then, you can’t train someone to not be a racist.
“Then I decided to look at the publications. I searched Google Scholar for ‘Add Health’ + attractiveness. This reveals a number of articles, including a 2007 one by Kanazawa ironically, but only one seemed really relevant: Weight Preoccupation as a Function of Observed Physical Attractiveness. (There are other hits, but I skimmed the most likely looking ones and they didn’t address bias.)
“The details are unimportant, but it involved race and attractiveness, so the authors had to deal with the question of potential rater bias. Unlike Kanazawa they didn’t just brush this under the carpet:
‘Although the interviewers were different races and ethnicities, there is no information about the race or ethnicity of the interviewer for any one respondent to examine systematic bias. [altho the add health people above seem to say otherwise for waves iii and iv. – hbdchick]
‘However, post hoc cluster analyses that controlled for an interviewer effect yielded similar results; thus, it is unlikely that interviewers had any substantial biases against any one ethnic group or that they rated attractiveness significantly differently from each other.’
“The point about ‘post-hoc cluster analysis’ is the key here. To try to control for rater effects (not just racial ones) they analyzed the data covarying for which interviewer rated each girl. They didn’t know what races the interviewers were, but they did know which girls got rated by the same interviewer. They found that controlling for the rater did not affect their results.
“So does that mean there was no bias? No. Because – this only applies to their results, which were not about attractiveness per se, but about the interaction of attractiveness with other factors to predict an outcome variable (dieting and concern about weight) within a given race….
“So in my judgement, we just can’t tell. Unless I’ve missed something, in which case, please tell us about it in the comments.”
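for the code-inclined, here’s a minimal sketch of the sort of rater control neuroskeptic describes: simulate ratings with made-up per-interviewer “harshness” effects, then absorb an interviewer fixed effect by demeaning within interviewer before regressing. all the numbers, effect sizes, and variable names here are invented for illustration only; this is emphatically NOT the actual add health data or the authors’ exact analysis, just the general logic of covarying for which interviewer did the rating.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical setup: 20 interviewers, 30 respondents each
n_interviewers, per = 20, 30
interviewer = np.repeat(np.arange(n_interviewers), per)

# each interviewer gets their own invented baseline harshness/leniency
rater_effect = rng.normal(0.0, 1.0, n_interviewers)[interviewer]
true_attr = rng.normal(0.0, 1.0, n_interviewers * per)
rating = true_attr + rater_effect + rng.normal(0.0, 0.5, true_attr.size)

# the outcome depends on "true" attractiveness, not on who did the rating
outcome = 0.8 * true_attr + rng.normal(0.0, 1.0, true_attr.size)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    x = x - x.mean()
    return (x @ (y - y.mean())) / (x @ x)

def demean_by(group, v):
    """subtract each group's mean from v, i.e. absorb a group fixed effect."""
    means = np.zeros(group.max() + 1)
    np.add.at(means, group, v)
    means /= np.bincount(group)
    return v - means[group]

# naive: regress outcome on raw ratings (rater noise attenuates the slope)
naive = slope(rating, outcome)

# rater control: demean rating and outcome within each interviewer first
adjusted = slope(demean_by(interviewer, rating),
                 demean_by(interviewer, outcome))

print(round(naive, 2), round(adjusted, 2))
```

the “adjusted” slope comes out larger because the interviewer-level noise gets differenced out, which is the same basic logic as the paper’s post hoc cluster control: if controlling for the rater doesn’t change your answer, systematic rater bias probably isn’t driving it.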
**i am, emphatically, NOT a scientist. i don’t even play one here on the innerwebs. i’m just a lay person interested in science-y stuff.
(note: comments do not require an email. or anti-matter. wait. wha?)