<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd" xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>00000caa a22000003a 4500</leader>
  <controlfield tag="001">UP-8027390931312424523</controlfield>
  <controlfield tag="003">Buklod</controlfield>
  <controlfield tag="005">20140911150234.0</controlfield>
  <controlfield tag="006">o--- |     ||   ||</controlfield>
  <controlfield tag="007">ta</controlfield>
  <controlfield tag="008">140911s        xx     d     r    |||| u|</controlfield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(iLib)UPCEB-00008419479</subfield>
  </datafield>
  <datafield tag="040" ind1=" " ind2=" ">
   <subfield code="a">emz</subfield>
  </datafield>
  <datafield tag="041" ind1=" " ind2=" ">
   <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="100" ind1="1" ind2=" ">
   <subfield code="a">Paunonen, Sampo V.</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="4">
    <subfield code="a">Socially desirable responding and its elusive effects on the validity of personality assessments. [article].</subfield>
  </datafield>
  <datafield tag="300" ind1=" " ind2=" ">
   <subfield code="a">pp. 158-175</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">Past studies of socially desirable self-reports on the items of personality measures have found inconsistent effects of the response bias on the measures' predictive validities, with some studies reporting small effects and other studies reporting large effects. Using Monte Carlo methods, we evaluated various models of socially desirable responding by systematically adding predetermined amounts of the bias to the simulated personality trait scores of hypothetical test respondents before computing test-criterion validity correlations. Our study generally supported previous findings that have reported relatively minor decrements in criterion prediction, even with personality scores that were massively infused with desirability bias. Furthermore, the response bias failed to reveal itself as a statistical moderator of test validity or as a suppressor of validity. Large differences between some respondents' obtained test scores and their true trait scores, however, meant that the personality measure's construct validity would be severely compromised and, more specifically, that estimates of those individuals' criterion performance would be grossly in error. Our discussion focuses on reasons for the discrepant results reported in the literature pertaining to the effect of socially desirable responding on criterion validity. More important, we explain why the lack of effects of desirability bias on the usual indicators of validity, moderation, and suppression should not be surprising. - (From the authors)</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
   <subfield code="a">Social desirability.</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">Personality assessment.</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
   <subfield code="a">Response bias.</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
   <subfield code="a">Test validity.</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
   <subfield code="a">Monte Carlo simulation.</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
   <subfield code="a">Journal of personality and social psychology</subfield>
   <subfield code="g">vol.103, 2012.</subfield>
  </datafield>
  <datafield tag="942" ind1=" " ind2=" ">
   <subfield code="a">Analytics</subfield>
  </datafield>
 </record>
</collection>
