<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd" xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>00000ctm a22000003i 4500</leader>
  <controlfield tag="001">UP-1685523046126318128</controlfield>
  <controlfield tag="003">Buklod</controlfield>
  <controlfield tag="005">20231106120037.0</controlfield>
  <controlfield tag="006">m    |o  d |      </controlfield>
  <controlfield tag="007">ta</controlfield>
  <controlfield tag="008">230905s        xx     d     |||| ||    |</controlfield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(iLib)UPMNL-00015383077</subfield>
  </datafield>
  <datafield tag="040" ind1=" " ind2=" ">
   <subfield code="a">NTTCHP</subfield>
   <subfield code="e">rda</subfield>
  </datafield>
  <datafield tag="041" ind1="0" ind2=" ">
   <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="090" ind1="0" ind2="0">
   <subfield code="a">LG995 2001 H32</subfield>
   <subfield code="b">C37</subfield>
  </datafield>
  <datafield tag="100" ind1="1" ind2=" ">
   <subfield code="a">Caparas, Delia S.</subfield>
   <subfield code="e">author</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
   <subfield code="a">Development of standardized evaluation tools for assessing clinical competence of third year medical students rotating in the Department of Pediatrics, De La Salle University Medical Center</subfield>
   <subfield code="c">Delia S. Caparas.</subfield>
  </datafield>
  <datafield tag="264" ind1=" " ind2="0">
   <subfield code="c">2001</subfield>
  </datafield>
  <datafield tag="300" ind1=" " ind2=" ">
   <subfield code="a">74 leaves</subfield>
  </datafield>
  <datafield tag="336" ind1=" " ind2=" ">
   <subfield code="a">text</subfield>
   <subfield code="2">rdacontent</subfield>
  </datafield>
  <datafield tag="337" ind1=" " ind2=" ">
   <subfield code="a">unmediated</subfield>
   <subfield code="2">rdamedia</subfield>
  </datafield>
  <datafield tag="338" ind1=" " ind2=" ">
   <subfield code="a">volume</subfield>
   <subfield code="2">rdacarrier</subfield>
  </datafield>
  <datafield tag="502" ind1=" " ind2=" ">
   <subfield code="a">Major project (Master in Health Professions Education)--University of the Philippines Manila</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
   <subfield code="a">Assessment of clinical competence is generally based on observed performance of skills. In the present setup of the Department of Pediatrics, De La Salle University College of Medicine, evaluation consists of a single tool accomplished by only one rater, the preceptor. The existing evaluation tool provided variable and incomplete data that were not effective in identifying the competencies of the students. The criteria used were generalized, non-measurable, and unstandardized, with emphasis on the cognitive component. This study was done to identify the overall competencies of the students, assessing both the product and the actual performance of skills encompassing the three domains of learning. The objective was to formulate a valid, reliable, generalizable, and practical evaluation tool utilizing the appropriate raters in order to arrive at a fair and objective manner of assessing the clinical competence of third year students who rotated in the Department of Pediatrics. This is a descriptive developmental study conducted with three sets of respondents: 60 third year medical students, 20 preceptors (pediatric consultants), and 20 patients' caretakers. They were chosen by purposive sampling, as they were manning the wards during the researcher's data collection. Four sets of evaluation tools were constructed using rating scales. Field testing was done to correct ambiguous items. The data were then subjected to tests of validity, reliability, generalizability, and practicality to come up with standardized assessment tools. Actual collection of data was done from October to December 2000. The official curricula and appropriate documents were thoroughly reviewed together with other literature.
Major constructs of clinical competence were identified and included: "eliciting pediatric history," "performing appropriate physical examination," "formulating a logical diagnosis based on gathered data and physical examination," "formulating differential diagnosis," "selecting appropriate diagnostic work-up," "justifying a chosen therapeutic plan of management," and "interpersonal skill." Further analysis showed that the emphasis at year level 5 is on the formulation of primary and differential diagnoses. These constructs were broken down into specific behavioral items, which were distributed according to the proportional weight of each construct mentioned earlier. The rating scales were developed and the most appropriate raters were identified. Self and peer ratings were chosen because of the active interaction among students and their length of exposure to one another when they work as a group during the first session. The preceptor's assessment, on the other hand, is based mainly on the discussion during the second session. The cognitive areas, primarily the formulation of primary and differential diagnoses, are best rated by the preceptors as the content experts, the emphasis being on the higher hierarchy of learning. The noncognitive areas, which include communication skill, listening ability, and feeling of empathy, are assessed jointly by self, peer, and caretakers. To establish construct and content validity, the completed rating scales for preceptors, patient, self, and peer were presented to 5 content experts in Pediatrics. Through this Delphi technique, the instruments were revised thrice upon the recommendation of the experts. To determine interrater reliability, the data were subjected to statistical analysis using the kappa statistic measure of agreement set at the p&lt;.05 level of confidence. Results showed a significant degree of agreement among the raters of the students, which further indicates strong interrater reliability in the raters' responses.
Results show a fairly acceptable level of agreement. The tools are generalizable based on the homogeneity of the subjects and a sample size larger than the statistically required number. They are also practical because of their ease of administration and scoring, and because they are economical and user-friendly. This study shows that the evaluation of the clinical competence of students involves several raters who assess different areas of competency encompassing the three domains of learning. The results can be used to provide feedback to both students and faculty to improve the clinical evaluation of students' performance during ward work.</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="0">
   <subfield code="a">Clinical competence</subfield>
   <subfield code="x">Evaluation.</subfield>
  </datafield>
  <datafield tag="905" ind1=" " ind2=" ">
   <subfield code="a">FI</subfield>
  </datafield>
  <datafield tag="905" ind1=" " ind2=" ">
   <subfield code="a">UP</subfield>
  </datafield>
  <datafield tag="852" ind1="0" ind2=" ">
   <subfield code="a">UPMNL</subfield>
   <subfield code="b">NTTC</subfield>
   <subfield code="h">LG 995 2001 H32</subfield>
   <subfield code="i">C37</subfield>
  </datafield>
  <datafield tag="942" ind1=" " ind2=" ">
   <subfield code="a">Thesis</subfield>
  </datafield>
 </record>
</collection>
