<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd" xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>00000ctm a22000004a 4500</leader>
  <controlfield tag="001">UP-99796217611426037</controlfield>
  <controlfield tag="003">Buklod</controlfield>
  <controlfield tag="005">20230215105418.0</controlfield>
  <controlfield tag="006">m    |o  d |      </controlfield>
  <controlfield tag="007">ta</controlfield>
  <controlfield tag="008">141011s        xx     d     r    |||| u|</controlfield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">(iLib)UPD-00234405546</subfield>
  </datafield>
  <datafield tag="040" ind1=" " ind2=" ">
   <subfield code="a">DENGII</subfield>
   <subfield code="e">rda</subfield>
  </datafield>
  <datafield tag="041" ind1=" " ind2=" ">
   <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="042" ind1=" " ind2=" ">
   <subfield code="a">DMLUC</subfield>
  </datafield>
  <datafield tag="090" ind1=" " ind2=" ">
   <subfield code="a">LG 995 2014 E64</subfield>
   <subfield code="b">A54</subfield>
  </datafield>
  <datafield tag="100" ind1="1" ind2=" ">
   <subfield code="a">Angco, Marc Jordan G.</subfield>
   <subfield code="e">author.</subfield>
  </datafield>
  <datafield tag="245" ind1="1" ind2="0">
   <subfield code="a">Depth perception through adaptive 3D view perspective and motion parallax</subfield>
   <subfield code="c">Marc Jordan G. Angco.</subfield>
  </datafield>
  <datafield tag="264" ind1=" " ind2="1">
   <subfield code="a">Quezon City</subfield>
   <subfield code="b">College of Engineering, University of the Philippines Diliman</subfield>
   <subfield code="c">2014.</subfield>
  </datafield>
  <datafield tag="300" ind1=" " ind2=" ">
   <subfield code="a">x, 76 leaves</subfield>
   <subfield code="b">illustrations</subfield>
   <subfield code="c">28 cm</subfield>
  </datafield>
  <datafield tag="336" ind1=" " ind2=" ">
   <subfield code="a">text</subfield>
   <subfield code="2">rdacontent</subfield>
  </datafield>
  <datafield tag="337" ind1=" " ind2=" ">
   <subfield code="a">unmediated</subfield>
   <subfield code="2">rdamedia</subfield>
  </datafield>
  <datafield tag="338" ind1=" " ind2=" ">
   <subfield code="a">volume</subfield>
   <subfield code="2">rdacarrier</subfield>
  </datafield>
  <datafield tag="502" ind1=" " ind2=" ">
   <subfield code="a">Thesis (M.S. Electrical Engineering)--University of the Philippines, Diliman.</subfield>
  </datafield>
  <datafield tag="506" ind1=" " ind2=" ">
   <subfield code="a">Available to the general public.</subfield>
  </datafield>
  <datafield tag="520" ind1="3" ind2=" ">
   <subfield code="a">Recent years have shown growth in three-dimensional (3D) content and stereoscopic 3D displays. These displays aim to provide realism by giving viewers an illusion of depth. However, stereoscopic 3D displays are prevalent only in larger formats such as television sets and cinemas because of their specialized hardware requirements. Stereoscopic 3D displays are also known to cause viewing discomfort in some users, outweighing the benefits the technology provides. These factors have contributed to the slow adoption of 3D technology. This research provides a method for users to perceive depth through monocular depth cues and motion parallax on 2D displays. Motion parallax is a depth cue based on the change in perspective produced by the viewer's movement. The system developed is composed of a head-tracking system that detects the position and movement of the user and a 3D graphics feedback system that changes the perspective seen by the viewer. The system was implemented on a mobile tablet, with users viewing a scene on the display and the tablet's front camera serving as input. Tests with the system show that users can perceive depth through lateral and forward-backward head movements while viewing the screen. The depth quality that users perceive in the scene shown on the screen is also comparable to the users' perceived depth of the scene in real life.</subfield>
  </datafield>
  <datafield tag="650" ind1=" " ind2="0">
   <subfield code="a">Three-dimensional display systems.</subfield>
  </datafield>
  <datafield tag="650" ind1=" " ind2="0">
   <subfield code="a">Motion perception (Vision).</subfield>
  </datafield>
  <datafield tag="842" ind1=" " ind2=" ">
   <subfield code="a">Thesis</subfield>
  </datafield>
  <datafield tag="905" ind1=" " ind2=" ">
   <subfield code="a">FI</subfield>
  </datafield>
  <datafield tag="905" ind1=" " ind2=" ">
   <subfield code="a">UP</subfield>
  </datafield>
  <datafield tag="852" ind1="0" ind2=" ">
   <subfield code="a">UPD</subfield>
   <subfield code="b">DARCHIVES</subfield>
   <subfield code="h">LG 995 2014 E64</subfield>
   <subfield code="i">A54</subfield>
  </datafield>
  <datafield tag="852" ind1="0" ind2=" ">
   <subfield code="a">UPD</subfield>
   <subfield code="b">DENG-II</subfield>
   <subfield code="h">LG 995 2014 E64</subfield>
   <subfield code="i">A54</subfield>
  </datafield>
  <datafield tag="942" ind1=" " ind2=" ">
   <subfield code="a">Thesis</subfield>
  </datafield>
 </record>
</collection>
