<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd" xmlns="http://www.loc.gov/MARC21/slim">
 <record>
  <leader>00000ntmaa22000002i 4500</leader>
  <controlfield tag="001">UP-8027390931312009139</controlfield>
  <controlfield tag="003">Buklod</controlfield>
  <controlfield tag="005">20260212102326.0</controlfield>
  <controlfield tag="006">m|||||o||d||||||||</controlfield>
  <controlfield tag="007">cr |||||||||||</controlfield>
  <controlfield tag="008">260212s2025    xx    a     b ||| u eng  </controlfield>
  <datafield tag="035" ind1=" " ind2=" ">
   <subfield code="a">UPVT-00020035100</subfield>
  </datafield>
  <datafield tag="040" ind1=" " ind2=" ">
   <subfield code="a">UPTC</subfield>
   <subfield code="e">rda</subfield>
  </datafield>
  <datafield tag="041" ind1=" " ind2=" ">
   <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="090" ind1=" " ind2=" ">
   <subfield code="a">LG 993.5 2025 C66</subfield>
   <subfield code="b">P45</subfield>
  </datafield>
  <datafield tag="100" ind1="1" ind2=" ">
   <subfield code="a">Phillips, Charles Roy R.</subfield>
   <subfield code="c">Jr.</subfield>
   <subfield code="e">author.</subfield>
  </datafield>
  <datafield tag="245" ind1="1" ind2="0">
   <subfield code="a">MarketNet</subfield>
   <subfield code="b">multicolor hybrid CNN and ViT for multitask image classification &amp; segmentation</subfield>
   <subfield code="c">Charles Roy R. Phillips, Jr. ; John Paul T. Yusiong, adviser.</subfield>
  </datafield>
  <datafield tag="264" ind1=" " ind2="0">
   <subfield code="a">Tacloban City</subfield>
   <subfield code="b">Division of Natural Sciences and Mathematics, University of the Philippines Tacloban College</subfield>
   <subfield code="c">2025.</subfield>
  </datafield>
  <datafield tag="300" ind1=" " ind2=" ">
   <subfield code="a">xiii, 91 leaves</subfield>
   <subfield code="b">illustrations, color</subfield>
   <subfield code="c">31 cm.</subfield>
  </datafield>
  <datafield tag="336" ind1=" " ind2=" ">
   <subfield code="a">text</subfield>
   <subfield code="2">rdacontent</subfield>
  </datafield>
  <datafield tag="337" ind1=" " ind2=" ">
   <subfield code="a">unmediated</subfield>
   <subfield code="2">rdacontent</subfield>
  </datafield>
  <datafield tag="338" ind1=" " ind2=" ">
   <subfield code="a">volume</subfield>
   <subfield code="2">rdacarrier</subfield>
  </datafield>
  <datafield tag="502" ind1=" " ind2=" ">
   <subfield code="a">Undergraduate thesis (Bachelor of Science in Computer Science) -- University of the Philippines, Tacloban.</subfield>
  </datafield>
  <datafield tag="504" ind1=" " ind2=" ">
   <subfield code="a">Includes bibliographical references.</subfield>
  </datafield>
  <datafield tag="506" ind1=" " ind2=" ">
   <subfield code="a">Available to the general public-YES.</subfield>
  </datafield>
  <datafield tag="506" ind1=" " ind2=" ">
   <subfield code="a">Available only after consultation with author/adviser-NO.</subfield>
  </datafield>
  <datafield tag="506" ind1=" " ind2=" ">
   <subfield code="a">Available only for those bound by confidentiality agreement-NO.</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
   <subfield code="a">Monitoring grocery inventory is important for effective supply chain management and reduction of waste. Current grocery image classification models generally rely on RGB images but fail to capture the complex variations of grocery items. This paper aims to counter that limitation by introducing the novel deep learning model, known as MarketNet, which specifically aims at grocery image-based classification and segmentation tasks using multi-input channels. MarketNet uses a multi-input convolutional neural network (CNN) integrated with a Vision Transformer (VIT) to improve classification and segmentation accuracy The model incorporates multicolor input to learn richer and more discriminative features. The method was tested using a grocery image dataset in which the model classified items into predefined categories while simultaneously segmenting images for further localization. The methodology also includes preprocessing grocery images into multiple color channels followed by feature extraction by CNN. Features are further improved by the Vision Transformer to better accuracy and efficiency in predicting. It is shown experimentally that MarketNet does better in comparison to its baseline variants with the following metrics 98.87% Top-1 accuracy and 99.93% Top-5 accuracy for classification, and 94.54% mIoU, 96.73% accuracy, 96.81% precision, 97.54% recall, and 97.19% F1-Score far segmentation. This work contributes to the progress of grocery image analysis as a framework that can support improved inventory management, reduction in waste, and real-time product tracking within grocery settings. </subfield>
  </datafield>
  <datafield tag="650" ind1=" " ind2="0">
   <subfield code="a">Inventory control</subfield>
   <subfield code="b">Automation.</subfield>
  </datafield>
  <datafield tag="650" ind1=" " ind2="0">
   <subfield code="a">Image processing</subfield>
   <subfield code="x">Digital techniques.</subfield>
  </datafield>
  <datafield tag="700" ind1="1" ind2=" ">
   <subfield code="a">Yusiong, John Paul T.</subfield>
   <subfield code="e">adviser.</subfield>
  </datafield>
  <datafield tag="842" ind1=" " ind2=" ">
   <subfield code="a">Thesis</subfield>
  </datafield>
  <datafield tag="905" ind1=" " ind2=" ">
   <subfield code="a">FI</subfield>
  </datafield>
  <datafield tag="905" ind1=" " ind2=" ">
   <subfield code="a">UP</subfield>
  </datafield>
  <datafield tag="852" ind1="0" ind2=" ">
   <subfield code="a">UPTAC</subfield>
   <subfield code="b">UPTAC</subfield>
   <subfield code="h">LG 993.5 2025 C66</subfield>
   <subfield code="i">P45</subfield>
  </datafield>
  <datafield tag="942" ind1=" " ind2=" ">
   <subfield code="a">Thesis</subfield>
  </datafield>
 </record>
</collection>
