Please use this identifier to cite or link to this item: http://repo.lib.jfn.ac.lk/ujrr/handle/123456789/3805
Full metadata record
dc.contributor.author: Janani, T.
dc.contributor.author: Ramanan, A.
dc.date.accessioned: 2021-08-19T03:11:26Z
dc.date.accessioned: 2022-06-28T10:19:57Z
dc.date.available: 2021-08-19T03:11:26Z
dc.date.available: 2022-06-28T10:19:57Z
dc.date.issued: 2017
dc.identifier.uri: http://repo.lib.jfn.ac.lk/ujrr/handle/123456789/3805
dc.description.abstract: The Bag-of-Features (BoF) approach has been successfully applied to visual object classification tasks. Recently, convolutional neural networks (CNNs) have demonstrated excellent performance on object classification problems. In this paper we propose to construct a new feature set by fusing CNN activations from convolutional layers with the traditional BoF representation for efficient object classification using SVMs. The dimensionality of the convolutional features was reduced using PCA, and the bag-of-features representation was reduced by tailoring the visual codebook with a statistical codeword selection method, yielding a compact representation of the new feature set that achieves an increased classification rate while requiring less storage. The proposed framework, based on the new features, outperforms other state-of-the-art approaches evaluated on the benchmark datasets Xerox7, UIUC Texture, and Caltech-101. (See the pipeline sketch after this record.) [en_US]
dc.language.iso: en [en_US]
dc.publisher: University of Jaffna [en_US]
dc.subject: object classification [en_US]
dc.subject: bag-of-features [en_US]
dc.subject: convolutional neural network [en_US]
dc.subject: deep learning [en_US]
dc.subject: shallow learning [en_US]
dc.title: Feature Fusion for Efficient Object Classification Using Deep and Shallow Learning [en_US]
dc.type: Article [en_US]
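
The abstract above outlines a fusion pipeline: reduce CNN convolutional-layer activations with PCA, combine them with a bag-of-features histogram built over a tailored visual codebook, and classify the fused vectors with an SVM. The following is a minimal illustrative sketch of that pipeline in Python, not the authors' implementation; the feature extractors are stubbed with random data, and the dimensions, the L2 normalisation, and the choice of a linear SVM (scikit-learn's LinearSVC) are assumptions for illustration only.

# Illustrative sketch (not the paper's code): fuse PCA-reduced CNN
# convolutional activations with a bag-of-features histogram and train an SVM.
# All sizes and parameter values below are assumed, not taken from the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_images, n_classes = 200, 7

# Stand-in for CNN convolutional-layer activations (e.g. pooled feature maps).
cnn_feats = rng.normal(size=(n_images, 4096))

# Stand-in for bag-of-features histograms over a tailored visual codebook.
bof_hist = rng.random(size=(n_images, 400))

labels = rng.integers(0, n_classes, size=n_images)

# Reduce the dimensionality of the CNN activations with PCA.
pca = PCA(n_components=128)
cnn_reduced = pca.fit_transform(cnn_feats)

# Fuse the two representations by L2-normalising each and concatenating.
fused = np.hstack([normalize(cnn_reduced), normalize(bof_hist)])

# Train a linear SVM on the fused feature set.
clf = LinearSVC(C=1.0).fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))

In practice the stubbed arrays would be replaced by real convolutional activations and codebook histograms, and the classifier would be evaluated on a held-out split rather than on the training data.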
Appears in Collections: Interdisciplinary Studies FoT

Files in This Item:
File: Feature Fusion for Efficient Object Classification Using.pdf (555.94 kB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.