dc.contributor.author  Jha, Debesh
dc.contributor.author  Yazidi, Anis
dc.contributor.author  Riegler, Michael Alexander
dc.contributor.author  Johansen, Dag
dc.contributor.author  Johansen, Håvard D.
dc.contributor.author  Halvorsen, Pål
dc.date.accessioned  2022-04-19T08:19:58Z
dc.date.available  2022-04-19T08:19:58Z
dc.date.created  2022-01-20T16:28:35Z
dc.date.issued  2021-02-21
dc.identifier.isbn  978-3-030-69244-5
dc.identifier.isbn  978-3-030-69243-8
dc.identifier.issn  1611-3349
dc.identifier.issn  0302-9743
dc.identifier.uri  https://hdl.handle.net/11250/2991234
dc.description.abstract  Deep Neural Networks (DNNs) have become the de facto standard in computer vision, as well as in many other pattern recognition tasks. A key drawback of DNNs is that the training phase can be very computationally expensive. Organizations or individuals that cannot afford to purchase state-of-the-art hardware or to tap into cloud-hosted infrastructure may face long waiting times before training completes, or may not be able to train a model at all. Investigating novel ways to reduce the training time could alleviate this drawback and thus enable more rapid development of new algorithms and models. In this paper, we propose LightLayers, a method for reducing the number of trainable parameters in deep neural networks (DNNs). The proposed LightLayers consist of a LightDense and a LightConv2D layer that are as efficient as regular Dense and Conv2D layers but use fewer parameters. We resort to matrix factorization to reduce the complexity of the DNN models, resulting in lightweight models that require less computational power without much loss in accuracy. We have tested LightLayers on the MNIST, Fashion MNIST, CIFAR-10, and CIFAR-100 datasets. Promising results are obtained for MNIST, Fashion MNIST, and CIFAR-10, whereas CIFAR-100 shows acceptable performance while using fewer parameters.  en_US
dc.language.iso  eng  en_US
dc.publisher  Springer  en_US
dc.relation.ispartof  Parallel and Distributed Computing, Applications and Technologies: 21st International Conference, PDCAT 2020, Shenzhen, China, December 28–30, 2020, Proceedings
dc.relation.ispartofseries  Lecture Notes in Computer Science; Volume 12606
dc.subject  Deep learning  en_US
dc.subject  Lightweight models  en_US
dc.subject  Convolutional neural networks  en_US
dc.subject  MNIST  en_US
dc.subject  CIFAR-10  en_US
dc.subject  Weight decomposition  en_US
dc.title  LightLayers: Parameter Efficient Dense and Convolutional Layers for Image Classification  en_US
dc.type  Conference object  en_US
dc.description.version  acceptedVersion  en_US
cristin.ispublished  true
cristin.fulltext  postprint
cristin.qualitycode  1
dc.identifier.doi  https://doi.org/10.1007/978-3-030-69244-5_25
dc.identifier.cristin  1986602
dc.source.journal  Lecture Notes in Computer Science  en_US
dc.source.volume  12606  en_US
dc.source.issue  12606  en_US
dc.source.pagenumber  12  en_US
dc.relation.project  Norges forskningsråd: 263248  en_US
dc.subject.nsi  VDP::Kommunikasjon og distribuerte systemer: 423  en_US
dc.subject.nsi  VDP::Communication and distributed systems: 423  en_US
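
The abstract describes reducing trainable parameters by factorizing a layer's weight matrix. As a rough illustration of that general idea only, and not the authors' LightLayers implementation (whose exact formulation, rank choice, and initialization are given in the paper), the following NumPy sketch replaces a dense layer's full weight matrix W with two low-rank factors U and V. The class name LowRankDense, the rank hyperparameter, and the He-style initialization are assumptions made for this example.

    # Hypothetical sketch of weight factorization for a dense layer:
    # W (m x n) is approximated by U (m x k) @ V (k x n), storing
    # k*(m+n) parameters instead of m*n. Not the paper's exact method.
    import numpy as np

    class LowRankDense:
        """Dense layer whose weights are factorized as W ~= U @ V."""

        def __init__(self, in_features: int, out_features: int, rank: int):
            rng = np.random.default_rng(0)
            scale = np.sqrt(2.0 / in_features)  # He-style init (assumed)
            self.U = rng.normal(0.0, scale, size=(in_features, rank))
            self.V = rng.normal(0.0, scale, size=(rank, out_features))
            self.b = np.zeros(out_features)

        def __call__(self, x: np.ndarray) -> np.ndarray:
            # Computing (x @ U) @ V never materializes the full m x n matrix.
            return (x @ self.U) @ self.V + self.b

        def param_count(self) -> int:
            return self.U.size + self.V.size + self.b.size

    # Example: a 784 -> 128 layer with rank 16 stores
    # 16*(784+128) + 128 = 14,720 parameters vs 784*128 + 128 = 100,480.
    layer = LowRankDense(784, 128, rank=16)
    x = np.random.default_rng(1).normal(size=(32, 784))
    print(layer(x).shape, layer.param_count())

With rank k, the factorized layer stores k*(in + out) weights instead of in*out, which is the source of the parameter savings the abstract refers to; choosing k trades model capacity against size.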

