dc.contributor.author: Yazidi, Anis
dc.contributor.author: Hassan, Ismail
dc.contributor.author: Hammer, Hugo Lewi
dc.contributor.author: Oommen, John
dc.date.accessioned: 2021-02-01T22:44:50Z
dc.date.accessioned: 2021-03-10T18:55:29Z
dc.date.available: 2021-02-01T22:44:50Z
dc.date.available: 2021-03-10T18:55:29Z
dc.date.issued: 2020-08-05
dc.identifier.citation: Yazidi A, Hassan I, Hammer HL, Oommen J. Achieving Fair Load Balancing by Invoking a Learning Automata-based Two Time Scale Separation Paradigm. IEEE Transactions on Neural Networks and Learning Systems. 2020
dc.identifier.issn: 2162-237X
dc.identifier.issn: 2162-2388
dc.identifier.uri: https://hdl.handle.net/10642/9988
dc.description.abstract: In this article, we consider the problem of load balancing (LB), but, unlike earlier approaches, we attempt to resolve the problem in a fair manner (or, more precisely, an ε-fair manner: although LB can probably never be totally fair, we come "as close to fair as possible"). The solution that we propose invokes a novel stochastic learning automaton (LA) scheme so as to distribute the load among a number of nodes such that the performance level at the different nodes is approximately equal and each user experiences approximately the same Quality of Service (QoS) irrespective of the node to which he/she is connected. Since the load varies dynamically, static resource allocation schemes are doomed to underperform. This is particularly relevant in cloud environments, where dynamic approaches are needed because the available resources are unpredictable (or rather, uncertain) by virtue of the shared nature of the resource pool. Furthermore, we prove here that there is a coupling between the LA's probabilities and the dynamics of the rewards themselves, which renders the environment nonstationary and leads to the emergence of the so-called property of "stochastic diminishing rewards." Our newly proposed LA algorithm solves the problem ε-optimally by resorting to a two-time-scale stochastic learning paradigm. As far as we know, the results presented here are of a pioneering sort, and we are unaware of any comparable results.
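To make the two-time-scale idea in the abstract concrete, here is a minimal sketch, not the authors' algorithm from the paper: per-node QoS estimates are smoothed on a fast time scale while the LA's routing probabilities are updated by a linear reward-inaction rule on a slow one. The node capacities, the reward definition, and both step sizes (CAPACITY, FAST, SLOW) are hypothetical choices made purely for illustration.

```python
import random

# Assumption-laden sketch of a two-time-scale LA load balancer.
# Fast scale: smoothing of per-node QoS estimates.
# Slow scale: reward-inaction update of the routing probabilities.

NODES = 3
CAPACITY = [1.0, 0.6, 0.3]   # hypothetical relative node capacities
FAST = 0.05                  # fast step size (QoS-estimate smoothing)
SLOW = 0.002                 # slow step size (LA probability update)

p = [1.0 / NODES] * NODES    # LA action (routing) probabilities
qos = [1.0] * NODES          # smoothed per-node QoS estimates

def pick_node():
    """Sample a node according to the LA's current probabilities."""
    r, acc = random.random(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if r < acc:
            return i
    return NODES - 1

for _ in range(200_000):
    i = pick_node()
    # A node's QoS sample falls as it attracts more traffic: capacity
    # divided by the fraction of load routed to it. This coupling of p
    # and the rewards is what makes the environment nonstationary.
    sample = CAPACITY[i] / max(p[i], 1e-9)
    qos[i] += FAST * (sample - qos[i])            # fast time scale
    # Bernoulli reward: more likely when node i's smoothed QoS beats
    # the probability-weighted mean QoS across all nodes.
    mean_qos = sum(q * w for q, w in zip(qos, p))
    if random.random() < qos[i] / (qos[i] + mean_qos):
        # Reward-inaction step on the slow time scale.
        for j in range(NODES):
            p[j] *= 1.0 - SLOW
        p[i] += SLOW

# The probabilities should roughly settle proportionally to capacity,
# equalizing the smoothed QoS across nodes (epsilon-fairness).
print([round(x, 3) for x in p])
```

Because each node's QoS sample diminishes as its routing probability grows, the rewards shrink stochastically in exactly the sense the abstract describes, and the reward-inaction drift pushes the probabilities toward the interior point where all smoothed QoS values coincide.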
dc.description.sponsorship: The work of the last author was partially supported by NSERC, the Natural Sciences and Engineering Research Council of Canada.
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartofseries: IEEE Transactions on Neural Networks and Learning Systems
dc.rights: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Continuous learning automata
dc.subject: Fair load balancing
dc.subject: Resource allocation
dc.title: Achieving Fair Load Balancing by Invoking a Learning Automata-based Two Time Scale Separation Paradigm
dc.type: Journal article
dc.type: Peer reviewed
dc.date.updated: 2021-02-01T22:44:50Z
dc.description.version: acceptedVersion
dc.identifier.doi: https://ieeexplore.ieee.org/document/9159930
dc.identifier.cristin: 1822128
dc.source.journal: IEEE Transactions on Neural Networks and Learning Systems

