dc.contributor.author Yazidi, Anis
dc.contributor.author Zhang, Xuan
dc.contributor.author Lei, Jiao
dc.contributor.author Oommen, John
dc.date.accessioned 2020-02-10T12:36:46Z
dc.date.accessioned 2020-02-11T13:41:38Z
dc.date.available 2020-02-10T12:36:46Z
dc.date.available 2020-02-11T13:41:38Z
dc.date.issued 2019
dc.identifier.citation Yazidi A, Zhang X, Lei J, Oommen J. The Hierarchical Continuous Pursuit Learning Automaton: A Novel Scheme for Environments With Large Numbers of Actions. IEEE Transactions on Neural Networks and Learning Systems. 2019 en
dc.identifier.issn 2162-237X
dc.identifier.issn 2162-2388
dc.identifier.uri https://hdl.handle.net/10642/8102
dc.description.abstract Although the field of learning automata (LA) has made significant progress in the past four decades, the LA-based methods to tackle problems involving environments with a large number of actions are, in reality, relatively unresolved. The extension of the traditional LA to problems within this domain cannot be easily established when the number of actions is very large. This is because the dimensionality of the action probability vector is correspondingly large, and so, most components of the vector will soon have values that are smaller than the machine accuracy permits, implying that they will never be chosen. This paper presents a solution that extends the continuous pursuit paradigm to such large-actioned problem domains. The beauty of the solution is that it is hierarchical, where all the actions offered by the environment reside as leaves of the hierarchy. Furthermore, at every level, we merely require a two-action LA that automatically resolves the problem of dealing with arbitrarily small action probabilities. In addition, since all the LA invoke the pursuit paradigm, the best action at every level trickles up toward the root.
Thus, by invoking the property of the “max” operator, in which the maximum of numerous maxima is the overall maximum, the hierarchy of LA converges to the optimal action. This paper describes the scheme and formally proves its $\epsilon$-optimal convergence. The results presented here can, rather trivially, be extended to the families of discretized and Bayesian pursuit LA too. This paper also reports extensive experimental results (including for environments having 128 and 256 actions) that demonstrate the power of the scheme and its computational advantages. As far as we know, there are no comparable pursuit-based results in the field of LA. In some cases, the hierarchical continuous pursuit automaton requires less than 18% of the number of iterations required by the benchmark $L_{R-I}$ scheme, which is, by all metrics, phenomenal. en
dc.language.iso en en
dc.publisher Institute of Electrical and Electronics Engineers (IEEE) en
dc.relation.ispartofseries IEEE Transactions on Neural Networks and Learning Systems; Volume 31, Issue 2
dc.rights Author can archive post-print (i.e., final draft post-refereeing). © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. en
dc.subject Estimator-based learning automata en
dc.subject Hierarchical learning automata en
dc.subject Learning automata en
dc.subject Large action numbers en
dc.subject Pursuit learning automata en
dc.title The Hierarchical Continuous Pursuit Learning Automaton: A Novel Scheme for Environments With Large Numbers of Actions en
dc.type Journal article en
dc.type Peer reviewed en
dc.date.updated 2020-02-10T12:36:46Z
dc.description.version acceptedVersion en
dc.identifier.doi https://dx.doi.org/10.1109/TNNLS.2019.2905162
dc.identifier.cristin 1769489
dc.source.journal IEEE Transactions on Neural Networks and Learning Systems
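The construction outlined in the abstract — a balanced binary tree whose leaves are the environment's actions, with a two-action continuous pursuit LA at every internal node, all LAs along the chosen root-to-leaf path updated with the same environment feedback — can be sketched as follows. This is a minimal illustrative sketch only, not the authors' implementation: the class names, the pursuit rate λ = 0.02, the heap-style node layout, and the toy 8-action Bernoulli environment are all assumptions introduced here for clarity.

```python
import random

class TwoActionPursuitLA:
    """Continuous pursuit LA over two actions (left/right child)."""
    def __init__(self, lam=0.02):
        self.lam = lam               # pursuit (learning) rate, assumed value
        self.p = [0.5, 0.5]          # action probability vector
        self.counts = [1, 1]         # times each action was chosen (init > 0)
        self.rewards = [0.5, 0.5]    # accumulated rewards (optimistic init)
        self.d_hat = [0.5, 0.5]      # running reward-probability estimates

    def choose(self):
        return 0 if random.random() < self.p[0] else 1

    def update(self, action, reward):
        # Refresh the reward estimate of the action just played.
        self.counts[action] += 1
        self.rewards[action] += reward
        self.d_hat[action] = self.rewards[action] / self.counts[action]
        # Pursuit step: move the probability vector toward the action
        # with the currently highest estimated reward probability.
        best = 0 if self.d_hat[0] >= self.d_hat[1] else 1
        for i in (0, 1):
            target = 1.0 if i == best else 0.0
            self.p[i] = (1 - self.lam) * self.p[i] + self.lam * target

class HierarchicalPursuitLA:
    """Balanced binary tree of two-action pursuit LAs; leaves are actions."""
    def __init__(self, num_actions, lam=0.02):
        assert num_actions & (num_actions - 1) == 0, "power of two assumed"
        self.depth = num_actions.bit_length() - 1
        # Heap-style layout: node k has children 2k+1 and 2k+2;
        # the num_actions - 1 internal nodes each hold one two-action LA.
        self.nodes = [TwoActionPursuitLA(lam) for _ in range(num_actions - 1)]

    def select(self):
        """Walk root to leaf; return the chosen action and the path taken."""
        path, node = [], 0
        for _ in range(self.depth):
            a = self.nodes[node].choose()
            path.append((node, a))
            node = 2 * node + 1 + a
        return node - len(self.nodes), path   # leaf offset = action index

    def update(self, path, reward):
        # Every LA on the chosen path receives the same feedback, so the
        # best action "trickles up" toward the root as estimates sharpen.
        for node, a in path:
            self.nodes[node].update(a, reward)

# Toy demonstration: 8-action Bernoulli environment, action 3 is optimal.
random.seed(0)
REWARD_PROBS = [0.2] * 8
REWARD_PROBS[3] = 0.9
hla = HierarchicalPursuitLA(8, lam=0.02)
for _ in range(10000):
    action, path = hla.select()
    reward = 1.0 if random.random() < REWARD_PROBS[action] else 0.0
    hla.update(path, reward)

# Greedy descent through the learned probability vectors.
node = 0
for _ in range(hla.depth):
    a = 0 if hla.nodes[node].p[0] >= hla.nodes[node].p[1] else 1
    node = 2 * node + 1 + a
print("converged action:", node - len(hla.nodes))
```

Note the key property the abstract highlights: each node only ever faces a two-action decision, so no component of any probability vector shrinks toward machine precision, regardless of how many leaves the tree has.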