Unsupervised State Representation Learning in Partially Observable Atari Games
Chapter, Peer reviewed, Conference object, Journal article
Accepted version
Permanent link: https://hdl.handle.net/11250/3120462
Publication date: 2023
Original version: https://doi.org/10.1007/978-3-031-44240-7_21

Abstract
State representation learning aims to capture the latent factors of an environment. Although some researchers have noted the connection between masked image modeling and contrastive representation learning, their efforts focus on using masks as an augmentation technique to better represent the latent generative factors. Partially observable environments in reinforcement learning have not yet been carefully studied with unsupervised state representation learning methods. In this article, we propose an unsupervised state representation learning scheme for partially observable states. We conduct our experiments on a previous Atari 2600 framework designed to evaluate representation learning models. A contrastive method called Spatiotemporal DeepInfomax (ST-DIM) has shown state-of-the-art performance on this benchmark but remains inferior to its supervised counterpart. Our approach improves on ST-DIM when the environment is not fully observable and achieves higher F1 and accuracy scores than the supervised counterpart: the mean accuracy score averaged over categories is 66% for our approach versus 38% for supervised learning, and the mean F1 score is 64% versus 33%. The code is available at https://github.com/mengli11235/MST_DIM.
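To make the combination of ideas in the abstract concrete, the sketch below illustrates the two ingredients it names: an InfoNCE-style contrastive objective between representations of temporally adjacent frames (the core of ST-DIM-like methods) and random masking of the input as an augmentation. This is a minimal toy illustration in numpy, not the paper's implementation; the encoder (a fixed random projection), the masking ratio, and all shapes are assumptions chosen only to keep the example self-contained and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def infonce_loss(anchors, positives):
    """InfoNCE over a batch: for each anchor, the same-index row of
    `positives` is the positive pair and all other rows are negatives."""
    logits = anchors @ positives.T                 # (B, B) similarity scores
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # -log p(positive | anchor)

def random_mask(frames, p=0.25):
    """Zero out a random fraction p of pixels (mask-as-augmentation)."""
    keep = rng.random(frames.shape) >= p
    return frames * keep

# Toy setup: batch of frames at time t and slightly perturbed frames at t+1,
# standing in for temporally adjacent Atari observations.
B, H, W, D = 8, 16, 16, 32
proj = rng.normal(size=(H * W, D)) / np.sqrt(H * W)  # fixed "encoder"

frames_t  = rng.normal(size=(B, H, W))
frames_t1 = frames_t + 0.1 * rng.normal(size=(B, H, W))

# Encode masked views of both time steps and contrast them.
z_t  = random_mask(frames_t).reshape(B, -1) @ proj
z_t1 = random_mask(frames_t1).reshape(B, -1) @ proj

loss = infonce_loss(z_t, z_t1)
print(float(loss))
```

In a real ST-DIM-style model the fixed projection would be a learned convolutional encoder, the loss would also be computed between global and local (patch-level) feature maps, and the masking would be applied during training to encourage representations that are robust to partial observability.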