Search Results (2)

Search Parameters:
Keywords = Flappy Bird game

20 pages, 4515 KB  
Article
Playing Flappy Bird Based on Motion Recognition Using a Transformer Model and LIDAR Sensor
by Iveta Dirgová Luptáková, Martin Kubovčík and Jiří Pospíchal
Sensors 2024, 24(6), 1905; https://doi.org/10.3390/s24061905 - 16 Mar 2024
Cited by 1 | Viewed by 3031
Abstract
A transformer neural network is employed in the present study to predict Q-values in a simulated environment using reinforcement learning techniques. The goal is to teach an agent to navigate and excel in the Flappy Bird game, which became a popular model for control in machine learning approaches. Unlike most top existing approaches that use the game’s rendered image as input, our main contribution lies in using sensory input from LIDAR, which is represented by the ray casting method. Specifically, we focus on understanding the temporal context of measurements from a ray casting perspective and optimizing potentially risky behavior by considering the degree of the approach to objects identified as obstacles. The agent learned to use the measurements from ray casting to avoid collisions with obstacles. Our model substantially outperforms related approaches. Going forward, we aim to apply this approach in real-world scenarios.
(This article belongs to the Special Issue Robust Motion Recognition Based on Sensor Technology)
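The abstract above describes replacing image input with LIDAR-style measurements obtained by ray casting. As a rough illustration of that observation model (not the paper's actual implementation — the marching step, ray fan, and rectangle obstacles here are assumptions), the agent's state at each time step can be a vector of distances from the bird to the nearest obstacle along a fan of rays:

```python
import math

def ray_cast(origin, angle, obstacles, max_range=100.0, step=0.5):
    """March a ray from `origin` at `angle` (radians) until it enters an
    axis-aligned rectangle (x_min, y_min, x_max, y_max) or reaches
    max_range. Returns the distance travelled -- one LIDAR-style reading."""
    dx, dy = math.cos(angle), math.sin(angle)
    d = 0.0
    while d < max_range:
        x, y = origin[0] + d * dx, origin[1] + d * dy
        for (x0, y0, x1, y1) in obstacles:
            if x0 <= x <= x1 and y0 <= y <= y1:
                return d
        d += step
    return max_range

def lidar_state(origin, obstacles, n_rays=8):
    """Sweep n_rays evenly from straight down to straight up; the
    resulting distance vector is the agent's observation for one step."""
    angles = [-math.pi / 2 + i * math.pi / (n_rays - 1) for i in range(n_rays)]
    return [ray_cast(origin, a, obstacles) for a in angles]
```

In the paper's setting such vectors, stacked over time, would form the sequence fed to the transformer that predicts Q-values; the network architecture itself is not sketched here.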
15 pages, 6751 KB  
Article
Using a Reinforcement Q-Learning-Based Deep Neural Network for Playing Video Games
by Cheng-Jian Lin, Jyun-Yu Jhang, Hsueh-Yi Lin, Chin-Ling Lee and Kuu-Young Young
Electronics 2019, 8(10), 1128; https://doi.org/10.3390/electronics8101128 - 7 Oct 2019
Cited by 19 | Viewed by 5394
Abstract
This study proposed a reinforcement Q-learning-based deep neural network (RQDNN) that combined a deep principal component analysis network (DPCANet) and Q-learning to determine a playing strategy for video games. Video game images were used as the inputs. The proposed DPCANet was used to initialize the parameters of the convolution kernel and capture the image features automatically. It performs as a deep neural network and requires less computational complexity than traditional convolution neural networks. A reinforcement Q-learning method was used to implement a strategy for playing the video game. Both Flappy Bird and Atari Breakout games were implemented to verify the proposed method in this study. Experimental results showed that the scores of our proposed RQDNN were better than those of human players and other methods. In addition, the training time of the proposed RQDNN was also far less than other methods.
(This article belongs to the Section Artificial Intelligence)
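Both papers above rest on the same Q-learning backbone: after each action the value estimate Q(s, a) is nudged toward the observed reward plus the discounted value of the best next action. A minimal tabular sketch of that Bellman backup (the papers use deep networks rather than a table; the function name, defaultdict storage, and default hyperparameters here are illustrative assumptions):

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One Q-learning backup:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q[(s, a)]

# Toy usage: two Flappy Bird actions, one transition with reward 1.0.
Q = defaultdict(float)
q_update(Q, "s0", "flap", 1.0, "s1", ["flap", "idle"])
```

In the deep variants (the transformer model and RQDNN), the table lookup is replaced by a network that maps the observation to one Q-value per action, trained toward the same target.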
