Jekyll2023-08-03T18:50:16+00:00https://paulobruno.github.io/feed.xmlPaulo BrunoPersonal Site.Paulo Bruno SerafimDRLeague: a Novel 3D Environment for Training Reinforcement Learning Agents2022-10-25T00:00:00+00:002022-10-25T00:00:00+00:00https://paulobruno.github.io/publication/SBGames-drleague<p><em>XXI Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)</em></p>
<p> <a href="https://hyuan02.github.io/">Hyuan Peixoto Farrapo</a><sup>1,2</sup>
<a href="https://romulofff.github.io/">Rômulo Freire Férrer Filho</a><sup>1,3</sup><br />
<a href="https://scholar.google.com.br/citations?user=gnTTsAYAAAAJ&hl=en">José Gilvan Rodrigues Maia</a><sup>1,2</sup>
<a href="https://paulobruno.github.io">Paulo Bruno de Sousa Serafim</a><sup>4</sup></p>
<p style="font-size:0.7em">
<sup>1</sup>Federal University of Ceará (UFC)
<sup>2</sup>Virtual UFC Institute<br />
<sup>3</sup>Department of Computing (DC)
<sup>4</sup>COATI, Inria centre at Université Côte d'Azur
</p>
<p><img src="/assets/images/drleague.jpg" alt="DRLeague" /></p>
<hr />
<!-- Paper: [[PDF](https://www.sbgames.org/proceedings2022/ComputacaoFull/.pdf)] -->
<!-- Page: [[IEEE](https://ieeexplore.ieee.org/document/)] -->
<h3 id="abstract">Abstract</h3>
<p style="text-align:justify;">
The development of autonomous agents performing unique interactions that resemble human-like behavior is currently driven by Deep Reinforcement Learning (DRL) techniques combined with complex virtual environments. These constitute an active field of research fueled by environments usually inspired by, or borrowed from, video games. Although works in the area rarely make use of trending 3D games, such games are interesting testbeds for more complex and compelling behaviors, since they tend to involve more variables than their predecessors. This paper introduces DRLeague, a novel open-source and easily customizable DRL environment that supports 3D game mechanics inspired by the popular “car football” game Rocket League. Besides the typical gameplay, we implemented four challenging minigames based on the mechanics of this title, with advanced physics simulation and fine-grained car control: penalty shoot, multiplayer penalty shoot, barrier kick, and aerial shoot, each requiring more complex skills than the previous one. Finally, we provide solid baseline experimental results showing the learning progress of agents trained with Unity’s ML-Agents toolkit, establishing DRLeague as a suitable testbed for machine learning techniques.
</p>
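Each minigame needs a reward signal for training. As a hedged illustration only (not DRLeague's actual reward function, whose definition is given in the paper), a shaped reward for a penalty-shoot minigame could combine terminal goal/miss outcomes with a small dense term based on ball-to-goal distance:

```python
import math

def penalty_shoot_reward(ball_pos, goal_pos, scored, missed):
    """Hypothetical shaped reward for a penalty-shoot minigame:
    a large terminal bonus/penalty, plus a small dense term that
    grows as the ball gets closer to the goal."""
    if scored:
        return 1.0
    if missed:
        return -1.0
    dist = math.dist(ball_pos, goal_pos)
    # dense shaping term in (0, 0.1]: closer ball -> larger reward
    return 0.1 / (1.0 + dist)
```

All function and parameter names here are illustrative; dense shaping of this kind is a common way to speed up learning in sparse-reward 3D tasks.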
<h3 id="bibtex">BibTeX</h3>
<p style="text-align:left">
<a href="/assets/citations/farrapo2022drleague.bib">Download</a>
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@InProceedings{farrapo2022drleague,
title = {DRLeague: a Novel 3D Environment for Training Reinforcement Learning Agents},
author = {Farrapo, Hyuan Peixoto and F\'{e}rrer Filho, R\^{o}mulo Freire and Maia, Jos\'{e} Gilvan Rodrigues and Serafim, Paulo Bruno Sousa},
booktitle = {Proceedings of the XXI Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)},
pages = {1--6},
year = {2022}
}
</code></pre></div></div>Paulo Bruno SerafimXXI Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)Gym Hero: A Research Environment for Reinforcement Learning Agents in Rhythm Games2021-10-19T00:00:00+00:002021-10-19T00:00:00+00:00https://paulobruno.github.io/publication/SBGames-gym-hero<p><em>XX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)</em></p>
<p><a href="https://romulofff.github.io/">Rômulo Freire Férrer Filho</a><sup>1</sup>
<a href="http://www.lia.ufc.br/~yuri/">Yuri Lenon Barbosa Nogueira</a><sup>2</sup>
<a href="http://www.lia.ufc.br/~cvidal/">Creto Augusto Vidal</a><sup>2</sup><br />
<a href="http://www.lia.ufc.br/~joaquimb/">Joaquim Bento Cavalcante Neto</a><sup>2</sup>
<a href="https://paulobruno.github.io">Paulo Bruno de Sousa Serafim</a><sup>3</sup></p>
<p style="font-size:0.7em">
<sup>1</sup>Teleinformatics Engineering Department (DETI), Federal University of Ceará (UFC)<br />
<sup>2</sup>Department of Computing (DC), Federal University of Ceará (UFC)
<sup>3</sup>Instituto Atlântico
</p>
<p><img src="/assets/images/gymhero.jpg" alt="Gym Hero" /></p>
<hr />
<p>Paper: [<a href="https://www.sbgames.org/proceedings2021/ComputacaoFull/217884.pdf">PDF</a>]
Page: [<a href="https://ieeexplore.ieee.org/document/9637691">IEEE</a>]</p>
<h3 id="abstract">Abstract</h3>
<p style="text-align:justify;">
This work presents a Reinforcement Learning environment, called Gym Hero, based on the game Guitar Hero. It consists of a similar game implementation, developed with the PyGame library, that offers four difficulty levels and can randomly generate tracks. On top of the game, we implemented a Gym environment to train and evaluate Reinforcement Learning agents. To assess the environment's capacity as a suitable learning tool, we ran a set of experiments training three autonomous agents with Deep Reinforcement Learning. Each agent was trained on a different level using Deep Q-Networks, a technique that combines Reinforcement Learning with Deep Neural Networks. The network's only input is the raw pixels of the screen. We show that the agents were capable of learning the behaviors expected to play the game. The obtained results validate the proposed environment as capable of evaluating autonomous agents on Reinforcement Learning tasks.
</p>
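To illustrate the Gym-style interface such an environment exposes (reset/step returning observation, reward, done flag, and info), here is a minimal self-contained sketch of a rhythm environment with random track generation. All names and rules are illustrative and do not reproduce Gym Hero's actual implementation, which works from screen pixels:

```python
import random

class ToyRhythmEnv:
    """Minimal Gym-style sketch of a rhythm game: at each step one
    of `n_lanes` notes arrives and the agent must press the matching
    lane. Illustrative only, not Gym Hero's implementation."""

    def __init__(self, n_lanes=4, track_len=32, seed=0):
        self.n_lanes = n_lanes
        self.track_len = track_len
        self.rng = random.Random(seed)

    def reset(self):
        # randomly generated note track, like Gym Hero's level generator
        self.track = [self.rng.randrange(self.n_lanes)
                      for _ in range(self.track_len)]
        self.t = 0
        return self.track[self.t]

    def step(self, action):
        # +1 for hitting the current note's lane, -1 otherwise
        reward = 1.0 if action == self.track[self.t] else -1.0
        self.t += 1
        done = self.t >= self.track_len
        obs = None if done else self.track[self.t]
        return obs, reward, done, {}
```

An agent that always presses the lane of the observed note scores one point per step, which is the behavior a trained agent should approach.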
<h3 id="video">Video</h3>
<p style="text-align:left;font-size:0.7em"><i>Presentation starts at 14:52</i></p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/eJRgUhP-88E" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<h3 id="bibtex">BibTeX</h3>
<p style="text-align:left">
<a href="/assets/citations/ferrer2021gymhero.bib">Download</a>
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@InProceedings{ferrer2021gymhero,
title = {Gym Hero: A Research Environment for Reinforcement Learning Agents in Rhythm Games},
author = {F\'{e}rrer Filho, R\^{o}mulo Freire and Nogueira, Yuri Lenon Barbosa and Vidal, Creto Augusto and Cavalcante-Neto, Joaquim Bento and Serafim, Paulo Bruno Sousa},
booktitle = {Proceedings of the XX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)},
pages = {87--96},
year = {2021},
doi = {10.1109/SBGames54170.2021.00020}
}
</code></pre></div></div>Paulo Bruno SerafimXX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)Assessing the Robustness of Deep Q-Network Agents to Changes on Game Object Textures2021-10-18T00:00:00+00:002021-10-18T00:00:00+00:00https://paulobruno.github.io/publication/SBGames-assessing-robustness<p><em>XX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)</em></p>
<p><a href="https://paulobruno.github.io">Paulo Bruno de Sousa Serafim</a><sup>1</sup>
<a href="http://www.lia.ufc.br/~yuri/">Yuri Lenon Barbosa Nogueira</a><sup>2</sup><br />
<a href="http://www.lia.ufc.br/~joaquimb/">Joaquim Bento Cavalcante Neto</a><sup>2</sup>
<a href="http://www.lia.ufc.br/~cvidal/">Creto Augusto Vidal</a><sup>2</sup></p>
<p style="font-size:0.7em">
<sup>1</sup>Instituto Atlântico
<sup>2</sup>Department of Computing (DC), Federal University of Ceará (UFC)<br />
</p>
<p><img src="/assets/images/assessing.jpg" alt="Assessing robustness" /></p>
<hr />
<p>Paper: [<a href="https://www.sbgames.org/proceedings2021/ComputacaoFull/217993.pdf">PDF</a>]
Page: [<a href="https://ieeexplore.ieee.org/document/9637695">IEEE</a>]</p>
<h3 id="abstract">Abstract</h3>
<p style="text-align:justify;">
The research in autonomous agents aspires to achieve Artificial General Intelligence, where agents, like humans, are able to understand concepts and learn how to solve tasks. We would like to observe this ability in game agents as well. Recent research on autonomous agents for game playing uses a combination of Deep Neural Networks and Reinforcement Learning algorithms. Commonly, the Neural Networks are vision-based models, usually Convolutional Neural Networks (CNN). However, those models can suffer performance degradation when dealing with different pixel patterns, an issue that also affects vision-based autonomous agents in games. Prior works have shown that CNN-based autonomous agents cannot reproduce the behavior learned in one scene when they are placed into a brand-new version with different textures. In this work, we evaluate whether the agents abstract high-level concepts, such as enemy, foreground, and background. Instead of testing the agent in a completely different scene, we designed two experiments based on slight changes. In the first experiment, we change only a subset of the game objects. In the second experiment, the agents play in an interpolated version of two scenes. Even when changing only a single game object texture, the agents are not guaranteed to behave well. We show that, depending on the training scenario, the agents are not robust enough to generalize a high-level concept of game objects.
</p>
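The second experiment places agents in an interpolation of two scenes. As a sketch of the underlying idea (linear pixel blending; the authors' exact interpolation procedure is described in the paper), two textures can be blended with a single mixing coefficient:

```python
def interpolate_textures(tex_a, tex_b, alpha):
    """Linearly blend two textures (nested lists of grayscale
    pixel values): alpha=0 gives scene A, alpha=1 gives scene B.
    A sketch of the interpolated-scene idea, not the authors'
    exact procedure."""
    return [[(1 - alpha) * a + alpha * b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(tex_a, tex_b)]
```

Sweeping `alpha` between 0 and 1 produces a family of test scenes that drift gradually away from the training textures.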
<h3 id="video">Video</h3>
<p style="text-align:left;font-size:0.7em"><i>Presentation starts at 49:04</i></p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/RmV6rUZQaeE" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<h3 id="bibtex">BibTeX</h3>
<p style="text-align:left">
<a href="/assets/citations/serafim2021assessing.bib">Download</a>
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@InProceedings{serafim2021assessing,
title = {Assessing the Robustness of Deep Q-Network Agents to Changes on Game Object Textures},
author = {Serafim, Paulo Bruno Sousa and Nogueira, Yuri Lenon Barbosa and Cavalcante-Neto, Joaquim Bento and Vidal, Creto Augusto},
booktitle = {Proceedings of the XX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)},
pages = {19--28},
year = {2021},
doi = {10.1109/SBGames54170.2021.00013}
}
</code></pre></div></div>Paulo Bruno SerafimXX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)Robust Fingerprint Singular Point Detection using a Single-Stage CNN for Object Detection2021-06-03T00:00:00+00:002021-06-03T00:00:00+00:00https://paulobruno.github.io/publication/IWSSIP-robust-fingerprint<p><em>28th International Conference on Systems, Signals and Image Processing (IWSSIP)</em></p>
<p><a href="https://www.linkedin.com/in/lucasfernandes42/">Lucas de Sousa Fernandes</a><sup>1</sup>
<a href="https://www.linkedin.com/in/joaopedrobernardino/">João Pedro Bernardino Andrade</a><sup>1</sup>
<a href="https://www.linkedin.com/in/leonardo-ferreira-da-costa-05a978136">Leonardo Ferreira da Costa</a><sup>1</sup><br />
<a href="https://paulobruno.github.io">Paulo Bruno de Sousa Serafim</a><sup>1</sup>
<a href="https://cc.ufc.br/curso/corpo-docente/pauloalr/">Paulo Antonio Leal Rego</a><sup>1,3</sup>
<a href="https://scholar.google.com.br/citations?user=gnTTsAYAAAAJ&hl=en">José Gilvan Rodrigues Maia</a><sup>1,2</sup></p>
<p style="font-size:0.7em">
<sup>1</sup>Federal University of Ceará (UFC)
<sup>2</sup>Virtual UFC Institute<br />
<sup>3</sup>Group of Computer Networks, Software Engineering and Systems (GREat)
</p>
<p><img src="/assets/images/iwssip-2021-thumb.jpg" alt="IWSSIP 2021" /></p>
<hr />
<p>Slides: [<a href="https://d3smihljt9218e.cloudfront.net/lecture/22342/slideshow/a4c0568dee9f6e6bd1f914d315d7a1ed.pdf">PDF</a>]</p>
<h3 id="abstract">Abstract</h3>
<p style="text-align:justify;">
An Automated Fingerprint Identification System (AFIS) is the cornerstone of many modern identity-driven applications, ranging from device authentication and law enforcement to security and border control. As urban populations grow and services become increasingly digital, so does the demand for more effective and efficient fingerprint recognition systems. Singular Points (SP), such as core and delta, are important landmarks that help to tackle this challenge. This paper proposes and evaluates an effective approach to SP detection based on a single-stage deep convolutional neural network model for object detection. We show that a YOLOv4 detector with customized output layers is effective for handling core and delta patterns as patches in fingerprint images, using their centers as coordinates. Experiments were carried out on the challenging SPD2010 dataset to evaluate the proposed SP detector under different configurations. The best result is 60.34% of correctly detected fingerprints. In particular, compared to state-of-the-art methods, our approach achieves an improvement of up to 12% in correct detections, 8% in core detection rate, and 10% in delta detection rate. Core and delta miss rates are also reduced by 8% and 10%, respectively.
</p>
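Detection rates of this kind are typically computed by matching predicted singular-point coordinates to ground truth within a pixel tolerance. The sketch below illustrates one such greedy one-to-one matching; the 20-pixel tolerance and the matching rule are assumptions for illustration, not SPD2010's official protocol:

```python
import math

def detection_rate(predicted, ground_truth, tol=20.0):
    """Fraction of ground-truth singular points matched by a
    predicted point within `tol` pixels, using greedy one-to-one
    matching. Tolerance and matching rule are illustrative
    assumptions, not the SPD2010 evaluation protocol."""
    remaining = list(predicted)
    hits = 0
    for gt in ground_truth:
        # closest unused prediction to this ground-truth point
        best = min(remaining, key=lambda p: math.dist(p, gt), default=None)
        if best is not None and math.dist(best, gt) <= tol:
            remaining.remove(best)
            hits += 1
    return hits / len(ground_truth) if ground_truth else 1.0
```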
<h3 id="bibtex">BibTeX</h3>
<p style="text-align:left">
<a href="/assets/citations/fernandes2021robust.bib">Download</a>
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@InProceedings{fernandes2021robust,
title = {Robust Fingerprint Singular Point Detection using a Single-Stage CNN for Object Detection},
author = {Fernandes, Lucas de Sousa and
Andrade, Jo{\~{a}}o Pedro Bernardino and
Costa, Leonardo Ferreira and
Serafim, Paulo Bruno Sousa and
Rego, Paulo Antonio Leal and
Maia, Jos\'{e} Gilvan Rodrigues},
booktitle = {28th International Conference on Systems, Signals and Image Processing (IWSSIP)},
pages = {1--12},
year = {2021}
}
</code></pre></div></div>Paulo Bruno Serafim28th International Conference on Systems, Signals and Image Processing (IWSSIP)Investigating Deep Q-Network Agent Sensibility to Texture Changes on FPS Games2020-11-10T00:00:00+00:002020-11-10T00:00:00+00:00https://paulobruno.github.io/publication/SBGames-investigating-deep<p><em>XIX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)</em></p>
<p><a href="https://paulobruno.github.io">Paulo Bruno de Sousa Serafim</a><sup>1</sup>
<a href="http://www.lia.ufc.br/~yuri/">Yuri Lenon Barbosa Nogueira</a><sup>2</sup>
<a href="http://www.lia.ufc.br/~cvidal/">Creto Augusto Vidal</a><sup>2</sup><br />
<a href="http://www.lia.ufc.br/~joaquimb/">Joaquim Bento Cavalcante Neto</a><sup>2</sup>
<a href="https://romulofff.github.io/">Rômulo Freire Férrer Filho</a><sup>3</sup></p>
<p style="font-size:0.7em">
<sup>1</sup>Instituto Atlântico
<sup>2</sup>Department of Computing (DC), Federal University of Ceará (UFC)<br />
<sup>3</sup>Teleinformatics Engineering Department (DETI), Federal University of Ceará (UFC)
</p>
<p><img src="/assets/images/investigating.jpg" alt="Agents' sensibility" /></p>
<hr />
<p>Paper: [<a href="https://www.sbgames.org/proceedings2020/ComputacaoFull/209515.pdf">PDF</a>]
Page: [<a href="https://ieeexplore.ieee.org/document/9291626">IEEE</a>]</p>
<h3 id="abstract">Abstract</h3>
<p style="text-align:justify;">
Graphical updates are very common in modern digital games. For instance, PC game versions usually receive higher-resolution textures after some time. This can be a problem for autonomous agents trained to play a game using Convolutional Neural Networks: these agents use the pixels of the screen as inputs, and changing them can harm their performance. In this work, we evaluate agents' sensitivity to texture changes. The agents are trained to play a First-Person Shooter game and are then presented with different versions of the same scenario, in which the only difference among them is the textures. As the testbed, we use a ViZDoom scenario with a static monster that the agent must kill. Four agents are trained using Deep Q-Networks in four different scenarios. Then, every agent is tested in all four scenarios. We show that although every agent can learn the behaviors to win the game when playing the version in which it was trained, they cannot generalize to all other versions. Only in one case did an agent perform well in a different scenario. Most of the time, the agent moved randomly or just stood still and shot continuously, indicating that it could not understand the current screen. Even when the background textures were kept the same, the agent could not identify the enemy. Thus, to ensure proper behavior, an agent needs to be retrained not only when the problem changes, but also when only the visual aspects of the problem are modified.
</p>
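The evaluation above amounts to a train-scenario by test-scenario score matrix. A minimal sketch of that cross-evaluation loop, with the scoring function left as a user-supplied callback (the names here are illustrative, not the paper's code):

```python
def cross_evaluate(agents, scenarios, evaluate):
    """Build the train-scenario x test-scenario score matrix used to
    probe texture sensitivity: agents[i] was trained on scenarios[i],
    and entry [i][j] is its score on scenarios[j].  `evaluate(agent,
    scenario)` is a user-supplied scoring callback."""
    return [[evaluate(agent, s) for s in scenarios] for agent in agents]
```

A robust agent would produce a matrix with uniformly high values; the paper's results instead show high scores concentrated on the diagonal, where train and test textures match.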
<h3 id="video">Video</h3>
<p style="text-align:left;font-size:0.7em"><i>Presentation starts at 33:34</i></p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/Zp1_KQdWSI0" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<h3 id="bibtex">BibTeX</h3>
<p style="text-align:left">
<a href="/assets/citations/serafim2020investigating.bib">Download</a>
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@InProceedings{serafim2020investigating,
title = {Investigating Deep Q-Network Agent Sensibility to Texture Changes on {FPS} Games},
author = {Serafim, Paulo Bruno Sousa and Nogueira, Yuri Lenon Barbosa and Vidal, Creto Augusto and Cavalcante-Neto, Joaquim Bento and F\'{e}rrer Filho, R\^{o}mulo Freire},
booktitle = {Proceedings of the XIX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)},
pages = {1--9},
year = {2020},
issn = {2179-2259},
doi = {10.1109/SBGames51465.2020.00025}
}
</code></pre></div></div>Paulo Bruno SerafimXIX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)Deep Reinforcement Learning em Ambientes Virtuais2020-11-09T00:00:00+00:002020-11-09T00:00:00+00:00https://paulobruno.github.io/publication/SVR-drl-ambientes-virtuais<p><em>XXII Symposium on Virtual and Augmented Reality (SVR) - Pre-Symposium</em></p>
<p><a href="https://paulobruno.github.io">Paulo Bruno de Sousa Serafim</a>
<a href="http://www.lia.ufc.br/~yuri/">Yuri Lenon Barbosa Nogueira</a><br />
<a href="http://www.lia.ufc.br/~joaquimb/">Joaquim Bento Cavalcante Neto</a>
<a href="http://www.lia.ufc.br/~cvidal/">Creto Augusto Vidal</a></p>
<p style="font-size:0.7em">
Department of Computing (DC), Federal University of Ceará (UFC)
</p>
<p><img src="/assets/images/ambientes.jpg" alt="Ambientes virtuais DRL" /></p>
<hr />
<p>Book: [<a href="https://drive.google.com/file/d/107FzdhWwz5N0Cjdd9WgSPlEadGcyLdEg/view">PDF</a>] [<a href="http://rvra.esemd.org/">WebSite</a>]</p>
<h3 id="abstract">Abstract</h3>
<p style="text-align:justify;">
This chapter presents fundamental concepts of Reinforcement Learning and its use in combination with Deep Neural Networks, known as Deep Reinforcement Learning. It also presents several tools that can be used to develop models of agents immersed in virtual environments.
</p>
<h3 id="video">Video</h3>
<p style="text-align:left;font-size:0.7em"><i>Presentation starts at 1:32:45</i></p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/LU-LJUo6fyA" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<h3 id="bibtex">BibTeX</h3>
<p style="text-align:left">
<a href="/assets/citations/serafim2020deep.bib">Download</a>
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@InCollection{serafim2020deep,
title = {Deep Reinforcement Learning em Ambientes Virtuais},
author = {Serafim, Paulo Bruno Sousa and Nogueira, Yuri Lenon Barbosa and Cavalcante-Neto, Joaquim Bento and Vidal, Creto Augusto},
booktitle = {Introdução a Realidade Virtual e Aumentada},
editor = {Tori, Romero and Hounsell, Marcelo Silva and Corr\^{e}a, Cl\'{e}ber Gimenez and Nunes, Eunice Pereira Santos},
publisher = {Sociedade Brasileira de Computação - SBC},
year = {2020},
edition = {3},
chapter = {20},
pages = {423--436},
url = {http://rvra.esemd.org/}
}
</code></pre></div></div>Paulo Bruno SerafimXXII Symposium on Virtual and Augmented Reality (SVR) - Pre-SymposiumAutonomous Foraging with SARSA-based Deep Reinforcement Learning2020-11-08T00:00:00+00:002020-11-08T00:00:00+00:00https://paulobruno.github.io/publication/SVR-autonomous-foraging
<p><em>XXII Symposium on Virtual and Augmented Reality (SVR)</em></p>
<p><a href="https://www.linkedin.com/in/anderson-oliveira-b65099133/">Anderson Oliveira Mesquita</a><sup>1</sup>
<a href="http://www.lia.ufc.br/~yuri/">Yuri Lenon Barbosa Nogueira</a><sup>1</sup>
<a href="http://www.lia.ufc.br/~cvidal/">Creto Augusto Vidal</a><sup>1</sup><br />
<a href="http://www.lia.ufc.br/~joaquimb/">Joaquim Bento Cavalcante Neto</a><sup>1</sup>
<a href="https://paulobruno.github.io">Paulo Bruno de Sousa Serafim</a><sup>2</sup></p>
<p style="font-size:0.7em">
<sup>1</sup>Department of Computing (DC), Federal University of Ceará (UFC)<br />
<sup>2</sup>Instituto Atlântico
</p>
<p><img src="/assets/images/autonomous_foraging.jpg" alt="Autonomous foraging" /></p>
<hr />
<p>Page: [<a href="https://ieeexplore.ieee.org/document/9262697">IEEE</a>]</p>
<h3 id="abstract">Abstract</h3>
<p style="text-align:justify;">
This work proposes an autonomous system capable of foraging in an environment with food and poisons distributed throughout a scenario. We use a Deep Learning framework to process color images that simulate the agent's vision. The foraging task is modeled as a reinforcement learning problem, in which an input of raw pixels is processed by a convolutional neural network, resulting in a set of actions. An algorithm based on SARSA was used. During training, the agent selects actions according to a softmax probability distribution. The objective of this work is to present an agent capable of searching for food and distinguishing it from poisons through continuous learning, without help or external intervention from humans. The experiments show that the agent is able to distinguish food from poisons without hints or markings in its vision input. This highlights the advantages of combining Deep Learning with reinforcement learning for the foraging problem. The results of this work form an initial basis for understanding the relationship among autonomy, cognition, and perception in artificial agents.
</p>
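The two core ingredients named above, softmax action selection and the SARSA update, can be sketched in their tabular form; the paper's deep variant replaces the table with a convolutional network over pixels, so this is a simplified stand-in, not the authors' implementation:

```python
import math
import random

def softmax_probs(q_values, temperature=1.0):
    """Softmax (Boltzmann) action-selection probabilities, as used
    by the agent during training: higher-valued actions are chosen
    more often, but exploration never fully stops."""
    exps = [math.exp(q / temperature) for q in q_values]
    total = sum(exps)
    return [e / total for e in exps]

def sarsa_update(q, s, a, r, s2, a2, alpha=0.1, gamma=0.99):
    """One tabular SARSA step:
    Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a)),
    using the action a' actually selected in s' (on-policy)."""
    q[s][a] += alpha * (r + gamma * q[s2][a2] - q[s][a])
```

Sampling the next action from `softmax_probs` and then applying `sarsa_update` with that same action is what makes SARSA on-policy, unlike Q-learning's max over next actions.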
<h3 id="video">Video</h3>
<p style="text-align:left;font-size:0.7em"><i>Presentation starts at 1:47</i></p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/RwAUlDVUEhw" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<h3 id="bibtex">BibTeX</h3>
<p style="text-align:left">
<a href="/assets/citations/mesquita2020autonomous.bib">Download</a>
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@InProceedings{mesquita2020autonomous,
title = {Autonomous Foraging with {SARSA}-based Deep Reinforcement Learning},
author = {Mesquita, Anderson Oliveira and Nogueira, Yuri Lenon Barbosa and Vidal, Creto Augusto and Cavalcante-Neto, Joaquim Bento and Serafim, Paulo Bruno Sousa},
booktitle = {Proceedings of the XXII Symposium on Virtual and Augmented Reality (SVR)},
pages = {1--9},
year = {2020},
doi = {10.1109/SVR51698.2020.00070}
}
</code></pre></div></div>Paulo Bruno SerafimXXII Symposium on Virtual and Augmented Reality (SVR)Simplificando o Balanceamento de Atributos em RPGs Eletrônicos2020-11-07T00:00:00+00:002020-11-07T00:00:00+00:00https://paulobruno.github.io/publication/SBGames-simplificando-balanceamento<p><em>XIX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)</em><br />
<strong>Best Paper Award - Third Place</strong></p>
<p><a href="https://github.com/magnomont12">Alexandre Magno Monteiro Santos</a><sup>1</sup>
<a href="https://paulobruno.github.io">Paulo Bruno de Sousa Serafim</a><sup>2</sup>
<a href="https://www.researchgate.net/profile/Artur_Franco2">Artur de Oliveira da Rocha Franco</a><sup>2</sup><br />
<a href="https://www.researchgate.net/profile/Rafael_Carmo6">Rafael Augusto Ferreira do Carmo</a><sup>1</sup>
<a href="https://www.researchgate.net/profile/Jose_Maia3">José Gilvan Rodrigues Maia</a><sup>1</sup></p>
<p style="font-size:0.7em">
<sup>1</sup>Instituto UFC Virtual, Universidade Federal do Ceará<br />
<sup>2</sup>Departamento de Computação, Universidade Federal do Ceará
</p>
<p><img src="/assets/images/balanceamento.png" alt="Flappy Bird" /></p>
<hr />
<p>Paper: [<a href="https://www.sbgames.org/proceedings2020/ArtesDesignFull/209491.pdf">PDF</a>]</p>
<h3 id="abstract">Abstract</h3>
<p style="text-align:justify;">
Creating RPG games involves several steps related to their systems. In the combat system, defining character abilities is an important task that often requires manual balancing by the game designer. This work proposes a process for automatically balancing character attributes in RPG games, simplifying character creation and reducing the time spent on balancing. To achieve this, a genetic algorithm identifies the parameters of a player's growth curve so that the player reaches a predetermined win rate against an enemy previously created by the game designer. The proposed balancing tool was able to generate the character's attributes for each level within the defined error margin. The initial level curves were then generated and smoothed to produce the final curves. An experimental evaluation used ten levels of an enemy with a desired win rate of 80% and an error margin of 5%. These results suggest that the genetic algorithm was effective at generating level curves, making it suitable as an automatic balancing process to assist the game designer.
</p>
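The genetic-algorithm balancing loop described above can be sketched as follows. This is an illustrative toy, not the paper's tool: it searches a single growth-curve parameter until a caller-supplied win-rate simulation hits the designer's target within the error margin; population size, parameter bounds, and mutation noise are assumed values:

```python
import random

def evolve_growth_parameter(win_rate, target=0.80, margin=0.05,
                            pop_size=30, generations=60, seed=1):
    """Tiny genetic algorithm in the spirit of the paper: search one
    growth-curve parameter k in [0, 10] until the simulated win rate
    reaches the designer's target within the error margin.
    `win_rate(k)` is a caller-supplied simulation; all other choices
    here are illustrative."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    fitness = lambda k: -abs(win_rate(k) - target)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if abs(win_rate(pop[0]) - target) <= margin:
            return pop[0]  # early stop: target win rate reached
        elite = pop[:pop_size // 2]
        # offspring: average two elite parents plus Gaussian mutation
        pop = elite + [
            min(10.0, max(0.0,
                (rng.choice(elite) + rng.choice(elite)) / 2
                + rng.gauss(0.0, 0.3)))
            for _ in range(pop_size - len(elite))
        ]
    pop.sort(key=fitness, reverse=True)
    return pop[0]
```

In the paper the fitness comes from simulated combats against the designer-made enemy at each level, with the target win rate of 80% and a 5% margin.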
<h3 id="video">Video</h3>
<p style="text-align:left;font-size:0.7em"><i>Presentation starts at 4:07</i></p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/EqzOA6Ywd5s" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<h3 id="bibtex">BibTeX</h3>
<p style="text-align:left">
<a href="/assets/citations/santos2020simplificando.bib">Download</a>
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@InProceedings{santos2020simplificando,
title = {Simplificando o Balanceamento de Atributos em {RPG}s Eletr\^onicos},
author = {Santos, Alexandre Magno Monteiro and
Serafim, Paulo Bruno Sousa and
Franco, Artur Oliveira Rocha and
Carmo, Rafael Augusto Ferreira and
Maia, Jos\'{e} Gilvan Rodrigues},
booktitle = {Proceedings of the XIX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)},
pages = {1--9},
year = {2020},
issn = {2179-2259}
}
</code></pre></div></div>Paulo Bruno SerafimXIX Brazilian Symposium on Computer Games and Digital Entertainment (SBGames) Best Paper Award - Third PlaceDeep Reinforcement Learning: Today’s AIs that beat humans2020-08-29T00:00:00+00:002020-08-29T00:00:00+00:00https://paulobruno.github.io/talk/todays-ais-beat-humans<p><a href="https://paulobruno.github.io">Paulo Bruno de Sousa Serafim</a></p>
<p><strong>CorongaMeet 2.0 - Data Peste (2020)</strong></p>
<div style="position:relative;width:100%;overflow:hidden;padding-top:59.27%">
<iframe style="position:absolute;top:0;left:0;bottom:0;right:0;width:100%;height:100%;border:none" src="https://docs.google.com/presentation/d/e/2PACX-1vRF6ahwF2zg31HyHh5zXoH86UH4HLQ16qud7mEM7esM361JEd1zdWUUkJ7JOrcJVQdTPaHn4SMV30dP/embed?start=true&loop=false&delayms=30000" frameborder="0" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>
</div>
<p>Slides: [<a href="https://docs.google.com/presentation/d/e/2PACX-1vRF6ahwF2zg31HyHh5zXoH86UH4HLQ16qud7mEM7esM361JEd1zdWUUkJ7JOrcJVQdTPaHn4SMV30dP/pub?start=true&loop=false&delayms=30000">Low-Res</a>] [<a href="https://docs.google.com/presentation/d/e/2PACX-1vRU1VBC4xhLgpTJ-bjNe-K6JCl8pcYGyPDvo3xLTmev6P4mWhWhbsCYKUq-cXmGo2KsNb84i3H2A7Yl/pub?start=true&loop=false&delayms=30000">High-Res</a>] [<a href="/assets/pdfs/DrlTodaysAIsBeatHumans.pdf">PDF</a>]</p>Paulo Bruno SerafimPaulo Bruno de Sousa SerafimAvanços tecnológicos na impressão 3D2020-08-20T00:00:00+00:002020-08-20T00:00:00+00:00https://paulobruno.github.io/talk/avancos-tecnologicos-impressao-3d<p><a href="https://paulobruno.github.io">Paulo Bruno de Sousa Serafim</a></p>
<p><strong>CyberSenge 2020 - UNILAB</strong></p>
<div style="position:relative;width:100%;overflow:hidden;padding-top:59.27%">
<iframe style="position:absolute;top:0;left:0;bottom:0;right:0;width:100%;height:100%;border:none" src="https://docs.google.com/presentation/d/e/2PACX-1vSnGLM0m_1I2JMks2HxEThDkgDIVo3xqCSdrN2OUbTQ0eoN1-oyfUQ8rI0wxuSBOA5FJwG-A_sP59Dp/embed?start=true&loop=false&delayms=30000" frameborder="0" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>
</div>
<p>Slides <em>(PT-BR)</em>: [<a href="https://docs.google.com/presentation/d/e/2PACX-1vSnGLM0m_1I2JMks2HxEThDkgDIVo3xqCSdrN2OUbTQ0eoN1-oyfUQ8rI0wxuSBOA5FJwG-A_sP59Dp/pub?start=true&loop=false&delayms=30000">Online</a>] [<a href="/assets/pdfs/AvancosTecnologicosImpressão3d.pdf">PDF</a>]</p>Paulo Bruno SerafimPaulo Bruno de Sousa Serafim