Wednesday, November 20 • 12:05 - 12:30
"Did you hear that?" Learning to play video games from audio cues


The aim of this talk is to introduce an exciting new area of research and to seek interest from audio developers in joining the project and offering technical expertise on the subject. This work has recently been accepted for publication at the IEEE Conference on Games; the research paper is available here.

Talk abstract:

In this talk I will describe work in progress on an interesting direction of game-playing AI research: learning to play video games from audio cues only. I will highlight that current state-of-the-art techniques rely on either visual or symbolic information to interpret their environment, whereas humans benefit from processing many other types of sensory input. Sounds and music are key elements in games, affecting not only player experience but, in certain scenarios, gameplay itself. Sounds within games can alert the player to a nearby hazard (especially in darkness), inform them that they have collected an item, or provide clues for solving certain puzzles. This additional sensory channel is distinct from traditional visual information and allows for many new gameplay possibilities.

Audio design in games also raises some important challenges when it comes to inclusivity and accessibility. Players who are partially or completely blind rely on audio, along with limited haptic feedback, to play many video games effectively. Including audio as well as visual information within a game can make completing it much more feasible for visually impaired players. Conversely, individuals with hearing difficulties find it hard to play games that rely heavily on sound. Intelligent agents can help evaluate games for players with disabilities: if an agent can successfully play a game using only audio input, or only visual input, this could help validate the game for the corresponding player demographics.


Raluca Gaina

PhD Student, Queen Mary University of London
I am a research student interested in Artificial Intelligence for game playing. I'm looking to have conversations about game-playing AI using audio input (as opposed to, or complementing, traditional visual or symbolic input), with regard to accessibility in games.

Wednesday November 20, 2019 12:05 - 12:30 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB