The aim of this talk is to introduce an exciting new area of research and to invite audio developers to join the project and offer technical expertise on the subject. This work has recently been accepted for publication at the IEEE Conference on Games; the research paper is available here.
Talk abstract: In this talk I will describe work in progress on an interesting direction of game-playing AI research: learning to play video games from audio cues only. I will highlight that current state-of-the-art techniques rely on either visual or symbolic information to interpret their environment, whereas humans benefit from processing many other types of sensory input. Sounds and music are key elements in games, which not only affect player experience but, in certain scenarios, gameplay itself. Sounds within games can alert the player to a nearby hazard (especially in darkness), inform them that they have collected an item, or provide clues for solving certain puzzles. This additional sensory channel is distinct from traditional visual information and opens up many new gameplay possibilities.
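To make the idea of audio-only observations concrete, here is a minimal sketch of how a game's raw audio buffer might be turned into a feature an agent can consume. This is purely illustrative: the `audio_observation` function, the log-magnitude spectrogram representation, and all parameter values are assumptions for the example, not the features used in the paper.

```python
import numpy as np

def audio_observation(samples, frame_size=512, hop=256):
    """Convert a raw mono audio buffer into a log-magnitude
    spectrogram: one plausible observation for an audio-only agent.
    (Illustrative sketch; the paper's actual features may differ.)"""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(samples) - frame_size) // hop
    # Slice the buffer into overlapping windowed frames.
    frames = np.stack([
        samples[i * hop : i * hop + frame_size] * window
        for i in range(n_frames)
    ])
    # Magnitude spectrum per frame, compressed with log1p.
    spectrum = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(spectrum)  # shape: (n_frames, frame_size // 2 + 1)

# Example: one second of a 440 Hz tone sampled at 16 kHz.
t = np.linspace(0, 1, 16000, endpoint=False)
obs = audio_observation(np.sin(2 * np.pi * 440 * t))
print(obs.shape)  # (61, 257)
```

An agent would receive a stack of such spectrogram frames in place of screen pixels, so any standard learning architecture that accepts image-like input could, in principle, be reused unchanged.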
Audio design in games also raises important challenges for inclusivity and accessibility. People who are partially or completely blind rely almost exclusively on audio, along with limited haptic feedback, to play many video games effectively. Including audio as well as visual information within a game can make completing it far more feasible for visually impaired players. Conversely, individuals with hearing difficulties find it hard to play games that rely heavily on sound. Intelligent agents can help evaluate games for players with disabilities: if an agent can successfully play a game using only audio or only visual input, this could help validate the game for the corresponding player demographics.