My current focus is empirical research toward affect-based, deep-learning generative music systems for videogames. This work involves quantitative and qualitative research into the current and potential roles of music in games, the development of new music-generation systems that integrate seamlessly into games, and quantitative and qualitative evaluation of those systems.

We have found empirical support for the hypothesis that music which adapts to a game's intended emotional impact heightens that impact. A paper demonstrating this result is currently in the process of publication.