Recent instruments measuring continuous self-reported emotional responses to music have tended to use dimensional rating-scale models of emotion, such as valence (happy to sad). However, numerous retrospective studies of emotion in music use checklist-style responses, usually in the form of emotion words (such as happy, angry, sad…) or facial expressions. A response interface based on six simple sketch-style emotion faces arranged in a clock-like distribution was developed with the aim of allowing participants to quickly and easily rate emotions in music continuously as the music unfolded. We tested the interface using six extracts of music, one targeting each of the six faces: ‘Excited’ (at 1 o’clock), ‘Happy’ (3), ‘Calm’ (5), ‘Sad’ (7), ‘Scared’ (9) and ‘Angry’ (11). Thirty participants rated the emotion expressed by these excerpts on our ‘emotion-face-clock’. By examining how continuous category selections (votes) changed over time, we were able to show that (1) more than one emotion face could be expressed by music at the same time, (2) the emotion face that best portrayed the emotion the music conveyed could change over time, and (3) the change could be attributed to changes in musical structure. Implications for research on orientation time and mixed emotions are discussed.
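As a loose illustration only (not taken from the paper), the sketch below shows one way the described design might be coded: placing the six faces at their stated clock-hour positions and tallying continuous category selections (votes) per time window so that simultaneous and changing emotion choices remain visible. The names `FACES`, `face_position`, and `vote_profile`, and the assumed `(time, label)` data layout, are all hypothetical.

```python
import math
from collections import Counter

# Assumed layout from the description above: six faces at the odd
# clock-hour positions (1, 3, 5, 7, 9, 11 o'clock).
FACES = {
    "Excited": 1, "Happy": 3, "Calm": 5,
    "Sad": 7, "Scared": 9, "Angry": 11,
}

def face_position(hour: int, radius: float = 1.0) -> tuple[float, float]:
    """(x, y) position of a face at the given clock hour.

    12 o'clock points straight up; angles advance clockwise,
    30 degrees per hour, as on an ordinary clock face.
    """
    angle = math.radians(90 - 30 * hour)
    return radius * math.cos(angle), radius * math.sin(angle)

def vote_profile(selections, window: float = 1.0):
    """Tally category selections (votes) per time window.

    `selections` is an iterable of (time_seconds, face_label) samples,
    one per participant per sampling instant -- an assumed data format,
    not the authors' actual one.
    """
    bins: dict[int, Counter] = {}
    for t, face in selections:
        bins.setdefault(int(t // window), Counter())[face] += 1
    # Report every face chosen in each window with its vote count, so
    # mixed (simultaneous) emotions and changes over time stay visible.
    return {w: dict(c) for w, c in sorted(bins.items())}

if __name__ == "__main__":
    for face, hour in FACES.items():
        x, y = face_position(hour)
        print(f"{face:>8} at {hour:>2} o'clock -> ({x:+.2f}, {y:+.2f})")
    demo = [(0.2, "Calm"), (0.7, "Calm"), (1.1, "Happy"), (1.4, "Calm")]
    print(vote_profile(demo))
```

Keeping the full per-window vote counts, rather than only the single most-voted face, is what would let an analysis like the one described show more than one emotion face being expressed at the same time.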