This study explores the field of Music Emotion Recognition, a transdisciplinary perspective that investigates moods and musical emotions through data retrieval and various types of computational and analog analysis. Several questions are posed in order to present the main assumptions and emerging perspectives, which guide, at a methodological level, two experiments that illustrate the central approaches of this kind of system and show a series of practical applications. The results point to research possibilities suggesting that these systems are complementary alternatives for facing the challenges of music learning, music being one of the main cultural manifestations that shapes a person's emotional experience.
Music Emotion Recognition (MER) is a recent and evolving field of scientific research. It derives from two major fields: 1) Music Information Retrieval (MIR); and 2) Affective Computing (Calvo, D'Mello, Gratch and Kappas, 2014; Picard, 2000). Broadly speaking, it can be said that MER revolves around several ideas concerning the psychological understanding of the relationship between human affect and music, from which three central research ideas emerge: 1) emotion is perceived and identified during musical listening; 2) emotion is induced and experienced through musical listening, and differs from perceived emotion; and 3) emotion is transmitted through an explicit communicative intention (Yang and Chen, 2011; Yang, Dong and Li, 2018). As follows from the above, perceived emotions are more intersubjective and classifiable than experienced emotions, which are more intrasubjective, culturally rooted and memory-dependent (Aljanaki, 2016, pp. 22-23; Panda, Malheiro and Paiva, 2018a).
One of the central ideas of MER lies in the ability of automatic systems, fed with various data (musical signals) and variables (computational parameters), to determine which emotions, and of what kind, are perceived in musical compositions, and to model how each of the structural features of the music can produce certain characteristic reactions in listeners.
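The pipeline described above, extracting acoustic descriptors from a signal and mapping them to a perceived-emotion label, can be sketched minimally. The following is a hypothetical illustration, not any specific MER system: the two descriptors (RMS energy as a loudness proxy, zero-crossing rate as a brightness proxy), the thresholds and the label rules are all invented for the example; real systems learn such mappings from annotated corpora.

```python
import numpy as np

def extract_features(signal, sr):
    """Compute two simple descriptors often used in MER research:
    RMS energy (a loudness proxy) and zero-crossing rate
    (a rough brightness proxy)."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
    return {"rms": rms, "zcr": zcr}

def classify_emotion(features):
    """Toy rule-based stand-in for a learned classifier:
    loud + bright -> 'energetic', loud + dark -> 'tense',
    quiet + bright -> 'serene', quiet + dark -> 'sad'."""
    loud = features["rms"] > 0.1
    bright = features["zcr"] > 0.05
    if loud and bright:
        return "energetic"
    if loud:
        return "tense"
    if bright:
        return "serene"
    return "sad"

# Two synthetic one-second test tones in place of real recordings.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
loud_bright = 0.5 * np.sin(2 * np.pi * 2000 * t)   # loud, high-frequency
quiet_dark = 0.05 * np.sin(2 * np.pi * 100 * t)    # quiet, low-frequency

print(classify_emotion(extract_features(loud_bright, sr)))  # energetic
print(classify_emotion(extract_features(quiet_dark, sr)))   # sad
```

In practice the hand-written rules would be replaced by a model trained on expert annotations, and the feature set would include the timbral, rhythmic and harmonic descriptors surveyed below.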
Graphic 1. Acoustic features and musical concepts
Source: elaborated from Aljanaki (2016, p. 52).
Table 1 presents the main musical concepts investigated in MER and the conventional features (algorithms) used by the community of specialists in this field. Studies reveal that people relate perceived and experienced mood states with labels and adjectives, although the temporal factor is essential with respect to their intensity and duration; for these reasons, at a methodological level in this context, emotions are assigned an attribute that measures their intensity on a given scale (Deng, Leung, Milani, and Chen, 2015).
Currently, MER models draw a specific distinction, inherited from the psychology of music, between the terms emotion and mood, a discussion that extends to the field of the psychology of music in general (Juslin, 2019, pp. 46-48). In this sense, emotions are conceived as brief (minutes or hours) and intense reactions to a specific stimulus, subject or object. In contrast, moods are longer-lasting emotional states (days, weeks or months), are less intense, and reflect an emotionality defined by a global affective inclination, because they do not depend on the reaction to a specific object or a contextual stimulus.
MER's perspectives draw on computer science and psychological music theory, within which there are at least three approaches:
• Categorical models based on Hevner’s (1936) initial proposal;
• The dimensional models designed by Russell (1980) and Thayer (1989); and
• The hybrid proposals that address the detection of musical-emotional variation (Music Emotion Variation Detection, MEVD).
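The dimensional models above describe emotions as points in a continuous space, typically Russell's valence-arousal plane, whose four quadrants are often collapsed into coarse categorical labels. The sketch below illustrates that quadrant mapping only; the specific label strings and the [-1, 1] scaling are assumptions for the example, not a standard taxonomy.

```python
def quadrant_label(valence, arousal):
    """Map a point in Russell-style valence-arousal space
    (both coordinates assumed in [-1, 1]) to one of four
    coarse emotion quadrants, as many dimensional MER systems do."""
    if valence >= 0 and arousal >= 0:
        return "happy/excited"    # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/tense"      # negative valence, high arousal
    if valence < 0:
        return "sad/depressed"    # negative valence, low arousal
    return "calm/relaxed"         # positive valence, low arousal

print(quadrant_label(0.8, 0.6))    # happy/excited
print(quadrant_label(-0.7, -0.5))  # sad/depressed
```

Hybrid (MEVD) approaches extend this idea by tracking the valence-arousal point over time, so that a single piece traces a trajectory through the plane rather than receiving one static label.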
Among the most common strategies are sets of music-emotion annotations made by specialist psychologists on extensive musical corpora, and classification through emotional labels or tags carried out by user communities on specific songs, supported by strategies such as games or interactive portals on the Internet and social networks dedicated to musical recommendation, a field linked to MER explorations (Deng et al., 2015; Yang et al., 2018).
For example, applications such as the Emotion Face Clock (Schubert, Ferguson, Farrar, Taylor and McPherson, 2013) and MoodSwings (Speck, Schmidt, Morton and Kim, 2011), as well as contests for evaluating research models, optimization algorithms and empirical research, among which stand out the annual Music Information Retrieval Evaluation eXchange (MIREX), widely recognized in MIR evaluation, and the MediaEval Database for Emotional Analysis in Music (DEAM) workshop (Aljanaki, 2016), devote their efforts, from different perspectives, to the measurement and prediction of musical emotions and related processes.
Systems dedicated to the retrieval of data based on emotions, or Emotion-Based Retrieval Tools, make up an important field of MER research and practice. Some of these application strategies are music recommendation systems built on communities of music lovers, critics and musicians; the generation of automatic playlists driven by artificial intelligence; and musical categorization for the purposes of musical evaluation, consumption and criticism.
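At its core, an emotion-based retrieval or automatic-playlist system of the kind described above ranks a catalogue of emotion-annotated tracks by their closeness to a requested emotional target. The sketch below shows that core step under stated assumptions: the catalogue, the track names and the valence-arousal annotations are entirely hypothetical, and real systems would combine such distances with collaborative-filtering and user-profile signals.

```python
import math

# Hypothetical catalogue: each track carries a (valence, arousal)
# annotation in [-1, 1], e.g. produced by experts or crowd tagging.
catalogue = {
    "Track A": (0.9, 0.8),
    "Track B": (0.7, 0.6),
    "Track C": (-0.8, -0.6),
    "Track D": (-0.2, 0.9),
    "Track E": (0.1, -0.7),
}

def playlist_for(target, k=2):
    """Return the k tracks whose annotations lie closest (Euclidean
    distance) to the requested emotional target - the nearest-neighbour
    core of a simple emotion-based retrieval system."""
    return sorted(catalogue, key=lambda name: math.dist(catalogue[name], target))[:k]

print(playlist_for((0.8, 0.7)))          # ['Track A', 'Track B']
print(playlist_for((-0.8, -0.6), k=1))   # ['Track C']
```

A user asking for calm, positive music would pass a target in the low-arousal, positive-valence quadrant and receive the catalogue's nearest matches as a playlist.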