Emotion recognition and emotion-based classification of audio using a genetic algorithm – an optimized approach

Music information retrieval (MIR) is a broad research area that is attracting increasing attention from researchers as well as from the music development community. Music can be classified along many dimensions, such as genre, mood, instrument, and artist. Emotion-based (mood-based) music classification is also studied to understand the physiological and psychological effects of music on the human mood and body. Music classification has a number of applications, such as audio fingerprinting and copyright monitoring. This paper presents an optimized approach for detecting emotion from audio and classifying it into one of eight emotions. It also provides an overview of popular algorithms, models, and techniques used in mood-based music classification.
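
As an illustration of the kind of audio features such systems typically operate on, the sketch below extracts a small feature vector (MFCCs, spectral centroid, tempo) from an audio file. The librosa library, the specific feature set, and the file name are illustrative assumptions; the paper does not name a particular toolkit.

```python
# Minimal feature-extraction sketch for mood-based audio classification.
# Assumptions: librosa as the audio toolkit, MFCC/centroid/tempo as features.
import numpy as np
import librosa

def extract_features(path):
    """Return a small feature vector for one audio file."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)   # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()   # brightness
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)                    # rhythm / arousal cue
    return np.hstack([mfcc, centroid, np.atleast_1d(tempo)])

# features = extract_features("song.wav")   # hypothetical file name
```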

Different mood-based music classification methods are compared, and their relative advantages and disadvantages are discussed. The arousal-valence model is used for music emotion recognition. Pitfalls and limitations of existing systems are investigated, and a model is then proposed for optimal mood/emotion-based music classification. The proposed model aims to improve on existing systems by addressing shortcomings such as high computation time and low accuracy. A genetic algorithm is used for optimal feature selection, which reduces the average classification time on large datasets; a sketch of this idea follows.
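
The sketch below shows one common way to use a genetic algorithm for feature selection: a binary chromosome marks which features are kept, and cross-validated classifier accuracy serves as the fitness function. The placeholder data, the k-NN classifier, and all GA parameters are illustrative assumptions, not details taken from the paper.

```python
# Genetic-algorithm feature selection: binary chromosome per feature subset,
# fitness = cross-validated accuracy of a k-NN classifier on that subset.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))       # placeholder feature matrix (e.g. MFCCs, tempo, ...)
y = rng.integers(0, 8, size=200)     # placeholder labels for eight emotion classes

def fitness(mask):
    """Cross-validated accuracy using only the selected feature columns."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def evolve(pop_size=20, generations=15, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, X.shape[1])).astype(bool)
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection: keep the better of two random individuals.
        parents = pop[[max(rng.integers(0, pop_size, 2), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        # Single-point crossover between consecutive parents.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, X.shape[1])
            children[i, cut:], children[i + 1, cut:] = (
                parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        # Bit-flip mutation.
        children ^= rng.random(children.shape) < p_mut
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best_mask, best_acc = evolve()
print(f"selected {best_mask.sum()} of {X.shape[1]} features, CV accuracy {best_acc:.2f}")
```

Because the classifier is trained on the reduced feature subset, a good chromosome trades a small loss in accuracy for a large drop in dimensionality, which is the source of the reduced computation time discussed above.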
