Music-Emotion Data Distribution in 2-D feature space
Figure: all 1200 music clips, described by all 55 features, projected onto the first two canonical axes.

The music-emotion database currently contains 1200 clips, each labeled by more than 300 people using a modified Hevner checklist. Every clip is described by 55 musical features (covering dynamics, rhythm, pitch, harmony, and others), and the average number of emotion labels per clip is 1.88 (multi-label classification: heuristic criteria assign each clip either one dominant label or two overlapping labels). Eight single binary classifiers, each using a specific feature set chosen for high discriminability and interpretability, are still being tuned. A Support Vector Machine-based classifier built with cross-validation, together with precision/recall performance results, is under construction.
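As a rough sketch of the SVM-with-cross-validation setup described above: the real 1200-clip dataset is not public, so random stand-in data is used here, with shapes matching the description (1200 clips, 55 features) and one binary classifier per emotion class. The kernel and C value are assumptions, not the tuned settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 55))    # stand-in for the 55 musical features per clip
y = rng.integers(0, 2, size=1200)  # binary label for one emotion class (one-vs-rest)

# One of the 8 single binary classifiers; kernel/C are placeholder choices.
clf = SVC(kernel="rbf", C=1.0)
scores = cross_validate(clf, X, y, cv=5, scoring=("precision", "recall"))
print("precision: %.3f" % scores["test_precision"].mean())
print("recall:    %.3f" % scores["test_recall"].mean())
```

In practice this loop would be repeated once per emotion class, yielding the precision/recall table mentioned above.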
Issues to be addressed:
1. Feature selection is currently done by trial and error.
2. A more efficient model could be built by computing entropy with a decision tree.
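Point 2 above can be sketched as follows: fit a shallow decision tree with the entropy criterion and rank features by information gain, replacing trial-and-error selection. The data here is a synthetic stand-in (one feature is made informative on purpose), so the ranking is illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1200, 55))      # stand-in feature matrix
y = (X[:, 3] > 0).astype(int)        # make feature 3 informative so the tree can find it

# Entropy criterion = information-gain splits; depth kept small for interpretability.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(X, y)

# Rank features by their total entropy reduction across the tree's splits.
ranking = np.argsort(tree.feature_importances_)[::-1]
print("top features by information gain:", ranking[:5])
```

The top-ranked features could then feed the per-emotion SVM classifiers instead of hand-picked sets.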
More figures can be found in the PowerPoint file.
