This research project proposes to evaluate how well current MIR algorithms and methods scale to a large, real-world setting. It focuses on extending our results in music genre classification to the Million Song Dataset (MSD), a collection of one million tracks. So far, we have shown that the segmental approach, which considers fragments of the audio signal together with hybrid feature vectors and a multi-classifier system, achieves very promising results. The main aims of the project are: 1) to extend the segmental approach to the music tracks of the MSD, using segments delimited by note onsets or other discontinuities in the signal; 2) to evaluate incremental learning techniques for dealing with the large amount of data; 3) to extend the segmental approach to other tasks, such as automatic music tagging, artist recognition, and cover song recognition; 4) to evaluate the previously developed algorithms on the MSD and compare our findings with the state of the art. Funded under a CAPES/Fulbright call (edital).
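To illustrate the segmental classify-then-aggregate idea, the sketch below splits a signal at naive energy jumps (a stand-in for real onset detection) and labels the whole track by a majority vote over per-segment predictions. This is a minimal illustration, not the project's implementation: the function names, the energy-ratio onset heuristic, and the toy classifier are all assumptions for the example.

```python
from collections import Counter
import numpy as np

def naive_onset_segments(signal, frame=512, ratio=1.5):
    """Split a signal at naive frame-energy jumps (stand-in for onset detection)."""
    energies = [float(np.sum(signal[i:i + frame] ** 2))
                for i in range(0, len(signal) - frame, frame)]
    boundaries = [0]
    for k in range(1, len(energies)):
        # Mark a segment boundary where energy jumps sharply between frames.
        if energies[k] > ratio * (energies[k - 1] + 1e-12):
            boundaries.append(k * frame)
    boundaries.append(len(signal))
    return [signal[a:b] for a, b in zip(boundaries, boundaries[1:])]

def classify_track(segments, segment_classifier):
    """Label each segment, then take a majority vote for the whole track."""
    votes = Counter(segment_classifier(s) for s in segments)
    return votes.most_common(1)[0][0]

# Toy example: a quiet intro followed by two louder sections.
signal = np.concatenate([0.01 * np.ones(2048), np.ones(1024), 2 * np.ones(1024)])
segments = naive_onset_segments(signal)
# Hypothetical amplitude-based "genre" classifier, for demonstration only.
clf = lambda s: "rock" if np.mean(np.abs(s)) > 0.5 else "ambient"
label = classify_track(segments, clf)  # majority vote over 3 segments -> "rock"
```

In the actual project, the per-segment classifier would operate on hybrid feature vectors extracted from each onset-delimited fragment, and the aggregation step could be replaced by any multi-classifier combination rule.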