The task of music generation has long been considered one of the biggest challenges in the field of MIR. Previous work on music generation has tackled the task from different points of view, in other words, aiming at different applications. In this post, I'm going to list some notable works in music generation and conclude with the idea behind MidiNet.
Previous Work
WaveNet (Google DeepMind)
C-RNN-GAN (Olof Mogren)
A.I. Duet (Google Magenta)
DeepBach (Sony CSL, Flow Machines)
If we look into these works and summarize their applications, we can say that each of them addresses a sub-problem of music generation, such as generating harmonic sounds (WaveNet) or acting as a composer's assistant (DeepBach, A.I. Duet).
In the MidiNet project, however, we approach music generation from a scenario closer to how humans naturally learn music.
How do we learn music?
Think of a piano teacher trying to teach a student who has never seen or played a piano before and has no musical education. The teacher's first step might be to play some easy melodies, introduce basic music theory, and then let the student try.
Once the student has gained some intuition, the teacher might start to introduce more concepts, such as the relation between notes and chords, the emotion of music, or other fundamentals.
One day the student reaches an intermediate level, and it's time for some challenges, such as taking performance examinations and advancing by meeting higher standards.
There you go, that's the idea of MidiNet! Check out MidiNet-PART1.