Producer, music aficionado and tech entrepreneur Jun Inoue is CEO and the driving force behind Amadeus Code. We asked him a few key questions about creativity, artificial intelligence, and the elegance and challenge of turning machine learning into expressive sound. We’ll be posting his responses over the next few weeks, to introduce the world to what makes Amadeus Code stand out as an AI-driven musical idea generator.
Theorizing AI and musical connections is one thing, but building the engine to create these connections is another. How did you select the compositions that went into Amadeus Code’s dataset?
We looked closely at which songs Amadeus Code should learn from. The objective was to compose good songs, not to win a game, and we believed that comprehensively feeding it a vast number of tunes, mediocre songs included, was the wrong approach. Because Amadeus Code uses machine learning to grow its corpus, we also needed to carefully select the songs that would serve as its teachers.
To achieve this, we set up a research laboratory we call the HSRL (Hit Song Research Lab) and held countless meetings to establish fair selection standards, setting aside each team member’s personal preferences. This led us to define the “good songs” that would serve as the fundamental learning material as “songs that many people listened to”. Based on this definition, we selected the pop hits with proven commercial viability from the past sixty years of music charts, and then went back further to select the classical and jazz hits that are still performed in concerts today. All in all, we came up with approximately 750 pieces of music.
Have a question regarding our #ArtificialIntelligence or service? Tweet us your questions!
— Amadeus Code (@AmadeusCode) March 8, 2018