E07: Semi-Automatic Synthesis of Videos of Performers Appearing to Play User-Specified Music

Yamamoto, T., Okabe, M., Onai, R.

Abstract:
We propose a method to synthesize a video for a user-specified piece of band music, in which the performers appear to play it well. Given the music and videos of the band members as inputs, our system synthesizes the resulting video by semi-automatically cutting and concatenating the videos temporally so that they synchronize with the music. To compute the synchronization between music and video, we analyze the timings of the musical notes in both, estimating them from the audio signals by applying techniques including the short-time Fourier transform (STFT), image processing, and sound source separation. Our video retrieval technique then uses the estimated timings of the musical notes as the feature vector. To efficiently retrieve a part of the video that matches a part of the music, we develop a novel feature matching technique that is more suitable for our feature vector than the dynamic time warping (DTW) algorithm. The output of our system is an Adobe After Effects project file, in which the user can further refine the result interactively. In our experiments, we recorded videos of performers playing the violin, piano, guitar, bass, and drums; each instrument was recorded independently. We demonstrate that our system helps non-expert performers who cannot play the music well to synthesize videos of its performance. We also show that, given an arbitrary piece of music as input, our system can synthesize its performance video by semi-automatically cutting and pasting existing videos.