Once a song is mixed and any master tapes of the individual performances are discarded, there’s no un-mixing the music, right? Not so fast. A new artificial intelligence project from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) can extract and isolate individual instruments from a blended recording, or even a recording of a band playing together.
The system, which is “self-supervised,” doesn’t require any human annotations on what the instruments are or what they sound like.
Trained on over 60 hours of videos, the “PixelPlayer” system can view a never-before-seen musical performance, identify specific instruments at pixel level, and extract the sounds that are associated with those instruments.
For example, it can take a video of a tuba and a trumpet playing the “Super Mario Brothers” theme song, and separate out the soundwaves associated with each instrument.
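PixelPlayer itself learns its separation masks from video and audio together, but the core audio trick is masking a mixture's spectrum so each instrument's frequencies are kept or discarded. Here's a toy Python sketch of that idea, with two synthetic tones standing in for the tuba and trumpet; the signals, the 400 Hz cutoff, and the variable names are all made up for illustration and are not from the MIT system.

```python
import numpy as np

# Toy stand-ins for two instruments: a low tone (tuba) and a high tone (trumpet).
sr = 8000                               # sample rate in Hz (illustrative choice)
t = np.arange(sr) / sr                  # one second of samples
tuba = np.sin(2 * np.pi * 110 * t)      # hypothetical "tuba" at 110 Hz
trumpet = np.sin(2 * np.pi * 880 * t)   # hypothetical "trumpet" at 880 Hz
mix = tuba + trumpet                    # the blended recording

# Move to the frequency domain, where a mask-based separator operates.
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / sr)

# An ideal binary mask: low bins go to one source, high bins to the other.
# (PixelPlayer instead *learns* soft masks from video pixels; this cutoff
# is a hand-picked stand-in.)
low_mask = freqs < 400
est_tuba = np.fft.irfft(spectrum * low_mask, n=len(mix))
est_trumpet = np.fft.irfft(spectrum * ~low_mask, n=len(mix))

# Because the tones sit in disjoint frequency bins, the estimates
# recover the originals almost exactly.
err_tuba = np.max(np.abs(est_tuba - tuba))
err_trumpet = np.max(np.abs(est_trumpet - trumpet))
print(err_tuba < 1e-6, err_trumpet < 1e-6)
```

Real instruments overlap in frequency far more than two pure tones do, which is why the actual system needs learned, pixel-conditioned masks rather than a fixed cutoff.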
This technology will be a boon to recording studios, remixers, and anyone who wants to learn, say, the trumpet part from an orchestra performance. I can see it being used in schools to help music students, which would be great if done in private, but humiliating in front of one’s classmates. What’s the worst that could happen? Someone will record a middle school band recital, isolate the worst player, and upload it to social media for laughs. Read about the program and its potential uses at MIT News. -via Gizmodo
(Image credit: MIT CSAIL)