Mix and Localize: Localizing Sound Sources in Mixtures


Xixi Hu*
Ziyang Chen*
Andrew Owens

U. Michigan         UT Austin

CVPR 2022

[Paper]
[Github]
[Video]
[Slides]
[Poster]




Abstract

We present a method for simultaneously localizing multiple sound sources within a visual scene. This task requires a model both to group a sound mixture into individual sources and to associate them with a visual signal. Our method solves both tasks jointly, using a formulation inspired by the contrastive random walk of Jabri et al. We create a graph in which images and separated sounds correspond to nodes, and train a random walker to transition between nodes from different modalities with high return probability. The transition probabilities for this walk are determined by an audio-visual similarity metric that is learned by our model. We show through experiments with musical instruments and human speech that our model can successfully localize multiple sounds, outperforming other self-supervised methods.
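To make the graph formulation concrete, below is a minimal sketch (not the authors' released code) of the cycle-consistency objective described above: separated sounds and images form the two sides of a bipartite graph, an audio-visual similarity defines the transition probabilities, and training pushes a two-step walk (audio to image and back) to return to its starting node. The function names, cosine similarity, temperature, and embedding shapes here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def return_probability(audio_emb, image_emb, temperature=0.07):
    """audio_emb: (N, D) embeddings of N separated sounds;
    image_emb: (N, D) embeddings of the N corresponding images/regions."""
    # Audio-visual similarity matrix (a learned metric in the paper; cosine here).
    sim = F.normalize(audio_emb, dim=1) @ F.normalize(image_emb, dim=1).T  # (N, N)
    a_to_v = F.softmax(sim / temperature, dim=1)    # transition probs: audio -> image
    v_to_a = F.softmax(sim.T / temperature, dim=1)  # transition probs: image -> audio
    return a_to_v @ v_to_a                          # (N, N) two-step return probabilities

def walk_loss(audio_emb, image_emb):
    # Encourage the walker to come back to the node it started from:
    # the return-probability matrix should look like the identity.
    ret = return_probability(audio_emb, image_emb)
    return -torch.log(ret.diagonal() + 1e-8).mean()
```

In this sketch, maximizing the diagonal of the return-probability matrix implicitly forces the model both to separate the mixture (so each sound embeds distinctly) and to match each separated sound to the correct visual node.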



Talk




Paper and Supplementary Material

Xixi Hu*, Ziyang Chen*, Andrew Owens.
Mix and Localize: Localizing Sound Sources in Mixtures.
CVPR 2022.
(Paper)


[Bibtex]


Acknowledgements

This work was funded in part by DARPA Semafor and Cisco Systems. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. The webpage template was originally made by Phillip Isola and Richard Zhang for a Colorization project.