Andrew Owens
Postdoc at U.C. Berkeley.
Google Scholar  ·  GitHub  ·  CV

I'm a postdoc at U.C. Berkeley, working with Alyosha Efros and Jitendra Malik. I did my PhD at MIT CSAIL, where I was advised by William Freeman and Antonio Torralba. Before that, I was an undergrad at Cornell.

I'm co-organizing a new workshop at CVPR 2018: Sight and Sound!

Publications:
MoSculp: Interactive Visualization of Shape and Time
Xiuming Zhang, Tali Dekel, Tianfan Xue, Andrew Owens, Qiurui He, Jiajun Wu, Stefanie Mueller, William T. Freeman
UIST 2018
paper · project page · bibtex
@inproceedings{zhang2018mosculp, title={MoSculp: Interactive Visualization of Shape and Time}, author={Zhang, Xiuming and Dekel, Tali and Xue, Tianfan and Owens, Andrew and Wu, Jiajun and Mueller, Stefanie and Freeman, William T.}, booktitle={User Interface Software and Technology (UIST)}, year={2018} }
We make single-image summaries of complex 3D motions using a representation called a motion sculpture.
Audio-Visual Scene Analysis with Self-Supervised Multisensory Features
Andrew Owens, Alexei A. Efros
ECCV 2018 (Oral)
paper · project page · video · slides (key, ppt) · code · bibtex
@inproceedings{owens2018audio, title={Audio-Visual Scene Analysis with Self-Supervised Multisensory Features}, author={Owens, Andrew and Efros, Alexei A}, booktitle={European Conference on Computer Vision (ECCV)}, year={2018} }
We use self-supervision to learn a multisensory representation that fuses the audio and visual streams of a video. We apply it to (a) sound-source localization, (b) action recognition, and (c) on/off-screen audio source separation.
Fighting Fake News: Image Splice Detection via Learned Self-Consistency
Minyoung Huh*, Andrew Liu*, Andrew Owens, Alexei A. Efros
ECCV 2018
paper · project page · video · code · bibtex
@inproceedings{huh2018fighting, title={Fighting Fake News: Image Splice Detection via Learned Self-Consistency}, author={Huh, Minyoung and Liu, Andrew and Owens, Andrew and Efros, Alexei A}, booktitle={European Conference on Computer Vision (ECCV)}, year={2018} }
We detect photoshopped images using an anomaly-detection model that was trained only on real images.
More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch
Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H. Adelson, Sergey Levine
RA-L 2018
paper · video · project page · bibtex
@article{calandra2018more, title={More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch}, author={Calandra, Roberto and Owens, Andrew and Jayaraman, Dinesh and Lin, Justin and Yuan, Wenzhen and Malik, Jitendra and Adelson, Edward H and Levine, Sergey}, journal={IEEE Robotics and Automation Letters (RA-L)}, year={2018} }
We train a robot to adjust its grasp using vision and touch.
The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?
Roberto Calandra, Andrew Owens, Manu Upadhyaya, Wenzhen Yuan, Justin Lin, Edward H. Adelson, Sergey Levine
CoRL 2017
paper · project page · bibtex
@inproceedings{calandra2017feeling, title={The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?}, author={Calandra, Roberto and Owens, Andrew and Upadhyaya, Manu and Yuan, Wenzhen and Lin, Justin and Adelson, Edward H and Levine, Sergey}, booktitle={Conference on Robot Learning (CoRL)}, year={2017} }
Touch sensing makes it easier to tell whether a grasp will succeed.
Shape-independent Hardness Estimation Using Deep Learning and a GelSight Tactile Sensor
Wenzhen Yuan, Chenzhuo Zhu, Andrew Owens, Mandayam Srinivasan, Edward H. Adelson
ICRA 2017
paper · video · bibtex
@inproceedings{yuan2017shape, title={Shape-independent Hardness Estimation Using Deep Learning and a GelSight Tactile Sensor}, author={Yuan, Wenzhen and Zhu, Chenzhuo and Owens, Andrew and Srinivasan, Mandayam A and Adelson, Edward H}, booktitle={International Conference on Robotics and Automation (ICRA)}, year={2017}, }
Hardness can be accurately estimated from rich touch sensors.
Ambient Sound Provides Supervision for Visual Learning
Andrew Owens, Jiajun Wu, Josh McDermott, William T. Freeman, Antonio Torralba
ECCV 2016 (Oral)
paper · journal paper (2018) · project page · bibtex
@article{owens2018ambient, title={Learning Sight From Sound: Ambient Sound Provides Supervision for Visual Learning}, author={Owens, Andrew and Wu, Jiajun and McDermott, Josh H and Freeman, William T and Torralba, Antonio}, journal={International Journal of Computer Vision (IJCV)}, year={2018} } @inproceedings{owens2016ambient, title={Ambient Sound Provides Supervision for Visual Learning}, author={Owens, Andrew and Wu, Jiajun and McDermott, Josh H and Freeman, William T and Torralba, Antonio}, booktitle={European Conference on Computer Vision (ECCV)}, year={2016} }
When we train a neural network to predict sound from sight, it learns to recognize objects and scenes — without using any labeled training data.
Visually Indicated Sounds
Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward H. Adelson, William T. Freeman
CVPR 2016 (Oral)
paper · project page · video · bibtex
@inproceedings{owens2016visually, title={Visually indicated sounds}, author={Owens, Andrew and Isola, Phillip and McDermott, Josh and Torralba, Antonio and Adelson, Edward H and Freeman, William T}, booktitle={Computer Vision and Pattern Recognition (CVPR)}, year={2016} }
What sound does an object make when you hit it with a drumstick? We use sound as a supervisory signal for learning about materials and actions.
Camouflaging an Object from Many Viewpoints
Andrew Owens, Connelly Barnes, Alex Flint, Hanumant Singh, William T. Freeman
CVPR 2014 (Oral)
paper · project page · video · code · bibtex
@inproceedings{owens2014camouflaging, title={Camouflaging an object from many viewpoints}, author={Owens, Andrew and Barnes, Connelly and Flint, Alex and Singh, Hanumant and Freeman, William}, booktitle={Computer Vision and Pattern Recognition (CVPR)}, year={2014} }
We color a 3D object such that it's hard to see from many viewpoints.
Shape Anchors for Data-Driven Multi-view Reconstruction
Andrew Owens, Jianxiong Xiao, Antonio Torralba, William T. Freeman
ICCV 2013
paper · project page · bibtex
@inproceedings{owens2013shape, title={Shape anchors for data-driven multi-view reconstruction}, author={Owens, Andrew and Xiao, Jianxiong and Torralba, Antonio and Freeman, William}, booktitle={International Conference on Computer Vision (ICCV)}, year={2013} }
Some image patches are highly informative about the 3D shape of an object. We use this idea to make a multi-view reconstruction system that exploits single-image depth cues.
SUN3D: A Database of Big Spaces Reconstructed using SfM and Object Labels
Jianxiong Xiao, Andrew Owens, Antonio Torralba
ICCV 2013
paper · project page · video · bibtex
@inproceedings{xiao2013sun3d, title={SUN3D: A Database of Big Spaces Reconstructed using SfM and Object Labels}, author={Xiao, Jianxiong and Owens, Andrew and Torralba, Antonio}, booktitle={International Conference on Computer Vision (ICCV)}, year={2013} }
A large dataset of 3D-reconstructed indoor scenes.
Discrete-Continuous Optimization for Large-Scale Structure from Motion
David Crandall, Andrew Owens, Noah Snavely, Dan Huttenlocher
CVPR 2011 (Oral)
CVPR Best Paper Award Honorable Mention
paper · journal paper (2013) · project page · video · bibtex
@article{crandall2013pami, author = {David Crandall and Andrew Owens and Noah Snavely and Daniel Huttenlocher}, title = {{SfM with MRFs}: Discrete-Continuous Optimization for Large-Scale Structure from Motion}, journal = {Transactions on Pattern Analysis and Machine Intelligence (PAMI)}, year = {2013}, } @inproceedings{crandall2011cvpr, author = {David Crandall and Andrew Owens and Noah Snavely and Daniel Huttenlocher}, title = {Discrete-Continuous Optimization for Large-scale Structure from Motion}, booktitle = {Computer Vision and Pattern Recognition (CVPR)}, year = {2011} }
Discrete Markov random fields can solve structure-from-motion problems while incorporating extra information such as GPS and vanishing lines.