ASP Vision

ASP Vision: Optically Computing the First Layer of Convolutional Neural Networks using Angle Sensitive Pixels



Authors: Huaijin (George) Chen*, Suren Jayasuriya*, Jiyue Yang, Judy Stephen, Sriram Sivaramakrishnan, Ashok Veeraraghavan, Alyosha Molnar  (* = joint first authors)

Abstract: 

Deep learning using convolutional neural networks (CNNs) is quickly becoming the state-of-the-art for challenging computer vision applications. However, deep learning's power consumption and bandwidth requirements currently limit its application in embedded and mobile systems with tight energy budgets. In this paper, we explore the energy savings of optically computing the first layer of CNNs. To do so, we utilize bio-inspired Angle Sensitive Pixels (ASPs), custom CMOS diffractive image sensors which act similarly to Gabor filter banks in the V1 layer of the human visual cortex. ASPs replace both image sensing and the first layer of a conventional CNN by directly performing optical edge filtering, saving sensing energy, data bandwidth, and CNN FLOPS. Our experimental results (both on synthetic data and a hardware prototype) for a variety of vision tasks such as digit recognition, object recognition, and face identification demonstrate a reduction in image sensor power consumption and data bandwidth from sensor to CPU, while achieving similar performance compared to traditional deep learning pipelines.
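To make the core idea concrete, here is a minimal NumPy sketch (not the paper's code; see the Github release for that) of a fixed, Gabor-like first layer: a small bank of oriented filters is applied to an image once, and the resulting feature maps would feed the remaining learned CNN layers. All filter parameters here are illustrative, not the actual ASP optical responses.

```python
import numpy as np

def gabor_kernel(size, theta, freq, sigma):
    """Real-valued oriented Gabor kernel; ASPs realize filters of this
    flavor optically (parameters here are illustrative only)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to orientation theta.
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * freq * xr)
    return envelope * carrier

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D filtering (cross-correlation) for the demo."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# The fixed "first layer": a bank of four oriented edge filters.
bank = [gabor_kernel(7, theta, freq=0.25, sigma=2.0)
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]

image = np.random.default_rng(0).random((16, 16))
feature_maps = np.stack([conv2d_valid(image, k) for k in bank])
print(feature_maps.shape)  # (4, 10, 10)
```

In the actual system these filter responses come directly off the sensor, so neither the raw image readout nor this first round of multiply-accumulates is performed digitally.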

Venue: Oral presentation at Computer Vision and Pattern Recognition (CVPR) 2016, Las Vegas, USA

Files:

1) Main Paper

2) Presentation Slides (PPT)

3) Poster (PDF)

4) Our talk at CVPR 2016

Code/Datasets: Please see our Github for an initial release of the code.

Citation (BibTeX):

@InProceedings{Chen_2016_ASPVision,
author = {Chen, Huaijin G. and Jayasuriya, Suren and Yang, Jiyue and Stephen, Judy and Sivaramakrishnan, Sriram and Veeraraghavan, Ashok and Molnar, Alyosha},
title = {ASP Vision: Optically Computing the First Layer of Convolutional Neural Networks Using Angle Sensitive Pixels},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}

Contact: Suren Jayasuriya, sj498 “at” cornell “dot” edu

Acknowledgements:

We would like to thank Albert Wang for help designing the existing ASP prototype, and Dalu Yang and Dr. Arvind Rao for lending us a GPU. We especially want to thank Dr. Eric Fossum and Dr. Vivienne Sze for pointing out corrections regarding our energy comparison. We also received valuable feedback from Cornell's Graphics & Vision group, Prof. David Field, Achuta Kadambi, Robert LiKamWa, Tan Nguyen, Dr. Ankit Patel, Kuan-chuan Peng, and Hang Zhao. This work was supported by NSF CCF-1527501, THECB-NHARP 13308, and NSF CAREER-1150329. S.J. was supported by an NSF Graduate Research Fellowship and a Qualcomm Innovation Fellowship. H.G.C. was partially supported by a Texas Instruments Graduate Fellowship.