June 18, 2018, Salt Lake City, UT
Held in conjunction with IEEE CVPR 2018
08:30 | Opening Remarks | EVW Organizers

Oral 1 (08:40-10:00)
08:40 | Vision for Autonomous Driving and Multitasking Humans | Mohan Trivedi (UCSD)
09:20 | A Comparative Study of Real-time Semantic Segmentation for Autonomous Driving | Mennatullah Siam, Mostafa Gamal, Moemen AbdelRazek, Senthil Yogamani, Martin Jagersand, Zhang Hong
09:40 | Efficient Semantic Segmentation using Gradual Grouping | Nikitha Vallurupalli, Sriharsha Annamaneni, Girish Varma, C.V. Jawahar, Manu Mathew, Soyeb Nagori

Oral 2 (10:30-12:10)
10:30 | Edge Computing with the Intel Neural Stick | Cormac Brick (Intel)
11:10 | IFQ-Net: Integrated Fixed-point Quantization Networks for Embedded Vision | Hongxing Gao, Wei Tao, Dongchao Wen, Tse-Wei Chen, Kinya Osa, Masami Kato
11:30 | Intelligent Scene Perception onboard Autonomous Platforms | Raghuveer Rao (US Army Research Lab)

Oral 3 (13:10-14:40)
13:10 | Scalable and Semantic Indoor Mapping | Donghwan Lee (Naver Labs)
14:00 | Interpolation-based Object Detection Using Motion Vectors for Embedded Real-time Tracking Systems | Takayuki Ujiie, Masayuki Hiromoto, Takashi Sato
14:20 | Onboard Stereo Vision for Drone Pursuit or Sense and Avoid | Cevahir Cigla, Rohan Thakker, Larry Matthies

Posters/Demos + Spotlights (14:40-15:40)
14:40 | Light Field Depth Estimation on Off-the-Shelf Mobile GPU | Andre Ivan, Williem Williem, In Kyu Park
14:42 | Pseudo-labels for Supervised Learning on Dynamic Vision Sensor Data, Applied to Object Detection under Ego-motion | Nicholas Chen
14:44 | GPU based Video Object Tracking on PTZ cameras | Cevahir Cigla, Kemal Sahin, Fikret Alim
14:46 | Analysis of Efficient CNN Design Techniques for Semantic Segmentation | Alexandre Briot, Prashanth Viswanath, Senthil Yogamani
14:48 | Design of a Reconfigurable 3D Pixel-Parallel Neuromorphic Architecture for Smart Image Sensor | Pankaj Bhowmik, Md Jubaer Hossain Pantho, Marjan Asadinia, Christophe Bobda
14:50 | PointAR: Augmented Reality Laser Pointer for Tele-Assistance (Demo) | Harald Haraldsson, Doron Tal, Karla Polo-Garcia, Serge Belongie
14:55 | Posters and Demos

Oral 4 (16:10-17:20)
16:10 | Brainware for Embedded Vision | Warren Gross (McGill University)
17:00 | KCNN: Extremely-Efficient Hardware Keypoint Detection with a Compact Convolutional Neural Network | Paolo Di Febbo, Carlo Dal Mutto, Kinh Tieu, Stefano Mattoccia
17:20 | Closing Remarks and Awards | EVW Organizers
Invited Talk #1:
Speaker: Prof. Mohan Trivedi, UCSD
Bio: Mohan Manubhai Trivedi is a Distinguished Professor of Electrical and Computer Engineering and the founding director of the UCSD LISA: Laboratory for Intelligent and Safe Automobiles, winner of the IEEE ITSS Lead Institution Award (2015). Currently, Trivedi and his team are pursuing research in distributed video arrays, active vision, human body modeling and activity analysis, intelligent driver assistance, and active safety systems for automobiles. Trivedi's team has played key roles in several major research initiatives, including an autonomous robotic team for Shinkansen tracks, a human-centered vehicle collision avoidance system, a vision-based passenger protection system for smart airbag deployment, and lane/turn/merge intent prediction modules for advanced driver assistance. His professional awards include the IEEE ITS Society's highest honor, the Outstanding Research Award (2013), the Pioneer Award (Technical Activities) and Meritorious Service Award from the IEEE Computer Society, and the Distinguished Alumni Award from Utah State University. Three of his students have received Best Dissertation Awards from professional societies, and a number of his papers have won Best Paper or Honorable Mention awards at international conferences. Trivedi is a Fellow of the IEEE, IAPR, and SPIE. He regularly serves as a consultant to industry and government agencies in the U.S., Europe, and Asia.
Title: Vision for Autonomous Driving and Multitasking Humans
Invited Talk #2:
Speaker: Mr. Cormac Brick, Intel Corporation, Movidius Group
Bio: Cormac Brick is Director of Machine Intelligence in the Movidius group at Intel Corporation, where he builds new foundational algorithms for computer vision and machine intelligence to enhance the Myriad VPU product family. Cormac contributes to internal architecture and helps customers build products using the latest techniques in deep learning and embedded vision through a set of advanced applications and libraries. He has worked with Movidius since its early days and has contributed heavily to the design of its ISA, hardware systems design, computer vision software development, and tools. Cormac holds a B.Eng. in Electronic Engineering from University College Cork.
Title: Edge Computing with the Intel Neural Stick
Invited Talk #3:
Speaker: Dr. Raghuveer Rao, U.S. Army Research Laboratory (ARL)
Bio: Dr. Raghuveer Rao holds an M.E. degree in Electrical Communication Engineering from the Indian Institute of Science and a Ph.D. in Electrical Engineering from the University of Connecticut. He was a Member of Technical Staff at AMD Inc. from 1985 to 1987. He joined the Rochester Institute of Technology in December 1987, where, at the time of his departure in 2008, he was a professor of Electrical Engineering and Imaging Science. He is currently Chief of the Image Processing Branch at the U.S. Army Research Laboratory (ARL) in Adelphi, MD, where he manages researchers and programs in computer vision and scene understanding for autonomous systems and intelligence applications. Dr. Rao's research contributions span multiple areas, including signal processing with higher-order statistics, wavelet transforms and scale-invariant systems, statistical self-similarity, and their applications to modeling in communications and image texture synthesis. His recent work focuses on machine learning methods for scene understanding on maneuvering platforms. Dr. Rao is currently an ABET program evaluator for electrical engineering and has served on technical committees of the IEEE Signal Processing Society and SPIE. He has also served as an Associate Editor for the IEEE Transactions on Signal Processing, the IEEE Transactions on Circuits & Systems, and the Journal of Electronic Imaging, and has held visiting appointments at the Indian Institute of Science, the Air Force Research Laboratory, the Naval Surface Warfare Center, and Princeton University. Dr. Rao is an elected Fellow of IEEE and SPIE.
Title: Intelligent Scene Perception onboard Autonomous Platforms
Abstract: Autonomous land and air platforms form an important part of future Army maneuvering units. Automatic scene perception onboard such platforms is required for both navigation and object recognition purposes. A key challenge is that of achieving good performance with size, weight, power and time (SWaPT) constraints. Multimodal sensor suites, such as imaging in multiple bands of the visible and infrared wavelengths, help in achieving scene understanding over varied environments. The talk will provide examples of programs at the Army Research Laboratory that involve artificial intelligence & machine learning, embedded processing systems and multimodal sensing in addressing intelligent scene perception.
Invited Talk #4:
Speaker: Dr. Donghwan Lee, Naver Labs
Bio: Donghwan Lee received his Ph.D. from Seoul National University in 2014. After spending 3 years at Samsung Electronics, he joined NAVER LABS, where he is currently leading the visual localization team. His research interests include signal processing, error correction codes (ECCs), computer vision, and machine learning.
Title: Scalable and Semantic Indoor Mapping
Abstract: 3D indoor maps are useful in many ways, from helping people find their way through complex transit interchanges to supporting more efficient logistics. However, creating and updating those maps are labor-intensive tasks that require many human professionals. Scalable and Semantic Indoor Mapping (SSIM) is a technology that creates, maintains, and publishes 3D maps of indoor environments such as airports, train stations, and shopping malls with minimal human intervention. In NAVER LABS' SSIM scenario, the mapping robot M1 captures high-precision data to create accurate 3D indoor maps. The maps are then kept up to date using information gathered by AROUND, our service robot. Finally, deep learning based image recognition technology recognizes venues and detects changes automatically for semantic mapping. This talk will give an overview of our robotics, localization, and image recognition technologies, which are the key elements of SSIM.
Invited Talk #5:
Speaker: Prof. Warren J. Gross, McGill University
Bio: Warren J. Gross received the B.A.Sc. degree in electrical engineering from the University of Waterloo, Waterloo, Ontario, Canada, in 1996, and the M.A.Sc. and Ph.D. degrees from the University of Toronto, Toronto, Ontario, Canada, in 1999 and 2003, respectively. Currently, he is Professor and Chair of the Department of Electrical and Computer Engineering, McGill University, Montreal, Quebec, Canada. His research interests are in the design and implementation of signal processing systems and custom computer architectures.
Dr. Gross served as Chair of the IEEE Signal Processing Society Technical Committee on Design and Implementation of Signal Processing Systems. He has served as General Co-Chair of IEEE GlobalSIP 2017 and IEEE SiPS 2017, and as Technical Program Co-Chair of SiPS 2012. He has also served as organizer for the Workshop on Polar Coding in Wireless Communications at WCNC 2017, the Symposium on Data Flow Algorithms and Architecture for Signal Processing Systems (GlobalSIP 2014) and the IEEE ICC 2012 Workshop on Emerging Data Storage Technologies. Dr. Gross served as Associate Editor for the IEEE Transactions on Signal Processing and currently is a Senior Area Editor. Dr. Gross is a Senior Member of the IEEE and a licensed Professional Engineer in the Province of Ontario.
Title: Brainware for Embedded Vision
Abstract: Deep neural networks have gone through a recent rise in popularity, achieving state-of-the-art results in various fields, including image classification, speech recognition, and automated control. This talk will describe recent progress in the design and hardware implementation of convolutional neural networks with embedded vision applications in mind. The first part of the talk will discuss architectures for hardware convolution engines for scenarios with limited hardware resources and tight power and latency constraints. The second part of the talk will discuss the need for automated tools to solve the difficult problem of designing neural networks under complexity constraints and will describe a design-space-exploration tool that automatically discovers good neural network models with efficient hardware implementations.
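The first part of the talk concerns convolution engines for platforms with limited resources and tight power budgets. A common ingredient in such designs (and in fixed-point quantization work such as IFQ-Net, presented earlier in the program) is integer-only convolution: 8-bit inputs and weights, a wide accumulator, and a shift-based requantization back to 8 bits. The sketch below is purely illustrative and not taken from the talk; the function names and the single-channel 3x3 restriction are assumptions made for brevity.

```c
#include <stdint.h>

/* Illustrative integer-only 3x3 convolution: int8 input and weights,
 * int32 accumulator, right-shift requantization with saturation back
 * to int8. Border pixels are left untouched. */
static int8_t sat8(int32_t v) {
    if (v > 127)  return 127;
    if (v < -128) return -128;
    return (int8_t)v;
}

void conv3x3_q8(const int8_t *in, int w, int h,
                const int8_t k[9], int shift, int8_t *out) {
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            int32_t acc = 0;  /* wide accumulator avoids overflow */
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    acc += (int32_t)in[(y + dy) * w + (x + dx)]
                         * (int32_t)k[(dy + 1) * 3 + (dx + 1)];
            /* requantize: scale down by 2^shift, then saturate */
            out[y * w + x] = sat8(acc >> shift);
        }
    }
}
```

With a kernel whose only nonzero weight is 16 at the center and shift = 4, interior pixels pass through unchanged. A real design would fold the quantization scales of input, weights, and output into the shift (or into a fixed-point multiplier), and would typically tile the loops to match on-chip memory.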
Main theme: Computer Vision on Embedded Processors
Embedded vision brings vision capabilities to embedded systems, and it is what makes computer vision mainstream today. Thanks to the emergence of powerful yet low-cost and energy-efficient processors, it has become possible to incorporate vision capabilities into a wide range of embedded systems and applications, including video search and annotation, surveillance, gesture recognition in video games, driver assistance systems in automotive safety, and autonomous robots such as drones. The IEEE Embedded Vision Workshop (EVW) brings together researchers working on vision problems that share embedded system characteristics.
Paper submission deadline: March 24, 2018 (extended from March 17, 2018)
Notification to the authors: April 17, 2018 (changed from May 12, 2018)
Camera-ready paper: April 20, 2018
Paper submission: https://cmt3.research.microsoft.com/EVW2018
Author guidelines: http://cvpr2018.thecvf.com/submission/main_conference/author_guidelines
Call for Papers (PDF file): Download
Best paper award sponsored by Naver Labs Europe: $1000
EVW will include live demonstrations of embedded vision prototypes and solutions. This year we strongly encourage authors to embed a demo into their talk; we will provide five extra minutes so there is enough time for the demo.
Furthermore, we plan a demo session during which authors, engineers, and researchers can showcase their prototypes with real-time implementations of vision systems on embedded computing platforms. We also invite demo abstracts, independent of the paper submissions, for demonstrations to be presented during the workshop.
Research papers are solicited in, but not limited to, the following topics:
- Analysis of vision problems specific to embedded systems
- Analysis of embedded systems issues specific to computer vision
- Artificial intelligence hardware and software architectures for visual exploitation (e.g., deep learning and deep convolutional neural networks)
- Embedded vision for robotics (industrial, mobile and consumer)
- Embedded vision for unmanned ground and air vehicles, including consumer drones
- Advanced driver assistance systems and autonomous driving
- Large-scale computer vision problems including object recognition, scene analysis, industrial and medical applications
- New trends in programmable processors for vision and computational imaging
- Applications of and algorithms for embedded vision on:
  - Massively parallel platforms such as GPUs (PC, embedded and mobile)
  - Programmable platforms such as DSPs and multicore SoCs
  - Reconfigurable platforms such as FPGAs and SoCs with reconfigurable logic
  - Mobile devices, including smartphones and tablets
- Vision-based client devices for the Internet of Things (IoT)
- Embedded vision for 3D movies and TV
- Biologically-inspired vision and embedded systems
- Computer vision applications distributed between embedded devices and servers
- Social networking embedded computer vision applications
- Educational methods for embedded vision
- User interface designs and CAD tools for embedded vision applications
- Hardware enhancements (lens, imager, processor) that impact vision applications
- Software enhancements (OS, middleware, vision libraries, development tools) that impact embedded vision applications
- Methods for standardization and measurement of vision functionalities
- Performance metrics for evaluating embedded systems
- Hybrid embedded systems combining vision and other sensor modalities
Special Journal Issue on Embedded Computer Vision
All of the previous Workshops on Embedded (Computer) Vision (ECVW and EVW) were held in conjunction with CVPR, with the exception of the fifth, which was held in conjunction with ICCV 2009. These events were very successful, and selected workshop papers have been published in several special issues of major journals (EURASIP Journal on Embedded Systems, CVIU and Springer monographs titled Embedded Computer Vision). This year, we also plan to organize a special issue for selected papers.
Organizers:
Ravi Kumar Satzoda, Nauto Inc.
Zoran Nikolic, NVIDIA
Tse-Wei Chen, Canon Inc.
Rajesh Narasimha, Edgetensor
Martin Humenberger, Naver Labs Europe
Shanti Swarup Medasani, Uurmi Systems
Stefano Mattoccia, University of Bologna, Italy
Jagadeesh Sankaran, NVIDIA
Ahmed Nabil Belbachir, Teknova AS
Sek Chai, SRI International
Margrit Gelautz, Vienna University of Technology
Branislav Kisacanin, NVIDIA
Yu Wang, Ambarella
Program Committee:
Abelardo Lopez-Lagunas, ITESM-Toluca
Antonio Haro, HERE Technologies
Bernhard Rinner, University of Klagenfurt, Austria
Branislav Kisacanin, NVIDIA
Burak Ozer, Pekosoft
Daniel Steininger, AIT Austrian Institute of Technology
Dongchao Wen, Canon Information Technology (Beijing) Co., LTD
Florian Seitner, emotion3D GmbH
Hassan Rabah, University of Lorraine
Hongxing Gao, Canon Information Technology (Beijing) Co., LTD
Linda Wills, Georgia Institute of Technology
Margrit Gelautz, Vienna University of Technology
Marilyn Wolf, Georgia Institute of Technology
Martin Humenberger, Naver Labs Europe
Matteo Poggi, University of Bologna
Matthias Schörghuber, AIT Austrian Institute of Technology
Nicolas Thorstensen, IVISO
Orazio Gallo, NVIDIA Research
Philippe Weinzaepfel, Naver Labs Europe
Rajesh Narasimha, Deep Dive Vision
Ravi Satzoda, Nauto Inc.
Roland Brockers, Jet Propulsion Laboratory
Sek Chai, SRI
Tse-Wei Chen, Canon Inc.
Wei Tao, Canon Information Technology (Beijing) Co., LTD
Zoran Zivkovic, Intel Corporation