The 15th Embedded Vision Workshop

Main theme: Embedded Vision for Unmanned Aerial Vehicles (UAVs)

Organized by: Roland Brockers, Stephan Weiss (Program Chairs), Martin Humenberger, Tse-Wei Chen (General Chairs)

June 16, 2019, Long Beach, CA @ 101A


Program

09:00  Opening Remarks (EVW Organizers)

Morning Session
09:10  Invited Talk 1: An Embedded Vision Based Navigation System for NASA's Mars Helicopter
       David S. Bayard (JPL)
09:50  Invited Talk 2: Flying Robots – Design and Navigation
       Roland Siegwart (ETH Zurich)
10:30  Coffee Break + Poster Session
11:00  Invited Talk 3: Embedded vision on computationally constrained platforms acting in 3D space
       Gary McGrath (Qualcomm)

Spotlights
11:40  Paper and Demo Spotlights
12:10  Lunch Break

Afternoon Session
13:30  Invited Talk 4: Direct Visual SLAM for Autonomous Systems
       Daniel Cremers (TU Munich)
14:10  Invited Talk 5: Minimalist Visual Perception and Navigation for Consumer Drones
       Shaojie Shen (Hong Kong University of Science and Technology)
14:50  Coffee Break + Poster Session
15:20  Invited Talk 6: Fully Autonomous Flight In the Wild: Progress and Challenges
       Hayk Martiros (Skydio)

Live Demonstrations
16:00  Live Demonstration #1: Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios
       Guillermo Gallego (University of Zurich)
16:25  Live Demonstration #2: Efficient AI processing for drone and edge devices
       Sek Chai (Latent AI)
16:50  Live Demonstration #3: Real time Omnidirectional sensing on UAV embedded platform
       Sijin Li (DJI)
17:15  Closing Remarks (EVW Organizers)

Invited Talk #1:

An Embedded Vision Based Navigation System for NASA’s Mars Helicopter

Speaker: Dr. David S. Bayard, Jet Propulsion Laboratory


Bio: Dr. David S. Bayard is currently a JPL Technical Fellow, with more than 30 years of experience in the aerospace industry. At JPL, he has been involved in the application of modern estimation and control techniques to a wide range of emerging spacecraft and planetary missions. During 2017-2018, Dr. Bayard led the JPL Mars Helicopter Navigation Team responsible for developing the first autonomous vision-based navigation system designed to fly onboard a planetary drone. His work includes over 160 papers in refereed journals and conferences, 63 NASA Tech Brief Awards, and 4 U.S. patents. Dr. Bayard has received numerous NASA awards and medals during his career, including the 2006 American Automatic Control Council (AACC) Control Engineering Practice Award. He is an Associate Fellow of AIAA.

Abstract: NASA will fly a small helicopter as an addition to the Mars 2020 rover mission. The main goal is to verify the feasibility of using helicopters for future Mars exploration missions through a series of fully autonomous flight demonstrations. In addition to demonstrating the sophisticated dynamics and control functions needed to fly the helicopter in the ultra-thin Mars atmosphere, a key supporting function is the capability to perform autonomous navigation. This presentation provides an overview of the Mars Helicopter vision-based navigation system, including its architecture, sensors, vision processing, and state estimation algorithms. The main challenge is to design a navigation system that is reliable, fully self-contained, and able to operate without human intervention. Special attention is given to design choices made to address the unique constraints that arise when flying autonomously on Mars. The hope is that future Mars helicopters will serve as scouting platforms to map terrain and find interesting science targets ahead of the rover. Additionally, helicopters may eventually carry payloads of their own for in-situ scientific exploration of areas inaccessible to rovers.


Invited Talk #2:

Flying Robots – Design and Navigation

Speaker: Dr. Roland Siegwart, Autonomous Systems Lab, ETH Zurich & Wyss Zurich


Bio: Roland Siegwart (born in 1959) is professor for autonomous mobile robots at ETH Zurich, founding co-director of the Wyss Zurich, and member of the board of directors of multiple high-tech companies. He studied mechanical engineering at ETH Zurich, built up a spin-off company, spent ten years as professor at EPFL Lausanne (1996-2006), was vice president of ETH Zurich (2010-2014), and held visiting positions at Stanford University and NASA Ames. He is and has been the coordinator of multiple European projects and co-founder of half a dozen spin-off companies. He is an IEEE Fellow, recipient of the IEEE RAS Inaba Technical Award and the IEEE RAS Pioneer Award, and officer of the International Federation of Robotics Research (IFRR). He is on the editorial board of multiple journals in robotics and was a general chair of several conferences in robotics, including IROS 2002, AIM 2007, FSR 2007, ISRR 2009, FSR 2017 and CoRL 2018. His interests are in the design and navigation of wheeled, walking and flying robots operating in complex and highly dynamic environments, and in the promotion of innovation and entrepreneurship.

Abstract: While robots are already doing a wonderful job as factory workhorses, they are now gradually appearing in our daily environments and offering their services as autonomous cars, delivery drones, helpers in search and rescue, and much more. For fast search and rescue or inspection of complex environments, flying robots are probably the most efficient and versatile devices. However, the limited flight time and payload, as well as the restricted computing power of drones, render autonomous operation quite challenging. This talk will focus on the design and autonomous navigation of flying robots. Innovative designs of flying systems, from novel concepts of omni-directional multi-copters and blimps to solar airplanes for continuous flight, are presented. Recent results of visual and laser-based navigation (localization, mapping, planning) in GPS-denied environments are showcased and discussed, along with performance figures and potential applications.


Invited Talk #3:

Embedded vision on computationally constrained platforms acting in 3D space

Speaker: Dr. Gary McGrath, Qualcomm


Bio: Dr. McGrath holds a B.S. from the University of California, Irvine, and an M.S. and a Ph.D. from the University of Hawaii, Manoa. After completing postdoctoral research at Stanford, he joined Qualcomm in 1995 and first specialized in simulations, massively parallel systems, and optimization algorithms. He created the RF ray-tracing, PHY, and MAC simulations to design the first CDMA ground data networks as well as the first air-to-ground CDMA data network, which later became GoGo wireless. As lead of Qualcomm Research's augmented reality SW group, he created algorithms for Qualcomm's augmented reality SDK (Vuforia). He created Qualcomm's first computer vision SDK (FastCV), which later initiated the OpenVX standard. He received Qualcomm's "Innovator of the Year" award for his work in computer vision. He now brings his optimization and computer vision experience to Qualcomm's robotics and autonomous car efforts, where he created Qualcomm's Machine Vision SDK.

Abstract: The complexities of computer vision and navigation in 3D space lead to algorithms that can quickly hit the limits of embedded CPUs. Early UAV platforms were limited in functionality and velocity to fit within the computational constraints of the platform. As the needs of autonomy drive greater algorithmic complexity, along with the requirement to support higher and higher velocities, the demands on embedded platforms are growing at a nonlinear rate. Given the constraints of Moore's law and thermal dissipation, parallelism and heterogeneity provide a path forward.


Invited Talk #4:

Direct Visual SLAM for Autonomous Systems

Speaker: Prof. Daniel Cremers, Technical University of Munich


Bio: Daniel Cremers obtained a PhD in Computer Science from the University of Mannheim, Germany. He subsequently spent two years as a postdoctoral researcher at UCLA and one year as a permanent researcher at Siemens Corporate Research in Princeton, NJ. From 2005 until 2009 he was associate professor at the University of Bonn, Germany. Since 2009 he has held the Chair of Computer Vision and Artificial Intelligence at the Technical University of Munich. His publications have received numerous awards, most recently the SGP 2016 Best Paper Award and the CVPR 2016 Best Paper Honorable Mention. Prof. Cremers has received three ERC Grants and the 2016 Gottfried Wilhelm Leibniz Prize. He is a cofounder of several companies, in particular of Artisense, a high-tech startup focused on computer vision and machine learning technologies for autonomous systems.

Abstract: Over the last years, we have witnessed tremendous progress in camera-based localization and SLAM. In my presentation, I will highlight some recent developments at TU Munich and at the company Artisense on visual odometry, sensor fusion, and embedded solutions with applications to autonomous systems. These methods enable us to localize autonomous systems in real time with a high degree of precision and robustness, even in GPS-denied environments. In the process, we create detailed semantic maps of the environment which can serve as a basis for relocalization and 3D scene understanding.


Invited Talk #5:

Minimalist Visual Perception and Navigation for Consumer Drones

Speaker: Prof. Shaojie Shen, Hong Kong University of Science and Technology


Bio: Shaojie Shen received the B.Eng. degree in Electronic Engineering from the Hong Kong University of Science and Technology (HKUST) in 2009, and the M.S. in Robotics and the Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania in 2011 and 2014, respectively. He joined the Department of Electronic and Computer Engineering at HKUST in September 2014 as an Assistant Professor. He is the founding director of the HKUST-DJI Joint Innovation Laboratory (HDJI Lab). His research interests are in the areas of robotics and unmanned aerial vehicles, with a focus on state estimation, sensor fusion, localization and mapping, and autonomous navigation in complex environments. He currently serves as an associate editor for T-RO and AURO. His recent work, VINS-Mono, received an Honorable Mention for the 2018 T-RO Best Paper Award. He and his research team also won the Best Student Paper Award at IROS 2018, were Best Service Robot Paper finalists at ICRA 2017 and Best Paper finalists at ICRA 2011, and received Best Paper Awards at SSRR 2016 and SSRR 2015.

Abstract: Consumer drone developers often face the challenge of achieving safe autonomous navigation under very tight size, weight, power, and cost constraints. In this talk, I will present our recent results towards a minimalist, but complete perception and navigation solution utilizing only a low-cost monocular visual-inertial sensor suite. I will start with an introduction of VINS-Mono, a robust state estimation solution packed with multiple features for easy deployment, such as online spatial and temporal inter-sensor calibration, loop closure, and map reuse. I will then describe efficient monocular dense mapping solutions utilizing efficient map representation, parallel computing, and deep learning techniques for real-time reconstruction of the environment. The perception system is completed by a geometric-based method for estimating full 6-DoF poses of arbitrary rigid dynamic objects using only one camera. With this real-time perception capability, trajectory planning and replanning methods with optimal time allocation are proposed to close the perception-action loop. The performance of the overall system is demonstrated via autonomous navigation in unknown complex environments, as well as aggressive drone racing in a teach-and-repeat setting.


Invited Talk #6:

Fully Autonomous Flight In the Wild: Progress and Challenges

Speaker: Hayk Martiros, Skydio


Bio: Hayk Martiros leads the autonomous technology team at Skydio, whose work focuses on robust approaches to computer vision, deep learning, nonlinear optimization, and motion planning. Hayk did his undergraduate study at Princeton and graduate study at Stanford University.

Abstract: The technology for intelligent and trustworthy navigation of autonomous UAVs is just reaching the inflection point to provide enormous value across video capture, inspection, mapping, monitoring, and delivery. At Skydio we believe the ability to handle difficult unknown scenarios onboard and in real-time based on visual sensing is the key to making that happen, within a tightly integrated system from pixels to propellers. I will discuss lessons learned from shipping a fully autonomous drone, the algorithms that make it work, and the challenges beyond.


Live Demonstration #1:

Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios

Speaker: Guillermo Gallego, University of Zurich

Abstract: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and latency on the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, and new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. In this talk, we will demonstrate a SLAM system that combines events, images, and IMU measurements for robust and accurate pose estimation. The system has been developed at the Robotics and Perception Group, led by Prof. Davide Scaramuzza (University of Zurich and ETH Zurich).
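To make the event data format concrete, here is a minimal sketch in Python (not code from the demonstrated system): each event is an (x, y, timestamp, polarity) tuple, and signed polarities are accumulated into a frame-like image for visualization. The sample events and the 180x240 resolution are assumptions chosen for illustration.

import numpy as np

def accumulate_events(events, height, width):
    """Sum signed event polarities per pixel to form a simple event image."""
    img = np.zeros((height, width), dtype=np.int32)
    for x, y, t, polarity in events:  # polarity: +1 = brighter, -1 = darker
        img[y, x] += polarity
    return img

# Hypothetical event stream: (x, y, timestamp in microseconds, polarity).
events = [(10, 20, 105, +1), (10, 21, 118, -1), (11, 20, 131, +1)]
event_image = accumulate_events(events, height=180, width=240)
print(int(event_image[20, 10]))  # prints 1: one positive event at pixel (x=10, y=20)

Real event-based SLAM pipelines process the asynchronous stream directly rather than collapsing it into frames; the accumulation above is only a way to inspect the data.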


Live Demonstration #2:

Efficient AI processing for drone and edge devices

Speaker: Sek Chai, Latent AI

Abstract: Deep neural network (DNN) parameter sizes continue to grow dramatically, far exceeding the power and memory budgets for onboard drone processing. With the goal of improving the efficiency of DNN inference, algorithms including binary weights, hybrid mixed precision, pruning, and quantization have been proposed. In this presentation, we present our learning approach with heterogeneous bit precisions to find the optimal precision for each layer of the network. Our approach constitutes a new way of regularizing DNN training, with the option to encode parameter values (e.g. as power-of-two values). We present results and analyses to illustrate the effects of quantization. We also present an embedded vision demonstration of aerial drone perception, using a low-bit-precision network with lower power, memory footprint, and compute latency.
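As a rough illustration of one encoding mentioned above (not the presenters' method; the exponent range and clipping are assumptions), the sketch below maps each weight's magnitude to a power of two, an encoding that lets embedded hardware replace multiplications with bit shifts.

import numpy as np

def quantize_power_of_two(weights, min_exp=-8, max_exp=0):
    """Map each weight to a signed power of two (nearest in log2 space), clipped to [2**min_exp, 2**max_exp]."""
    signs = np.sign(weights)
    magnitudes = np.maximum(np.abs(weights), 2.0 ** min_exp)  # avoid log2(0); exact zeros stay zero via the sign
    exponents = np.clip(np.round(np.log2(magnitudes)), min_exp, max_exp)
    return signs * (2.0 ** exponents)

weights = np.array([0.07, -0.2, 0.9, 0.0])
print(quantize_power_of_two(weights))  # [ 0.0625 -0.25  1.  0. ]

In training-time schemes such as the one described in the talk, a quantizer of this kind is typically applied inside the forward pass so the network can adapt to the restricted weight values.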


Live Demonstration #3:

Real time Omnidirectional sensing on UAV embedded platform

Speaker: Sijin Li, DJI

Abstract: When talking about UAVs, safety is one of the most important issues that customers care about. To achieve safe autonomy, the UAV needs to know its location and the surrounding environment. We have built a multi-sensor fusion system for real-time localization and depth perception on the Mavic 2 Pro. The whole system consists of 8 cameras (3 pairs of stereo cameras and 2 monocular cameras) and 2 ToF sensors. The main contributions are:
1) Closed-loop control, which takes visual-inertial odometry as feedback, enables stopping and braking under aggressive maneuvers.
2) Omnidirectional depth sensing, which combines monocular depth and stereo depth, allows the drone to avoid obstacles while tracking in all directions.
3) Our heterogeneous computing platform supports the above tasks at high frequency (150 Hz for VIO and 50 Hz for depth) with low power consumption.
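As a rough sketch of how such a multi-rate pipeline can be scheduled (an illustration under assumed interfaces, not DJI's implementation), the loop below runs VIO and control at every 150 Hz tick and refreshes omnidirectional depth every third tick, i.e. at 50 Hz. All class and function names are hypothetical placeholders.

# Hypothetical stand-ins for the real modules; they only illustrate the data flow.
class StubVIO:
    def update(self):
        return {"position": (0.0, 0.0, 1.5), "velocity": (0.0, 0.0, 0.0)}

class StubDepth:
    def update(self):
        return "omnidirectional_obstacle_map_placeholder"

class StubController:
    def step(self, state, obstacle_map):
        pass  # would compute motor commands from the VIO state and the obstacle map

def run_navigation_loop(vio, depth, controller, num_ticks=450):
    DEPTH_DIVIDER = 3  # 150 Hz control ticks / 3 = 50 Hz depth refresh
    obstacle_map = None
    for tick in range(num_ticks):
        state = vio.update()                        # VIO feedback at every 150 Hz tick
        if tick % DEPTH_DIVIDER == 0:
            obstacle_map = depth.update()           # depth refreshed every third tick
        controller.step(state, obstacle_map)        # closed-loop control uses both

run_navigation_loop(StubVIO(), StubDepth(), StubController())  # 450 ticks ≈ 3 s at 150 Hz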


Paper and Demo Spotlights

11:40  Paper Spotlight #1: Learning to detect and take advantage of reliable anchor points for embedded stereo refinement
       Fabio Tosi, Matteo Poggi, Stefano Mattoccia
11:46  Paper Spotlight #2: DupNet: Towards Very Tiny Quantized CNN with Improved Accuracy for Face Detection
       Hongxing Gao, Wei Tao, Dongchao Wen, Junjie Liu, Tse-Wei Chen, Kinya Osa, Masami Kato
11:52  Paper Spotlight #3: Condensation-Net: Memory-Efficient Network Architecture with Cross-Channel Pooling Layers and Virtual Feature Maps
       Tse-Wei Chen, Motoki Yoshinaga, Hongxing Gao, Wei Tao, Dongchao Wen, Junjie Liu, Kinya Osa, Masami Kato
11:58  Demo Spotlight #1: YUVMultiNet: Real-time YUV multi-task CNN for autonomous driving
       Thomas Boulay, Said El-Hachimi, Manikumar Surisetti, Pullarao Maddu, Saranya Kandan
12:04  Demo Spotlight #2: System Demo of CNN Domain Specific Accelerator with Embedded MRAM in-memory Computing for Mobile and IoT Applications
       Baohua Sun

Main theme: Embedded Vision for Unmanned Aerial Vehicles (UAVs)

UAVs with embedded and real-time vision processing capabilities have proven to be highly versatile autonomous robotic platforms in a variety of environments and for different tasks. The workshop aims at collecting and discussing next-generation approaches for online and onboard visual systems embedded on UAVs, in order to draw a holistic picture of the state of the art in this field and to reveal remaining challenges and issues. The discussion of these topics will identify next steps in this research area for both young researchers and senior scientists. The invited speakers from academia will highlight past and current research, and the invited speakers from industry will help close the loop by showing how research efforts are, and will be, taken up in industrial applications. Topics of interest include vision-based navigation, vision-based (multi-)sensor fusion, online reconstruction, collaborative vision algorithms for navigation and environment perception, and related topics.

Important Dates

Paper submission: March 18, 2019
Demo abstract submission: March 18, 2019
Notification to the authors: April 10, 2019
Camera ready paper: April 19, 2019

Paper submission: https://cmt3.research.microsoft.com/EVW2019
Author guidelines: http://cvpr2019.thecvf.com/submission/main_conference/author_guidelines

  • For paper submission, please refer to CVPR guidelines.
  • For demo abstract submission, authors are encouraged to submit an abstract of up to 4 pages.

Call for Papers (PDF file): Download

Sponsors

Sponsored by NAVER LABS Europe.

Demos

EVW will have live demonstrations of embedded vision prototypes and solutions. This year, we plan dedicated poster and demonstration slots during which authors, engineers, and researchers can showcase prototypes with real-time implementations of vision systems on embedded computing platforms, for UAVs and other applications, to a wide audience with good opportunities for interaction. Demos related to UAVs are particularly encouraged, and we will make special efforts to meet the requirements of the demonstrators. Additionally, we invite demonstration abstracts, independent of paper submissions, for presentation during the workshop.

Topics

Research papers are solicited in, but not limited to, the following topics and with relevance particularly to Unmanned Aerial Vehicles:

  • New trends and challenges in embedded visual processing for UAV applications, including localization, navigation, and image-based perception
  • Analysis of vision problems specific to embedded systems
  • Analysis of embedded systems issues specific to computer vision
  • Artificial intelligence hardware and software architectures for visual exploitation (for example deep learning, convolutional neural networks, and lightweight convolutional networks)
  • Embedded vision for UAVs (industrial, mobile and consumer)
  • Advanced assistance systems and autonomous navigation frameworks
  • Augmented and Virtual Reality
  • Large-scale computer vision problems including object recognition, scene analysis, industrial and medical applications
  • New trends in programmable processors for vision and computational imaging
  • Applications of and algorithms for embedded vision on:
    • Massively parallel platforms such as GPUs (PC, embedded and mobile)
    • Programmable platforms such as DSPs and multicore SoCs
    • Reconfigurable platforms such as FPGAs and SoCs with reconfigurable logic
    • Swarms of mobile aerial platforms
    • Vision-based client devices for the Internet of Things (IoT)
    • Mixed reality on mobile devices
  • Embedded vision for 3D reconstruction and (live) data streams
  • Biologically-inspired vision and embedded systems
  • Computer vision applications distributed between embedded devices and servers
  • Educational methods for embedded vision
  • Hardware enhancements (lens, imager, processor) that impact vision applications
  • Software enhancements (OS, middleware, vision libraries, development tools) that impact embedded vision applications
  • Performance metrics for evaluating embedded systems
  • Hybrid embedded systems combining vision and other sensor modalities

Special Journal Issue on Embedded Computer Vision

All of the previous Workshops on Embedded (Computer) Vision (ECVW and EVW) were held in conjunction with CVPR, with the exception of the fifth, which was held in conjunction with ICCV 2009. These events were very successful, and selected workshop papers have been published in several special issues of major journals (EURASIP Journal on Embedded Systems, CVIU) and in Springer monographs titled Embedded Computer Vision. This year, we also plan to organize a special issue for selected papers.

Organizing Committee

General Chairs:

Martin Humenberger, NAVER LABS Europe (France)
Tse-Wei Chen, Canon Inc. (Japan)
Rajesh Narasimha, Edgetensor (USA)

Program Chairs:

Stephan Weiss, University of Klagenfurt (Austria)
Roland Brockers, Jet Propulsion Laboratory (USA)

Steering Committee:

Ravi Satzoda, Nauto (USA)
Zoran Nikolic, Nvidia (USA)
Swarup Medasani, MathWorks (India)
Stefano Mattoccia, University of Bologna (Italy)
Jagadeesh Sankaran, Nvidia (USA)
Goksel Dedeoglu, Perceptonic (USA)
Ahmed Nabil Belbachir, NORCE (Norway)
Sek Chai, SRI International (USA)
Margrit Gelautz, Vienna University of Technology (Austria)
Branislav Kisacanin, Nvidia (USA)

Program Committee:

Abelardo Lopez-Lagunas, ITESM-Toluca
Andre Chang, FwdNxt
Antonio Haro, HERE Technologies
Burak Ozer, Pekosoft
Daniel Steininger, AIT Austrian Institute of Technology
Dipan Kumar Mandal, Intel Corporation
Dongchao Wen, Canon Information Technology (Beijing) Co., LTD
Fayçal Bensaali, Qatar University
Hassan Rabah, University of Lorraine
Hongxing Gao, Canon Information Technology (Beijing) Co., LTD
Jeff Delaune, Jet Propulsion Laboratory
Jörg Thiem, University of Applied Sciences and Arts of Dortmund
Linda Wills, Georgia Institute of Technology
Margrit Gelautz, Vienna University of Technology
Matteo Poggi, University of Bologna
Matthias Schörghuber, AIT Austrian Institute of Technology
Sek Chai, Latent AI
Shreyansh Daftry, Jet Propulsion Laboratory
Senthil Yogamani, Valeo
Supriya Sathyanarayana, Stanford University
Stefano Mattoccia, University of Bologna
Sven Fleck, Smartsurv
Tse-Wei Chen, Canon Inc.
Zoran Zivkovic, Intel Corporation