
Google at ECCV 2022

Google is proud to be a Platinum Sponsor of the European Conference on Computer Vision (ECCV 2022), a premier forum for the dissemination of research in computer vision and machine learning (ML). This year, ECCV 2022 will be held as a hybrid event, in person in Tel Aviv, Israel with virtual attendance as an option. Google has a strong presence at this year’s conference with over 60 accepted publications and active involvement in a number of workshops and tutorials. We look forward to sharing some of our extensive research and expanding our partnership with the broader ML research community.

Registered for ECCV 2022? We hope you’ll visit our on-site or virtual booths to learn more about the research we’re presenting at ECCV 2022, including several demos and opportunities to connect with our researchers. Learn more about Google’s research being presented at ECCV 2022 below (Google affiliations in bold).

Organizing Committee

Program Chairs include: Moustapha Cissé

Awards Paper Committee: Todd Zickler

Area Chairs include: Ayan Chakrabarti, Tali Dekel, Alireza Fathi, Vittorio Ferrari, David Fleet, Dilip Krishnan, Michael Rubinstein, Cordelia Schmid, Deqing Sun, Federico Tombari, Jasper Uijlings, Ming-Hsuan Yang, Todd Zickler

Accepted Publications

NeuMesh: Learning Disentangled Neural Mesh-Based Implicit Field for Geometry and Texture Editing
Bangbang Yang, Chong Bao, Junyi Zeng, Hujun Bao, Yinda Zhang, Zhaopeng Cui, Guofeng Zhang

Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Networks
Zihang Zou, Boqing Gong, Liqiang Wang

Exploiting Unlabeled Data with Vision and Language Models for Object Detection
Shiyu Zhao, Zhixing Zhang, Samuel Schulter, Long Zhao, Vijay Kumar B G, Anastasis Stathopoulos, Manmohan Chandraker, Dimitris N. Metaxas

Waymo Open Dataset: Panoramic Video Panoptic Segmentation
Jieru Mei, Alex Zhu, Xinchen Yan, Hang Yan, Siyuan Qiao, Yukun Zhu, Liang-Chieh Chen, Henrik Kretzschmar

PRIF: Primary Ray-Based Implicit Function
Brandon Yushan Feng, Yinda Zhang, Danhang Tang, Ruofei Du, Amitabh Varshney

LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling
Boyan Jiang, Xinlin Ren, Mingsong Dou, Xiangyang Xue, Yanwei Fu, Yinda Zhang

k-Means Mask Transformer (see blog post)
Qihang Yu*, Siyuan Qiao, Maxwell D Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen

MaxViT: Multi-Axis Vision Transformer (see blog post)
Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li

E-Graph: Minimal Solution for Rigid Rotation with Extensibility Graphs
Yanyan Li, Federico Tombari

RBP-Pose: Residual Bounding Box Projection for Category-Level Pose Estimation
Ruida Zhang, Yan Di, Zhiqiang Lou, Fabian Manhardt, Federico Tombari, Xiangyang Ji

GOCA: Guided Online Cluster Assignment for Self-Supervised Video Representation Learning
Huseyin Coskun, Alireza Zareian, Joshua L Moore, Federico Tombari, Chen Wang

Scaling Open-Vocabulary Image Segmentation with Image-Level Labels
Golnaz Ghiasi, Xiuye Gu, Yin Cui, Tsung-Yi Lin*

Adaptive Transformers for Robust Few-Shot Cross-Domain Face Anti-spoofing
Hsin-Ping Huang, Deqing Sun, Yaojie Liu, Wen-Sheng Chu, Taihong Xiao, Jinwei Yuan, Hartwig Adam, Ming-Hsuan Yang

DualPrompt: Complementary Prompting for Rehearsal-Free Continual Learning
Zifeng Wang*, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister

BLT: Bidirectional Layout Transformer for Controllable Layout Generation
Xiang Kong, Lu Jiang, Huiwen Chang, Han Zhang, Yuan Hao, Haifeng Gong, Irfan Essa

V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer
Runsheng Xu, Hao Xiang, Zhengzhong Tu, Xin Xia, Ming-Hsuan Yang, Jiaqi Ma

Learning Visibility for Robust Dense Human Body Estimation
Chun-Han Yao, Jimei Yang, Duygu Ceylan, Yi Zhou, Yang Zhou, Ming-Hsuan Yang

Are Vision Transformers Robust to Patch Perturbations?
Jindong Gu, Volker Tresp, Yao Qin

PseudoAugment: Learning to Use Unlabeled Data for Data Augmentation in Point Clouds
Zhaoqi Leng, Shuyang Cheng, Ben Caine, Weiyue Wang, Xiao Zhang, Jonathon Shlens, Mingxing Tan, Dragomir Anguelov

Structure and Motion from Casual Videos
Zhoutong Zhang, Forrester Cole, Zhengqi Li, Noah Snavely, Michael Rubinstein, William T. Freeman

PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map
Chenfeng Xu, Tian Li, Chen Tang, Lingfeng Sun, Kurt Keutzer, Masayoshi Tomizuka, Alireza Fathi, Wei Zhan

Novel Class Discovery Without Forgetting
Joseph K J, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian

Hierarchically Self-Supervised Transformer for Human Skeleton Representation Learning
Yuxiao Chen, Long Zhao, Jianbo Yuan, Yu Tian, Zhaoyang Xia, Shijie Geng, Ligong Han, Dimitris N. Metaxas

PACTran: PAC-Bayesian Metrics for Estimating the Transferability of Pretrained Models to Classification Tasks
Nan Ding, Xi Chen, Tomer Levinboim, Soravit Changpinyo, Radu Soricut

InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images
Zhengqi Li, Qianqian Wang*, Noah Snavely, Angjoo Kanazawa*

Generalizable Patch-Based Neural Rendering (see blog post)
Mohammed Suhail*, Carlos Esteves, Leonid Sigal, Ameesh Makadia

LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds
Minghua Liu, Yin Zhou, Charles R. Qi, Boqing Gong, Hao Su, Dragomir Anguelov

The Missing Link: Finding Label Relations Across Datasets
Jasper Uijlings, Thomas Mensink, Vittorio Ferrari

Learning Instance-Specific Adaptation for Cross-Domain Segmentation
Yuliang Zou, Zizhao Zhang, Chun-Liang Li, Han Zhang, Tomas Pfister, Jia-Bin Huang

Learning Audio-Video Modalities from Image Captions
Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago Manen, Chen Sun, Cordelia Schmid

TL;DW? Summarizing Instructional Videos with Task Relevance & Cross-Modal Saliency
Medhini Narasimhan*, Arsha Nagrani, Chen Sun, Michael Rubinstein, Trevor Darrell, Anna Rohrbach, Cordelia Schmid

On Label Granularity and Object Localization
Elijah Cole, Kimberly Wilber, Grant Van Horn, Xuan Yang, Marco Fornoni, Pietro Perona, Serge Belongie, Andrew Howard, Oisin Mac Aodha

Disentangling Architecture and Training for Optical Flow
Deqing Sun, Charles Herrmann, Fitsum Reda, Michael Rubinstein, David J. Fleet, William T. Freeman

NewsStories: Illustrating Articles with Visual Summaries
Reuben Tan, Bryan Plummer, Kate Saenko, J.P. Lewis, Avneesh Sud, Thomas Leung

Improving GANs for Long-Tailed Data Through Group Spectral Regularization
Harsh Rangwani, Naman Jaswani, Tejan Karmali, Varun Jampani, Venkatesh Babu Radhakrishnan

Planes vs. Chairs: Category-Guided 3D Shape Learning Without Any 3D Cues
Zixuan Huang, Stefan Stojanov, Anh Thai, Varun Jampani, James Rehg

A Sketch Is Worth a Thousand Words: Image Retrieval with Text and Sketch
Patsorn Sangkloy, Wittawat Jitkrittum, Diyi Yang, James Hays

Learned Monocular Depth Priors in Visual-Inertial Initialization
Yunwen Zhou, Abhishek Kar, Eric L. Turner, Adarsh Kowdle, Chao Guo, Ryan DuToit, Konstantine Tsotsos

How Stable are Transferability Metrics Evaluations?
Andrea Agostinelli, Michal Pandy, Jasper Uijlings, Thomas Mensink, Vittorio Ferrari

Data-Free Neural Architecture Search via Recursive Label Calibration
Zechun Liu*, Zhiqiang Shen, Yun Long, Eric Xing, Kwang-Ting Cheng, Chas H. Leichner

Fast and High Quality Image Denoising via Malleable Convolution
Yifan Jiang*, Bartlomiej Wronski, Ben Mildenhall, Jonathan T. Barron, Zhangyang Wang, Tianfan Xue

Concurrent Subsidiary Supervision for Unsupervised Source-Free Domain Adaptation
Jogendra Nath Kundu, Suvaansh Bhambri, Akshay R Kulkarni, Hiran Sarkar, Varun Jampani, Venkatesh Babu Radhakrishnan

Learning Online Multi-Sensor Depth Fusion
Erik Sandström, Martin R. Oswald, Suryansh Kumar, Silvan Weder, Fisher Yu, Cristian Sminchisescu, Luc Van Gool

Hierarchical Semantic Regularization of Latent Spaces in StyleGANs
Tejan Karmali, Rishubh Parihar, Susmit Agrawal, Harsh Rangwani, Varun Jampani, Maneesh K Singh, Venkatesh Babu Radhakrishnan

RayTran: 3D Pose Estimation and Shape Reconstruction of Multiple Objects from Videos with Ray-Traced Transformers
Michał J Tyszkiewicz, Kevis-Kokitsi Maninis, Stefan Popov, Vittorio Ferrari

Neural Video Compression Using GANs for Detail Synthesis and Propagation
Fabian Mentzer, Eirikur Agustsson, Johannes Ballé, David Minnen, Nick Johnston, George Toderici

Exploring Fine-Grained Audiovisual Categorization with the SSW60 Dataset
Grant Van Horn, Rui Qian, Kimberly Wilber, Hartwig Adam, Oisin Mac Aodha, Serge Belongie

Implicit Neural Representations for Image Compression
Yannick Strümpler, Janis Postels, Ren Yang, Luc Van Gool, Federico Tombari

3D Compositional Zero-Shot Learning with DeCompositional Consensus
Muhammad Ferjad Naeem, Evin Pınar Örnek, Yongqin Xian, Luc Van Gool, Federico Tombari

FindIt: Generalized Localization with Natural Language Queries (see blog post)
Weicheng Kuo, Fred Bertsch, Wei Li, AJ Piergiovanni, Mohammad Saffar, Anelia Angelova

A Simple Single-Scale Vision Transformer for Object Detection and Instance Segmentation
Wuyang Chen*, Xianzhi Du, Fan Yang, Lucas Beyer, Xiaohua Zhai, Tsung-Yi Lin, Huizhong Chen, Jing Li, Xiaodan Song, Zhangyang Wang, Denny Zhou

Improved Masked Image Generation with Token-Critic
Jose Lezama, Huiwen Chang, Lu Jiang, Irfan Essa

Learning Discriminative Shrinkage Deep Networks for Image Deconvolution
Pin-Hung Kuo, Jinshan Pan, Shao-Yi Chien, Ming-Hsuan Yang

AudioScopeV2: Audio-Visual Attention Architectures for Calibrated Open-Domain On-Screen Sound Separation
Efthymios Tzinis*, Scott Wisdom, Tal Remez, John Hershey

Simple Open-Vocabulary Object Detection with Vision Transformers
Matthias Minderer, Alexey Gritsenko, Austin C Stone, Maxim Neumann, Dirk Weißenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, Neil Houlsby

COMPOSER: Compositional Reasoning of Group Activity in Videos with Keypoint-Only Modality
Honglu Zhou, Asim Kadav, Aviv Shamsian, Shijie Geng, Farley Lai, Long Zhao, Ting Liu, Mubbasir Kapadia, Hans Peter Graf

Video Question Answering with Iterative Video-Text Co-tokenization (see blog post)
AJ Piergiovanni, Kairo Morton*, Weicheng Kuo, Michael S. Ryoo, Anelia Angelova

Class-Agnostic Object Detection with Multi-modal Transformer
Muhammad Maaz, Hanoona Abdul Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer, Ming-Hsuan Yang

FILM: Frame Interpolation for Large Motion (see blog post)
Fitsum Reda, Janne Kontkanen, Eric Tabellion, Deqing Sun, Caroline Pantofaru, Brian Curless

Compositional Human-Scene Interaction Synthesis with Semantic Control
Kaifeng Zhao, Shaofei Wang, Yan Zhang, Thabo Beeler, Siyu Tang

Workshops

LatinX in AI
Mentors include: José Lezama
Keynote Speakers include: Andre Araujo

AI for Creative Video Editing and Understanding
Keynote Speakers include: Tali Dekel, Negar Rostamzadeh

Learning With Limited and Imperfect Data (L2ID)
Invited Speakers include: Xiuye Gu
Organizing Committee includes: Sadeep Jayasumana

International Challenge on Compositional and Multimodal Perception (CAMP)
Program Committee includes: Edward Vendrow

Self-Supervised Learning: What is Next?
Invited Speakers include: Mathilde Caron, Arsha Nagrani
Organizers include: Andrew Zisserman

3rd Workshop on Adversarial Robustness In the Real World
Invited Speakers include: Ekin Dogus Cubuk
Organizers include: Xinyun Chen, Alexander Robey, Nataniel Ruiz, Yutong Bai

AV4D: Visual Learning of Sounds in Spaces
Invited Speakers include: John Hershey

Challenge on Mobile Intelligent Photography and Imaging (MIPI)
Invited Speakers include: Peyman Milanfar

Robust Vision Challenge 2022
Organizing Committee includes: Alina Kuznetsova

Computer Vision in the Wild
Challenge Organizers include: Yi-Ting Chen, Ye Xia
Invited Speakers include: Yin Cui, Yongqin Xian, Neil Houlsby

Self-Supervised Learning for Next-Generation Industry-Level Autonomous Driving (SSLAD)
Organizers include: Fisher Yu

Responsible Computer Vision
Organizing Committee includes: Been Kim
Invited Speakers include: Emily Denton

Cross-Modal Human-Robot Interaction
Invited Speakers include: Peter Anderson

ISIC Skin Image Analysis
Organizing Committee includes: Yuan Liu
Steering Committee includes: Yuan Liu, Dale Webster
Invited Speakers include: Yuan Liu

Observing and Understanding Hands in Action
Sponsored by Google

Autonomous Vehicle Vision (AVVision)
Speakers include: Fisher Yu

Visual Perception for Navigation in Human Environments: The JackRabbot Human Body Pose Dataset and Benchmark
Organizers include: Edward Vendrow

Language for 3D Scenes
Invited Speakers include: Jason Baldridge
Organizers include: Leonidas Guibas

Designing and Evaluating Computer Perception Systems (CoPe)
Organizers include: Andrew Zisserman

Learning To Generate 3D Shapes and Scenes
Panelists include: Pete Florence

Advances in Image Manipulation
Program Committee includes: George Toderici, Ming-Hsuan Yang

TiE: Text in Everything
Challenge Organizers include: Shangbang Long, Siyang Qin
Invited Speakers include: Tali Dekel, Aishwarya Agrawal

Instance-Level Recognition
Organizing Committee: Andre Araujo, Bingyi Cao, Tobias Weyand
Invited Speakers include: Mathilde Caron

What Is Motion For?
Organizing Committee: Deqing Sun, Fitsum Reda, Charles Herrmann
Invited Speakers include: Tali Dekel

Neural Geometry and Rendering: Advances and the Common Objects in 3D Challenge
Invited Speakers include: Ben Mildenhall

Visual Object-Oriented Learning Meets Interaction: Discovery, Representations, and Applications
Invited Speakers include: Klaus Greff, Thomas Kipf
Organizing Committee includes: Leonidas Guibas

Vision with Biased or Scarce Data (VBSD)
Program Committee includes: Yizhou Wang

Multiple Object Tracking and Segmentation in Complex Environments
Invited Speakers include: Xingyi Zhou, Fisher Yu

3rd Visual Inductive Priors for Data-Efficient Deep Learning Workshop
Organizing Committee includes: Ekin Dogus Cubuk

DeeperAction: Detailed Video Action Understanding and Anomaly Recognition
Advisors include: Rahul Sukthankar

Sign Language Understanding Workshop and Sign Language Recognition, Translation & Production Challenge
Organizing Committee includes: Andrew Zisserman
Speakers include: Andrew Zisserman

Ego4D: First-Person Multi-Modal Video Understanding
Invited Speakers include: Michal Irani

AI-Enabled Medical Image Analysis: Digital Pathology & Radiology/COVID19
Program Chairs include: Po-Hsuan Cameron Chen
Workshop Partner: Google Health

Visual Object Tracking Challenge (VOT 2022)
Technical Committee includes: Christoph Mayer

Assistive Computer Vision and Robotics
Technical Committee includes: Maja Mataric

Human Body, Hands, and Activities from Egocentric and Multi-View Cameras
Organizers include: Francis Engelmann

Frontiers of Monocular 3D Perception: Implicit x Explicit
Panelists include: Pete Florence

Tutorials

Self-Supervised Representation Learning in Computer Vision
Invited Speakers include: Ting Chen

Neural Volumetric Rendering for Computer Vision
Organizers include: Ben Mildenhall, Pratul Srinivasan, Jon Barron
Presenters include: Ben Mildenhall, Pratul Srinivasan

New Frontiers in Efficient Neural Architecture Search!
Speakers include: Ruochen Wang



*Work done while at Google.  


Upcoming Webinar: A Deep Dive into MONAI

Join us on October 24 for a deep dive into MONAI, the essential framework for AI workflows in healthcare—including use cases, building blocks, and more.


PI-ARS: Accelerating Evolution-Learned Visual-Locomotion with Predictive Information Representations

Evolution strategy (ES) is a family of optimization techniques inspired by the ideas of natural selection: a population of candidate solutions is evolved over generations to better adapt to an optimization objective. ES has been applied to a variety of challenging decision-making problems, such as legged locomotion, quadcopter control, and even power system control.

Compared to gradient-based reinforcement learning (RL) methods like proximal policy optimization (PPO) and soft actor-critic (SAC), ES has several advantages. First, ES directly explores in the space of controller parameters, while gradient-based methods often explore within a limited action space, which indirectly influences the controller parameters. More direct exploration has been shown to boost learning performance and enable large scale data collection with parallel computation. Second, a major challenge in RL is long-horizon credit assignment, e.g., when a robot accomplishes a task in the end, determining which actions it performed in the past were the most critical and should be assigned a greater reward. Since ES directly considers the total reward, it relieves researchers from needing to explicitly handle credit assignment. In addition, because ES does not rely on gradient information, it can naturally handle highly non-smooth objectives or controller architectures where gradient computation is non-trivial, such as meta–reinforcement learning. However, a major weakness of ES-based algorithms is their difficulty in scaling to problems that require high-dimensional sensory inputs to encode the environment dynamics, such as training robots with complex vision inputs.

In this work, we propose “PI-ARS: Accelerating Evolution-Learned Visual-Locomotion with Predictive Information Representations”, a learning algorithm that combines representation learning and ES to effectively solve high-dimensional problems in a scalable way. The core idea is to leverage predictive information, a representation learning objective, to obtain a compact representation of the high-dimensional environment dynamics, and then apply Augmented Random Search (ARS), a popular ES algorithm, to transform the learned compact representation into robot actions. We tested PI-ARS on the challenging problem of visual-locomotion for legged robots. PI-ARS enables fast training of performant vision-based locomotion controllers that can traverse a variety of difficult environments. Furthermore, the controllers trained in simulated environments successfully transfer to a real quadruped robot.

PI-ARS trains reliable visual-locomotion policies that are transferable to the real world.

Predictive Information
A good representation for policy learning should be both compressive, so that ES can focus on solving a much lower dimensional problem than learning from raw observations would entail, and task-critical, so the learned controller has all the necessary information needed to learn the optimal behavior. For robotic control problems with high-dimensional input space, it is critical for the policy to understand the environment, including the dynamic information of both the robot itself and its surrounding objects.

As such, we propose an observation encoder that preserves, from the raw input observations, the information that allows the policy to predict the future states of the environment, hence the name predictive information (PI). More specifically, we optimize the encoder such that the encoded version of what the robot has seen and planned in the past can accurately predict what the robot might see and the reward it might receive in the future. One mathematical tool to describe such a property is mutual information, which measures the amount of information we obtain about one random variable X by observing another random variable Y. In our case, X and Y would be what the robot saw and planned in the past, and what the robot will see and the reward it will receive in the future. Directly optimizing the mutual information objective is challenging because we usually only have access to samples of the random variables, not their underlying distributions. In this work we follow a previous approach that uses InfoNCE, a contrastive variational bound on mutual information, to optimize the objective.
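
Concretely, the InfoNCE objective amounts to a cross-entropy loss over similarity scores in which the matching (past, future) pair from the same trajectory is the positive and all other pairs in the batch act as negatives. Below is a minimal NumPy sketch of this idea; the encoder outputs, batch construction, and temperature are illustrative assumptions, not the paper's implementation.

import numpy as np

def infonce_loss(past_embeddings, future_embeddings, temperature=0.1):
    """Contrastive (InfoNCE) lower bound on the mutual information objective.

    past_embeddings:   [B, D] encodings of past observations and motion plans.
    future_embeddings: [B, D] encodings of future observations and rewards.
    Rows with the same index come from the same trajectory (positives);
    every other row in the batch serves as a negative.
    """
    past = past_embeddings / np.linalg.norm(past_embeddings, axis=1, keepdims=True)
    future = future_embeddings / np.linalg.norm(future_embeddings, axis=1, keepdims=True)
    logits = past @ future.T / temperature                     # [B, B] similarity scores

    # Cross-entropy with the diagonal (matching pairs) as the positive class.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))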

Left: We use representation learning to encode PI of the environment. Right: We train the representation by replaying trajectories from the replay buffer and maximize the predictability between the observation and motion plan in the past and the observation and reward in the future of the trajectory.

Predictive Information with Augmented Random Search
Next, we combine PI with Augmented Random Search (ARS), an algorithm that has shown excellent optimization performance for challenging decision-making tasks. At each iteration of ARS, it samples a population of perturbed controller parameters, evaluates their performance in the testing environment, and then computes a gradient that moves the controller towards the ones that performed better.
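
A minimal sketch of one ARS iteration as described above; the number of directions, noise scale, step size, and reward normalization are illustrative choices rather than the settings used in PI-ARS.

import numpy as np

def ars_step(theta, rollout_reward, num_directions=8, noise_std=0.02, step_size=0.01):
    """One Augmented Random Search update of the controller parameters theta.

    rollout_reward(params) is assumed to run the controller in the environment
    and return the total episode reward.
    """
    deltas, rewards = [], []
    for _ in range(num_directions):
        delta = np.random.randn(*theta.shape)
        rewards.append((rollout_reward(theta + noise_std * delta),
                        rollout_reward(theta - noise_std * delta)))
        deltas.append(delta)

    # Move theta toward the perturbations that performed better.
    reward_std = np.std([r for pair in rewards for r in pair]) + 1e-8
    update = sum((r_plus - r_minus) * d for (r_plus, r_minus), d in zip(rewards, deltas))
    return theta + step_size / (num_directions * reward_std) * update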

We use the learned compact representation from PI to connect PI and ARS, which we call PI-ARS. More specifically, ARS optimizes a controller that takes the learned compact representation as input and predicts appropriate robot commands to achieve the task. Optimizing a controller with a smaller input space allows ARS to find the optimal solution more efficiently. Meanwhile, we use the data collected during ARS optimization to further improve the learned representation, which is then fed into the ARS controller in the next iteration.

An overview of the PI-ARS data flow. Our algorithm interleaves between two steps: 1) optimizing the PI objective that updates the policy, which is the weights for the neural network that extracts the learned representation; and 2) sampling new trajectories and updating the controller parameters using ARS.

Visual-Locomotion for Legged Robots
We evaluate PI-ARS on the problem of visual-locomotion for legged robots. We chose this problem for two reasons: visual-locomotion is a key bottleneck preventing legged robots from being applied in real-world settings, and the high-dimensional vision input to the policy, combined with the complex dynamics of legged robots, makes it an ideal test case for demonstrating the effectiveness of the PI-ARS algorithm. A demonstration of our task setup in simulation can be seen below. Policies are first trained in simulated environments, and then transferred to hardware.

An illustration of the visual-locomotion task setup. The robot is equipped with two cameras to observe the environment (illustrated by the transparent pyramids). The observations and robot state are sent to the policy to generate a high-level motion plan, such as feet landing locations and desired moving speed. The high-level motion plan is then executed by a low-level model predictive control (MPC) controller.

Experiment Results
We first evaluate the PI-ARS algorithm on four challenging simulated tasks:

  • Uneven stepping stones: The robot needs to walk over uneven terrain while avoiding gaps.
  • Quincuncial piles: The robot needs to avoid gaps both in front and sideways.
  • Moving platforms: The robot needs to walk over stepping stones that are randomly moving horizontally or vertically. This task illustrates the flexibility of learning a vision-based policy in comparison to explicitly reconstructing the environment.
  • Indoor navigation: The robot needs to navigate to a random location while avoiding obstacles in an indoor environment.

As shown below, PI-ARS is able to significantly outperform ARS in all four tasks in terms of the total task reward it can obtain (by 30-50%).

Left: Visualization of PI-ARS policy performance in simulation. Right: Total task reward (i.e., episode return) for PI-ARS (green line) and ARS (red line). The PI-ARS algorithm significantly outperforms ARS on four challenging visual-locomotion tasks.

We further deploy the trained policies to a real Laikago robot on two tasks: random stepping stone and indoor navigation. We demonstrate that our trained policies can successfully handle real-world tasks. Notably, the success rate of the random stepping stone task improved from 40% in the prior work to 100%.

PI-ARS trained policy enables a real Laikago robot to navigate around obstacles.

Conclusion
In this work, we present a new learning algorithm, PI-ARS, that combines gradient-based representation learning with gradient-free evolution strategy algorithms to leverage the advantages of both. PI-ARS enjoys the effectiveness, simplicity, and parallelizability of gradient-free algorithms, while relieving a key bottleneck of ES algorithms (handling high-dimensional problems) by optimizing a low-dimensional representation. We apply PI-ARS to a set of challenging visual-locomotion tasks, on which PI-ARS significantly outperforms the state of the art. Furthermore, we validate the policy learned by PI-ARS on a real quadruped robot. It enables the robot to walk over randomly-placed stepping stones and navigate in an indoor space with obstacles. Our method opens the possibility of incorporating modern large neural network models and large-scale data into the field of evolution strategies for robotics control.

Acknowledgements
We would like to thank our paper co-authors: Ofir Nachum, Tingnan Zhang, Sergio Guadarrama, and Jie Tan. We would also like to thank Ian Fischer and John Canny for valuable feedback.


MUSIQ: Assessing Image Aesthetic and Technical Quality with Multi-scale Transformers

Understanding the aesthetic and technical quality of images is important for providing a better user visual experience. Image quality assessment (IQA) uses models to build a bridge between an image and a user’s subjective perception of its quality. In the deep learning era, many IQA approaches, such as NIMA, have achieved success by leveraging the power of convolutional neural networks (CNNs). However, CNN-based IQA models are often constrained by the fixed-size input requirement of batch training, i.e., the input images need to be resized or cropped to a fixed shape. This preprocessing is problematic for IQA because images can have very different aspect ratios and resolutions. Resizing and cropping can impact image composition or introduce distortions, thus changing the quality of the image.

In CNN-based models, images need to be resized or cropped to a fixed shape for batch training. However, such preprocessing can alter the image aspect ratio and composition, thus impacting image quality. Original image used under CC BY 2.0 license.

In “MUSIQ: Multi-scale Image Quality Transformer”, published at ICCV 2021, we propose a patch-based multi-scale image quality transformer (MUSIQ) to bypass the CNN constraints on fixed input size and predict the image quality effectively on native-resolution images. The MUSIQ model supports the processing of full-size image inputs with varying aspect ratios and resolutions and allows multi-scale feature extraction to capture image quality at different granularities. To support positional encoding in the multi-scale representation, we propose a novel hash-based 2D spatial embedding combined with an embedding that captures the image scaling. We apply MUSIQ on four large-scale IQA datasets, demonstrating consistent state-of-the-art results across three technical quality datasets (PaQ-2-PiQ, KonIQ-10k, and SPAQ) and comparable performance to that of state-of-the-art models on the aesthetic quality dataset AVA.

The patch-based MUSIQ model can process the full-size image and extract multi-scale features, which better aligns with a person’s typical visual response.

In the following figure, we show a sample of images, their MUSIQ scores, and, in brackets, their mean opinion scores (MOS) from multiple human raters. Scores range from 0 to 100, with 100 being the highest perceived quality. As the figure shows, MUSIQ predicts high scores for images with high aesthetic and technical quality, and low scores for images that are not aesthetically pleasing (low aesthetic quality) or that contain visible distortions (low technical quality).

Predicted MUSIQ score [ground-truth MOS] on example images from the KonIQ-10k dataset:

High quality: 76.10 [74.36], 69.29 [70.92] (MUSIQ predicts high scores for high quality images)
Low aesthetic quality: 55.37 [53.18], 32.50 [35.47] (low scores for images with poor composition or lighting)
Low technical quality: 14.93 [14.38], 15.24 [11.86] (low scores for images with visible distortion artifacts, e.g., blurry or noisy)

The Multi-scale Image Quality Transformer
MUSIQ tackles the challenge of learning IQA on full-size images. Unlike CNN models, which are often constrained to a fixed resolution, MUSIQ can handle inputs with arbitrary aspect ratios and resolutions.

To accomplish this, we first make a multi-scale representation of the input image, containing the native resolution image and its resized variants. To preserve the image composition, we maintain its aspect ratio during resizing. After obtaining the pyramid of images, we then partition the images at different scales into fixed-size patches that are fed into the model.

Illustration of the multi-scale image representation in MUSIQ.
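
A minimal sketch of how such a representation could be built, using Pillow and NumPy: the original image is kept, aspect-ratio-preserving resized copies are added, and every level is cut into fixed-size patches. The target scales and the 32x32 patch size are illustrative assumptions, not MUSIQ's exact configuration.

import numpy as np
from PIL import Image

def multiscale_patches(image_path, scales=(384, 224), patch=32):
    """Build the native-resolution image plus aspect-ratio-preserving resized
    variants, then partition every level into fixed-size patches."""
    original = Image.open(image_path).convert("RGB")
    pyramid = [original]
    for longer_side in scales:
        ratio = longer_side / max(original.size)          # keep the aspect ratio
        new_size = (max(1, round(original.width * ratio)),
                    max(1, round(original.height * ratio)))
        pyramid.append(original.resize(new_size))

    patches = []
    for scale_idx, img in enumerate(pyramid):
        arr = np.asarray(img)
        for y in range(0, arr.shape[0] - patch + 1, patch):
            for x in range(0, arr.shape[1] - patch + 1, patch):
                patches.append((scale_idx, y, x, arr[y:y + patch, x:x + patch]))
    return patches   # each entry: (scale index, row, col, patch pixels)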

Since patches come from images of varying resolutions, we need to effectively encode the multi-aspect-ratio, multi-scale input into a sequence of tokens, capturing the pixel, spatial, and scale information. To achieve this, we design three encoding components in MUSIQ: 1) a patch encoding module to encode patches extracted from the multi-scale representation; 2) a novel hash-based spatial embedding module to encode the 2D spatial position for each patch; and 3) a learnable scale embedding to encode different scales. In this way, we can effectively encode the multi-scale input as a sequence of tokens, serving as the input to the Transformer encoder.
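
One plausible reading of the spatial and scale encodings described above, sketched with NumPy lookup tables standing in for learnable embeddings; the grid size, embedding dimension, and the exact hashing of patch positions are assumptions made for illustration.

import numpy as np

G, D = 10, 64                               # hash-grid size and embedding dimension (assumed)
spatial_table = np.random.randn(G, G, D)    # stands in for a learnable spatial table
scale_table = np.random.randn(3, D)         # one embedding per level of the pyramid

def positional_terms(row, col, height, width, scale_idx, patch=32):
    """Look up the spatial and scale embeddings for a patch located at pixel
    (row, col) of an image with the given height/width, from pyramid level scale_idx."""
    # Hash the patch's 2D grid position into a fixed GxG table, independent of resolution.
    gi = min(G - 1, int((row / patch) / np.ceil(height / patch) * G))
    gj = min(G - 1, int((col / patch) / np.ceil(width / patch) * G))
    return spatial_table[gi, gj] + scale_table[scale_idx]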

To predict the final image quality score, we use the standard approach of prepending an additional learnable “classification token” (CLS). The CLS token state at the output of the Transformer encoder serves as the final image representation. We then add a fully connected layer on top to predict the image quality score. The figure below provides an overview of the MUSIQ model.

Overview of MUSIQ. The multi-scale multi-resolution input will be encoded by three components: the scale embedding (SCE), the hash-based 2D spatial embedding (HSE), and the multi-scale patch embedding (MPE).

Since MUSIQ only changes the input encoding, it is compatible with any Transformer variants. To demonstrate the effectiveness of the proposed method, in our experiments we use the classic Transformer with a relatively lightweight setting so that the model size is comparable to ResNet-50.

Benchmark and Evaluation
To evaluate MUSIQ, we run experiments on multiple large-scale IQA datasets. On each dataset, we report the Spearman’s rank correlation coefficient (SRCC) and Pearson linear correlation coefficient (PLCC) between our model prediction and the human evaluators’ mean opinion score. SRCC and PLCC are correlation metrics ranging from -1 to 1; higher values mean better alignment between model prediction and human evaluation. The graph below shows that MUSIQ outperforms other methods on PaQ-2-PiQ, KonIQ-10k, and SPAQ.
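
For reference, both metrics are available in SciPy. A quick example, reusing the sample predictions and MOS values shown earlier in this post:

from scipy.stats import spearmanr, pearsonr

predicted = [76.10, 69.29, 55.37, 32.50, 14.93, 15.24]   # model scores from the earlier figure
mos       = [74.36, 70.92, 53.18, 35.47, 14.38, 11.86]   # corresponding human mean opinion scores

srcc, _ = spearmanr(predicted, mos)   # rank correlation, in [-1, 1]
plcc, _ = pearsonr(predicted, mos)    # linear correlation, in [-1, 1]
print(f"SRCC={srcc:.3f}, PLCC={plcc:.3f}")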

Performance comparison of MUSIQ and previous state-of-the-art (SOTA) methods on four large-scale IQA datasets. On each dataset we compare the Spearman’s rank correlation coefficient (SRCC) and Pearson linear correlation coefficient (PLCC) of model prediction and ground truth.

Notably, the PaQ-2-PiQ test set is entirely composed of large pictures having at least one dimension exceeding 640 pixels. This is very challenging for traditional deep learning approaches, which require resizing. MUSIQ can outperform previous methods by a large margin on the full-size test set, which verifies its robustness and effectiveness.

It is also worth mentioning that previous CNN-based methods often required sampling as many as 20 crops for each image during testing. This kind of multi-crop ensemble is a way to mitigate the fixed shape constraint in CNN models. But since each crop is only a sub-view of the whole image, the ensemble is still an approximation. Moreover, the multi-crop approach adds inference cost for every crop and, because the crops are sampled randomly, can introduce randomness into the result. In contrast, because MUSIQ takes the full-size image as input, it can directly learn the best aggregation of information across the full image and it only needs to run inference once.

To further verify that the MUSIQ model captures different information at different scales, we visualize the attention weights on each image at different scales.

Attention visualization from the output tokens to the multi-scale representation, including the original resolution image and two proportionally resized images. Brighter areas indicate higher attention, which means that those areas are more important for the model output. Images for illustration are taken from the AVA dataset.

We observe that MUSIQ tends to focus on more detailed areas in the full, high-resolution images and on more global areas in the resized ones. For example, for the flower photo above, the model’s attention on the original image focuses on the petal details, and the attention shifts to the buds at lower resolutions. This shows that the model learns to capture image quality at different granularities.

Conclusion
We propose a multi-scale image quality transformer (MUSIQ), which can handle full-size image input with varying resolutions and aspect ratios. By transforming the input image to a multi-scale representation with both global and local views, the model can capture the image quality at different granularities. Although MUSIQ is designed for IQA, it can be applied to other scenarios where task labels are sensitive to image resolution and aspect ratio. The MUSIQ model and checkpoints are available at our GitHub repository.

Acknowledgements
This work is made possible through a collaboration spanning several teams across Google. We’d like to acknowledge contributions from Qifei Wang, Yilin Wang and Peyman Milanfar.


Upcoming Workshop: Fundamentals of Deep Learning

Explore deep learning with hands-on exercises in computer vision and NLP in this online instructor-led workshop.


Building an Automatic Speech Recognition Model for the Kinyarwanda Language

Speech recognition technology is growing in popularity for voice assistants and robotics, for solving real-world problems through assisted healthcare or education, and more. This is helping democratize access to speech AI worldwide. As labeled datasets for unique, emerging languages become more widely available, developers can build AI applications readily, accurately, and affordably to enhance technology developments and experiences for their native regions.

Kinyarwanda is the native language of 9.8 million people in Rwanda, Uganda, DR Congo, and Tanzania with over 20 million total speakers across the globe. 

In April 2022, Mozilla Common Voice (MCV), a crowdsourced project aimed at making voice recognition open and accessible to everyone, made a significant contribution to building the Kinyarwanda dataset, as detailed in the article, Lessons from Building for Kinyarwanda on Common Voice. It is a 57 GB dataset with 2,000+ hours of audio, making it the largest dataset on the MCV platform.

To bring the value of the effort and dataset to developers, an automatic speech recognition (ASR) model was trained on this dataset that achieved state-of-the-art performance on the published checkpoints.

This post provides an overview of the training process using the NVIDIA NeMo ASR toolkit. It briefly covers challenges with the dataset, converting characters to longer units using byte-pair encoding, and the training process for improved model performance. Developers can refer to the step-by-step tutorial on GitHub for the reference code and details.

Obtaining the dataset

MCV has the largest publicly available multi-language dataset. You can download language-specific datasets from the Mozilla Common Voice Hub.

In the Kinyarwanda dataset used for the model, there are 1,404,853 sentences that are pre-split into train/dev/test data. Each entry in the dataset consists of a unique MP3 file and corresponding information such as name of the file, transcription, and meta information in TSV format. 

NeMo ASR requires data that includes a set of utterances in individual audio files plus a manifest that describes the dataset, with information about one utterance per line.

Once the dataset is downloaded, the TSV files in the training split are converted to JSON manifests and the MP3 files are converted to WAV files, which are the recommended formats for the NeMo toolkit. The same steps are then repeated for the test and dev data separately.

The manifest format is provided below:

{"audio_filepath": "/path/to/audio.wav", "text": "the transcription of the utterance", "duration": 23.147}

Data preprocessing

Before training the model, the data requires preprocessing to reduce ambiguity and inconsistencies and make the data easy to interpret. The preprocessing steps for this model are:

  • Replace all punctuation with a space (except for apostrophes)
  • Replace different types of apostrophes [’’‘`ʽ’] with a single standard apostrophe
  • Make all text lowercase for consistency
  • Replace rare characters with diacritics by their base letters ([éèëēê] → e, for example)
  • Delete all remaining out-of-vocabulary characters (everything other than Latin letters, spaces, and apostrophes)

Because 99% of the dataset has an audio duration of 11 seconds or shorter, it is suggested to restrict the maximum audio duration to 11 seconds during preprocessing for faster training.

The final Kinyarwanda transcript consists of sentences with Latin letters, spaces, and apostrophes after preprocessing.
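
A minimal sketch of these preprocessing rules using Python regular expressions; the exact character sets and ordering here are illustrative and may differ slightly from the normalization code in the tutorial.

import re
import unicodedata

APOSTROPHES = "’‘`ʽ"   # variants to be mapped to a single standard apostrophe

def preprocess_kinyarwanda(text: str) -> str:
    text = text.lower()                                     # lowercase for consistency
    text = re.sub(f"[{APOSTROPHES}]", "'", text)            # unify apostrophes
    text = re.sub(r"[^\w\s']", " ", text)                   # punctuation -> space
    # Strip diacritics, e.g. é, è, ë, ē, ê -> e.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    # Keep only Latin letters, spaces, and apostrophes.
    text = re.sub(r"[^a-z' ]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(preprocess_kinyarwanda("Muraho, mwiriwe neza!"))      # -> "muraho mwiriwe neza"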

Subword tokenization 

It is possible to train character-based ASR models but they will regard each letter as a separate token, taking more time to generate the output. Using longer units improves both quality and speed. 

This process involves a tokenization algorithm called byte-pair encoding that splits words into subtokens and marks the beginning of the word with a special symbol so it’s easy to restore the original words.
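
For illustration, a byte-pair-encoding tokenizer of this kind can be trained with the SentencePiece library; the input file name and vocabulary size below are assumptions, and NeMo provides its own tokenizer-building script, described in the tutorial linked below.

import sentencepiece as spm

# Train a BPE tokenizer on the preprocessed transcripts (one sentence per line).
spm.SentencePieceTrainer.train(
    input="train_transcripts.txt",    # assumed file of preprocessed training text
    model_prefix="kinyarwanda_bpe",
    vocab_size=1024,                  # illustrative; larger vocabularies are common
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="kinyarwanda_bpe.model")
print(sp.encode("muraho mwiriwe", out_type=str))
# Output is a list of subword tokens; the '▁' prefix marks the start of a word.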

To make the process easier, the NeMo toolkit supports on-the-fly subword tokenization by passing the tokenizer through the model config, so there is no need to modify transcripts. This does not affect the model performance and potentially helps to adapt to other domains without retraining the tokenizer.

Visit NVIDIA/NeMo on GitHub for a detailed description and tutorial on subword tokenization for NeMo ASR.

Training models

Two approaches lead to a trained model. The first involves training the model from scratch using two model architectures: Conformer-CTC and Conformer-Transducer. The second involves fine-tuning the Kinyarwanda Conformer-Transducer model from different pretrained checkpoints.

To train a Conformer-CTC model, use speech_to_text_ctc_bpe.py with the default config conformer_ctc_bpe.yaml. To train a Conformer-Transducer model, use speech_to_text_rnnt_bpe.py with the default config conformer_transducer_bpe.yaml.

For fine-tuning from a supervised checkpoint, use the pretrained STT_EN_Conformer_Transducer model. For a self-supervised checkpoint, use SSL_EN_Conformer_Large from NVIDIA GPU Cloud (NGC).

You can find more details about the training process in the step-by-step tutorial on GitHub. 

The reference code for Self-supervised Checkpoint Initialization (SSL_EN_Conformer_Large) is provided below.

import nemo.collections.asr as nemo_asr

# Load the self-supervised Conformer checkpoint (SSL_EN_Conformer_Large) from NGC.
ssl_model = nemo_asr.models.ssl_models.SpeechEncDecSelfSupervisedModel.from_pretrained(model_name='ssl_en_conformer_large')

# Define the fine-tune model; cfg and trainer are the experiment config and
# PyTorch Lightning trainer set up earlier in the training script.
asr_model = nemo_asr.models.EncDecCTCModelBPE(cfg=cfg.model, trainer=trainer)

# Initialize the ASR model from the SSL checkpoint; non-matching keys are skipped.
asr_model.load_state_dict(ssl_model.state_dict(), strict=False)

# Free the SSL model once its weights have been copied.
del ssl_model

Figure 1 shows a comparison of training dynamics. The fine-tuning approach is quick and easy for training, and also leads to faster convergence and better quality.

Figure 1. Word Error Rate output comparison for models used

Test results

While building a model, the goal is to minimize the Word Error Rate (WER) when transcribing the speech input. In simple terms, Word Error Rate is the number of word-level errors divided by the total number of words. It is often used to test the performance of a model but should not be the only standard, as out-of-scope variables like noise, echo, and accents can have a substantial impact on speech recognition.

Character Error Rate (CER) is also considered. CER gives the percentage of characters that were incorrectly predicted. Our models achieve the lowest WER and CER among Kinyarwanda ASR models (Table 1).
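
As a quick, generic illustration of the metric (not tied to the Kinyarwanda results), WER can be computed from the word-level edit distance between a reference transcript and a hypothesis:

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein distance over words via dynamic programming.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

# One missing word out of a four-word reference -> WER = 0.25
print(word_error_rate("muraho mwiriwe neza cyane", "muraho mwiriwe neza"))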

Model                        WER %   CER %
Conformer-CTC-Large          18.73   5.75
Conformer-Transducer-Large   16.19   5.7
Table 1. Word Error Rate and Character Error Rate for the Kinyarwanda models

Key takeaways

We have built two high-quality Kinyarwanda checkpoints from scratch with the NeMo toolkit. The Conformer-Transducer checkpoint has better quality, but the Conformer-CTC checkpoint is 4x faster at inference, so both are potentially useful depending on the application.

The high performance of the pretrained model is another step towards new developments in the speech AI community. The state-of-the-art model can be improved further by fine-tuning it with more data that has more dialects, accents, and rare words and is a true representation of how people speak their native languages. NVIDIA NeMo pretrained models are open source and meet the goal of democratization and inclusivity across the globe.

Additional resources

Explore the MCV initiative to access or provide voice data for your language. For more information on models, see the following resources:

Join experts from Google, Meta, NVIDIA, and more at the first annual NVIDIA Speech AI Summit. Register now.


Get in Touch With New Mobile Gaming Controls on GeForce NOW

GeForce NOW expands touch control support to 13 more games this GFN Thursday. That means it’s easier than ever to take PC gaming on the go using mobile devices and tablets. The new “Mobile Touch Controls” row in the GeForce NOW app is the easiest way for members to find which games put the action Read article >

The post Get in Touch With New Mobile Gaming Controls on GeForce NOW appeared first on NVIDIA Blog.


Open-Source Fleet Management Tools for Autonomous Mobile Robots

At ROSCon 2022, NVIDIA announced the newest Isaac ROS software release, Developer Preview (DP) 2. This release includes new cloud– and edge-to-robot task management and monitoring software for autonomous mobile robot (AMR) fleets, as well as additional features for ROS 2 developers.

NVIDIA Isaac ROS consists of individual packages (GEMs) and complete pipelines (NITROS) for hardware-accelerated performance. In addition to performance improvements, the new release adds the following functionality:

  • Mission Dispatch and Client: An open-source CPU package to assign and monitor tasks from a fleet management system to the robot. Mission Dispatch is a cloud-native microservice that can be integrated as part of larger fleet management systems.
  • FreeSpace Segmentation: A hardware-accelerated package for producing a vision AI–based occupancy grid in the proximity of the robot to be used as an input to the navigation stack.
  • H.264 Video Encode and Decode: Hardware-accelerated packages for compressed video data recording and playback. Video data collection is an important part of training AI perception models. The performance of these new GEMs on the NVIDIA Jetson AGX Orin platform was measured at two 1080p stereo cameras at 30 fps (>120 fps total), reducing the data footprint by ~10x.

Mission Dispatch and Client

Figure 1. Architecture of Mission Dispatch and Mission Client software

Mission Dispatch and Client provide a standard, open-source way to assign and track tasks between a fleet management system and ROS 2 robots. Dispatch and Client communicate using VDA5050, an open standard for communications designed specifically for robot fleets. Messages are transmitted wirelessly over MQTT, a lightweight messaging protocol for Internet of Things (IoT) applications.
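
To give a feel for the transport involved, the sketch below publishes a VDA5050-style order message over MQTT with the paho-mqtt library; the broker address, topic layout, and message fields are illustrative placeholders rather than Mission Dispatch's actual schema.

import json
from paho.mqtt import publish

# Illustrative placeholders -- not the actual Mission Dispatch configuration.
TOPIC = "uagv/v1/example_vendor/robot_01/order"   # VDA5050-style topic layout (assumed)

order = {
    "headerId": 1,
    "orderId": "mission-0001",
    "orderUpdateId": 0,
    "nodes": [{"nodeId": "pickup", "sequenceId": 0, "released": True}],
    "edges": [],
}

# Publish one order message to a local MQTT broker (e.g., mosquitto on port 1883).
publish.single(TOPIC, json.dumps(order), hostname="localhost", port=1883, qos=1)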

Mission Dispatch is a containerized micro-service available for download from NGC, or as source code on the NVIDIA Isaac GitHub repo, and can be integrated into fleet management systems. Mission Dispatch has been verified to interoperate with other open-source ROS 2 clients like the recently announced VDA5050 Connector developed by OTTO Motors and InOrbit.

Mission Client, which is compatible with ROS 2 Humble, is available as a package in the NVIDIA Isaac ROS GitHub repo and preintegrated with the Nav2 navigation stack to assign and track navigation and other tasks on the robot.

“As mobile robot deployment in the real world accelerates, interoperability is becoming increasingly critical,” said Ryan Gariepy, CTO at OTTO Motors. “Bridging VDA5050 with ROS2 as an open-source community will promote innovation in fleet management solutions while allowing robot makers to focus on differentiation.”

NVIDIA Isaac ROS performance

NVIDIA Isaac ROS continues to deliver hardware-accelerated performance for the ROS 2 developer community for AI perception, image processing, and navigation. Autonomous robots require advanced AI and computer vision capabilities. Isaac ROS represents our commitment to making it easier for the robotics community to adopt these cutting-edge technologies.

For more information about the latest performance numbers for key Isaac ROS packages, see Isaac ROS Performance Summary.

Figure 2. Improved stereo depth performance of the BI3D model on flat featureless surfaces. (left) Original photo, (middle) DP1.1 release, (right) DP2 release.

Free training for ROS 2 developers

To provide advanced technical training and access to NVIDIA Isaac ROS experts, NVIDIA is announcing a new series of webinars focused on ROS 2 developers. These sessions are free and feature Q&A periods with the technical experts developing accelerated modules for ROS 2.

Figure 3. Line drawing of the TurboTurtle robot with the NVIDIA and ROS logos

The first three webinar topics:

  • November 14, 2022: Pinpoint, 250 fps, ROS 2 localization with vSLAM on Jetson, led by Dr. Raffaello Bonghi
  • December 2022: Using Isaac ROS for Stereo-Based Depth Estimation, led by Hemal Shah
  • December 2022: Building an Isaac ROS accelerated module using YOLOv5, led by Asawaree Bandhi

Register for the November 14 webinar and check back soon, as more webinars will be added to the series.

ROSCon 2022

If you are attending ROSCon in Kyoto, Japan, be sure to attend the technical session gz-omni: Bridging Gazebo with Isaac Sim (livestream) on October 20, 2022 at 2:10PM JST. Visit NVIDIA at booth #22 to see a live demonstration of NVIDIA Isaac ROS in action running on the NVIDIA Jetson AGX Orin Developer Kit.

Getting started

To get started today with NVIDIA Isaac ROS, review the examples summarized in the /NVIDIA-ISAAC-ROS GitHub repo.


How Tarteel Uses AI to Help Arabic Learners Perfect Their Pronunciation

There are some 1.8 billion Muslims, but only 16% or so of them speak Arabic, the language of the Quran. This is in part due to the fact that many Muslims struggle to find qualified instructors to give them feedback on their Quran recitation. Enter today’s guest and his company Tarteel, a member of the Read article >

The post How Tarteel Uses AI to Help Arabic Learners Perfect Their Pronunciation appeared first on NVIDIA Blog.


Explainer: What Is Path Tracing?

Path tracing is going real-time, unleashing interactive, photorealistic 3D environments filled with dynamic light and shadow, reflections, and refractions.
