Demonstrations

Here is the list of demonstrations, which will take place in the new Engineering building (ENG, Building 405, 5 Grafton Rd) and in the OGG building. The detailed program follows the demonstration descriptions.

Each entry below lists the organizers, the demonstration title and description, and the time and location.
– Ananye Agarwal, CMU
– Ashish Kumar, UC Berkeley
– Jitendra Malik, UC Berkeley
– Deepak Pathak, CMU
Legged Locomotion in Challenging Terrains using Egocentric Vision

Description: Animals are capable of precise and agile locomotion using vision. Replicating this ability has been a long-standing goal in robotics. The traditional approach has been to decompose this problem into elevation mapping and foothold planning phases. Elevation mapping, however, is susceptible to failure and large noise artifacts, requires specialized hardware, and is biologically implausible. In this paper, we present the first end-to-end locomotion system capable of traversing stairs, curbs, stepping stones, and gaps. We show this result on a medium-sized quadruped robot using a single front-facing depth camera. The small size of the robot necessitates discovering specialized gait patterns not seen elsewhere. The egocentric camera requires the policy to remember past information to estimate the terrain under its hind feet. We train our policy in simulation. Training has two phases: first, we train a policy using reinforcement learning with a cheap-to-compute variant of the depth image; in the second phase, we distill it into the final depth-based policy using supervised learning. The resulting policy transfers to the real world without any fine-tuning and can traverse a large variety of terrain while remaining robust to perturbations such as pushes, slippery surfaces, and rocky terrain. Videos are at: https://blindsupp.github.io/visual-walking/
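The two-phase recipe (privileged reinforcement learning, then distillation into a depth-based student) is the core of the training pipeline. As a rough, hypothetical illustration only, and not the authors' code, a DAgger-style distillation step might look like the following PyTorch sketch, where `env`, `teacher_policy`, and `student_policy` are assumed stand-in interfaces:

```python
import torch
import torch.nn.functional as F

def distill_step(teacher_policy, student_policy, env, optimizer):
    """One DAgger-style step: roll out the student, supervise with the teacher."""
    obs = env.reset()
    hidden = None  # recurrent state: the student must remember terrain it saw earlier
    losses = []
    for _ in range(env.max_steps):
        with torch.no_grad():
            # The phase-1 teacher acts on the cheap-to-compute depth variant.
            target_action = teacher_policy(obs["cheap_depth"], obs["proprio"])
        # The student acts on the real depth image, carrying memory across steps.
        action, hidden = student_policy(obs["depth_image"], obs["proprio"], hidden)
        losses.append(F.mse_loss(action, target_action))
        obs = env.step(action)  # the rollout follows the student's own actions
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The detail the sketch tries to capture is that the student is supervised on its own rollouts, with a recurrent state standing in for the memory of terrain under the hind feet mentioned above.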
Time: Dec 15, 12:00pm-12:30pm & 3:30pm-4:00pm; Dec 17, 10:30am-11:00am
Location: OGG Building, Level 0

– Xuxin Cheng, CMU
– Ashish Kumar, UC Berkeley
– Deepak Pathak, CMU
Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion

Description: Locomotion has seen dramatic progress for walking or running across challenging terrains. However, robotic quadrupeds still lag far behind their biological counterparts, such as dogs, which display a variety of agile skills and can use their legs beyond locomotion to perform basic manipulation tasks like interacting with objects and climbing. In this paper, we take a step towards bridging this gap by training quadruped robots not only to walk but also to use their front legs to climb walls, press buttons, and interact with objects in the real world. To handle this challenging optimization, we broadly decouple skill learning into locomotion, which covers any movement of the body, whether by walking or climbing a wall, and manipulation, which involves using one leg to interact with the environment while balancing on the other three legs. These skills are trained in simulation using a curriculum and transferred to the real world using our proposed sim2real variant that builds upon recent locomotion success. Finally, we combine these skills into a robust long-horizon plan by learning a behavior tree that encodes a high-level task hierarchy from a single clean expert demonstration. We evaluate our method in both simulation and the real world, showing successful execution of both short- and long-range tasks and how robustness helps confront external perturbations.
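To make the skill-composition step concrete, here is a minimal, hypothetical behavior-tree sketch in Python; the node classes and skill names are illustrative assumptions, not the learned tree from the paper:

```python
class Sequence:
    """Tick children in order; stop and report failure if any child fails."""
    def __init__(self, *children):
        self.children = children

    def tick(self, robot):
        # all() short-circuits, so a failed skill halts the sequence.
        return all(child.tick(robot) for child in self.children)

class Skill:
    """Wrap a learned policy (locomotion or manipulation) as a leaf node."""
    def __init__(self, policy, is_done, name):
        self.policy, self.is_done, self.name = policy, is_done, name

    def tick(self, robot):
        while not self.is_done(robot):
            robot.act(self.policy(robot.observe()))
        return True

# A long-horizon task is then just a sequence of short skills, e.g.:
#   Sequence(Skill(walk_policy, at_button, "walk to button"),
#            Skill(press_policy, pressed, "press button with front leg"),
#            Skill(climb_policy, on_wall, "climb wall"))
```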
Time: Dec 15, 10:00am-10:15am & 4:15pm-4:30pm; Dec 17, 9:45am-10:00am
Location: OGG Building, Level 0

– Jaden Clark, Stanford University
– Jie Tan, Google Research
– Tingnan Zhang, Google Research
– Xuesu Xiao, George Mason University
– Nathan Kau
– Gabrael Levine, Stanford University
– Stuart Bowers, Hands on Robotics
– Selena Sun, Stanford University
– Liana Tilton, Washington University in St. Louis
Pupper: An Open-source AI Robotics Educational Curriculum and Platform

Description: Robotics education is challenging due to a lack of accessible hardware. Especially when teaching AI, robotics courses are often restricted to simulation, which leaves out crucial components of the field. We aim to combat these challenges using Pupper, an open-source quadruped that can be built in-house for under $2000. Using Pupper and a carefully designed curriculum, we aim to democratize AI and robotics education, helping spread the field to underrepresented and low-income groups. Our curriculum takes students from PID control of a single 3-DOF arm (constructed from a leg of Pupper) to reinforcement learning in a physics-based simulation and on the physical quadruped. Students construct Pupper themselves, connecting what they know about both hardware and software. Although students often begin the course with little to no robotics or AI experience, throughout the course they gain hands-on experience with the most important topics in AI robotics, such as onboard perception, reinforcement learning, and sim-to-real adaptation. So far we have donated 11 Puppers, and our course has been offered at Stanford, Washington University in St. Louis, Foothill College, Brandeis, and other schools. At this demo, we will present this robotics curriculum through a live and interactive demonstration with our robots. Attendees will be able to train and deploy RL policies in real time on both the 3-DOF arm and Pupper. We believe this demonstration will help publicize our curriculum and encourage other members of the robot learning community to host the course at their own institutions.
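As a flavor of the curriculum's starting point, here is a minimal per-joint PID position controller of the kind students might implement first; the gains and the interface are illustrative, not the actual course starter code:

```python
class PIDController:
    """Minimal per-joint PID position controller (illustrative gains and units)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_angle, measured_angle):
        error = target_angle - measured_angle
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # The output is a motor command (e.g., torque or PWM duty cycle).
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per joint of the 3-DOF arm, run at a fixed control rate:
joints = [PIDController(kp=4.0, ki=0.1, kd=0.05, dt=0.01) for _ in range(3)]
```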
Time: Full day, Dec 16
Location: OGG, Level 1

– Zipeng Fu, Stanford University
– Xuxin Cheng, CMU
– Deepak Pathak, CMU
Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion

Description: An attached arm can significantly increase the applicability of legged robots to mobile manipulation tasks that are not possible for their wheeled or tracked counterparts. The standard control pipeline for such legged manipulators is to decouple the controller into separate manipulation and locomotion modules. However, this is ineffective and requires immense engineering to coordinate the arm and legs, and errors can propagate across modules, causing non-smooth, unnatural motions. It is also biologically implausible, as there is evidence for strong motor synergies across limbs. In this work, we propose to learn a unified policy for whole-body control of a legged manipulator using reinforcement learning. Further project information: https://manipulation-locomotion.github.io/
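As a sketch of what "a unified policy" means in practice, the following hypothetical PyTorch module commands leg and arm joints from one shared observation; the layer sizes and the 12+6 joint split are assumptions, not the actual architecture (see the project page for that):

```python
import torch
import torch.nn as nn

class WholeBodyPolicy(nn.Module):
    """One network for all limbs, so cross-limb coordination is learned, not engineered."""
    def __init__(self, obs_dim, leg_joints=12, arm_joints=6):
        super().__init__()
        self.leg_joints = leg_joints
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, leg_joints + arm_joints),  # single head for legs + arm
        )

    def forward(self, obs):
        targets = self.net(obs)
        # Joint-position targets for legs and arm come from the same forward pass.
        return targets[..., : self.leg_joints], targets[..., self.leg_joints :]

policy = WholeBodyPolicy(obs_dim=48)
leg_targets, arm_targets = policy(torch.randn(1, 48))
```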
Time: Dec 15, 11:30am-12:00pm; Dec 18, 10:45am-11:15am
Location: OGG Building, Level 0

– Gabriel Margolis, MIT
– Ananye Agarwal, CMU
– Shamel Fahmi, MIT
– Akshara Rai, FAIR
– Deepak Pathak, CMU
– Pulkit Agrawal, MIT
Demonstrations of the Workshop on “Sim-to-Real Robot Learning: Locomotion and Beyond”

Description: We invited workshop participants to submit a live demonstration proposal for inclusion in the workshop. Demonstrators will give a short on-stage spotlight demo, and will additionally present a poster and/or continue their demonstration during the poster session. See the workshop website for more information: https://sites.google.com/view/corl-22-sim-to-real/home?authuser=0
Time: Dec 15, 10:30am-11:30am & 1:30pm-3:15pm
Location: ENG Building, Level 3 (ENG-D-L3)

– Gabriel Margolis, MIT
– Pulkit Agrawal, MIT
Walk These Ways: Tuning Robot Control for Generalization with Multiplicity of Behavior

Description: Learned locomotion policies can rapidly adapt to diverse environments similar to those experienced during training but lack a mechanism for fast tuning when they fail in an out-of-distribution test environment. This necessitates a slow and iterative cycle of reward and environment redesign to achieve good performance on a new task. As an alternative, we propose learning a single policy that encodes a structured family of locomotion strategies that solve training tasks in different ways, resulting in Multiplicity of Behavior (MoB). Different strategies generalize differently and can be chosen in real-time for new tasks or environments, bypassing the need for time-consuming retraining. We release a fast, robust open-source MoB locomotion controller, Walk These Ways, that can execute diverse gaits with variable footswing, posture, and speed, unlocking diverse downstream tasks: crouching, hopping, high-speed running, stair traversal, bracing against shoves, rhythmic dance, and more. Video and code release: https://gmargo11.github.io/walk-these-ways
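To illustrate how MoB is used at deployment time, here is a hypothetical sketch of runtime behavior selection: switching gaits is a change of the policy's conditioning inputs, not a retraining step. The parameter names follow the description above, but the dictionary keys and policy interface are assumptions:

```python
import numpy as np

def act(policy, proprio, behavior):
    """Select a behavior at runtime by conditioning the trained policy."""
    command = np.array([
        behavior["vx"],               # forward speed
        behavior["gait_frequency"],   # stepping rate
        behavior["footswing_height"],
        behavior["body_height"],      # posture
    ], dtype=np.float32)
    # The behavior parameters are simply appended to the observation.
    obs = np.concatenate([proprio, command])
    return policy(obs)

# Switching from crouching to high-speed running is a change of inputs only:
crouch = {"vx": 0.5, "gait_frequency": 2.0, "footswing_height": 0.05, "body_height": 0.15}
run    = {"vx": 3.0, "gait_frequency": 4.0, "footswing_height": 0.10, "body_height": 0.30}
```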
Time: Dec 16, 10:45am-11:15am & 3:00pm-3:30pm
Location: OGG, Level 0

– Nick Pickering, University of Waikato
– Thomas Carnahan, University of Waikato
Labour Decision Support Tools: Flower Bud Counting

Description: Agriculture is facing a period of unprecedented change, driven by the need to feed 10 billion mouths by 2050 while operating under labour shortages and rising sustainability expectations. To meet these challenges, many are looking to Industry 4.0 technology as a solution, specifically the combination of Artificial Intelligence (AI), the Internet of Things (IoT), in-field robotics, and digital twins. Although there have been successful prototypes and promising start-ups, the complex nature of horticulture growing and supply-chain systems creates a significant risk of mass-adoption failure, because siloed technology approaches lead to usability, availability, viability, and interoperability challenges. A joint academia/industry project is working towards a collaborative System of Systems (SoS) through the use of a shared autonomous survey robot and digital twin platform. The programme has started with kiwifruit flower counting and canopy cover identification to support grower decision making on crop loading and labour allocation, with plans to expand the collaboration into pest/disease detection, fruit estimation, and harvest optimisation. This presentation explains how Industry 4.0 and an SoS approach could help the horticulture industry accelerate innovation and scale adoption, so that research, industry, and government partners can be better together.
Time: Dec 15, 9:45am-10:00am & 4:45pm-5:00pm; Dec 16, 10:00am-10:15am (vehicle on display all day, Dec 15 and Dec 16)
Location: OGG, Demonstration Area (OGG-D)

– Rahul Jangali, University of Waikato
– Hin Lim, University of Waikato
– Henry Williams, University of Auckland
– Bruce MacDonald, University of Auckland
MaaraTech – Archie Junior Agriculture Robot

Description: The New Zealand grape industry has long been challenged by a labour shortage. Robots could be an option for resolving this issue; however, changing weather patterns and growing systems make robotic solutions difficult to implement. An autonomous overarching navigation platform has been developed for robotic grape vine pruning and 3D reconstruction of the vine branches from both sides of the plant canopy. Two UR5 robotic arms are mounted inside the platform, one on each side, and a custom end effector fastened to each arm prunes the vines. Each arm also carries a stereo camera for scanning, cut-point detection, and reconstruction of the grape vines. The platform's overarching frame guarantees constant light conditions and shields against varying weather, such as wind and rain, enabling consistent 3D reconstruction and identification of cut sites for reliable robotic vine trimming.
Time: Dec 16, 10:15am-10:30am & 12:45pm-1:00pm (vehicle on display all day, Dec 16)
Location: OGG, Demonstration Area (OGG-D)

Demonstrations Schedule

Demonstration Title | Day | Time | Location
Labour Decision Support Tools: Flower Bud Counting | Thursday, Dec 15 | 9:45am-10:00am | OGG, Demonstration Area (OGG-D)
Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion | Thursday, Dec 15 | 10:00am-10:15am | ENG Building, Level 3 (ENG-D-L3)
Demonstrations of the Workshop on “Sim-to-Real Robot Learning: Locomotion and Beyond” | Thursday, Dec 15 | 10:30am-11:30am | ENG Building, Level 3 (ENG-D-L3)
Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion | Thursday, Dec 15 | 11:30am-12:00pm | ENG Building, Level 4 (ENG-D-L4)
Legged Locomotion in Challenging Terrains using Egocentric Vision | Thursday, Dec 15 | 12:00pm-12:30pm | ENG Building, Level 3 (ENG-D-L3)
Demonstrations of the Workshop on “Sim-to-Real Robot Learning: Locomotion and Beyond” | Thursday, Dec 15 | 1:30pm-3:15pm | ENG Building, Level 3 (ENG-D-L3)
Legged Locomotion in Challenging Terrains using Egocentric Vision | Thursday, Dec 15 | 3:30pm-4:00pm | ENG Building, Level 3 (ENG-D-L3)
Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion | Thursday, Dec 15 | 4:15pm-4:30pm | ENG Building, Level 3 (ENG-D-L3)
Labour Decision Support Tools: Flower Bud Counting | Thursday, Dec 15 | 4:45pm-5:00pm | OGG, Demonstration Area (OGG-D)
Pupper: An Open-source AI Robotics Educational Curriculum and Platform | Friday, Dec 16 | Full day | OGG, Level 1
Labour Decision Support Tools: Flower Bud Counting | Friday, Dec 16 | 10:00am-10:15am | OGG, Demonstration Area (OGG-D)
MaaraTech – Archie Junior Agriculture Robot | Friday, Dec 16 | 10:15am-10:30am | OGG, Demonstration Area (OGG-D)
Walk These Ways: Tuning Robot Control for Generalization with Multiplicity of Behavior | Friday, Dec 16 | 10:50am-11:20am | OGG, Level 0
MaaraTech – Archie Junior Agriculture Robot | Friday, Dec 16 | 12:45pm-1:00pm | OGG, Demonstration Area (OGG-D)
Walk These Ways: Tuning Robot Control for Generalization with Multiplicity of Behavior | Friday, Dec 16 | 3:05pm-3:35pm | OGG, Level 0
Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion | Saturday, Dec 17 | 9:45am-10:00am | OGG Building, Level 0
Legged Locomotion in Challenging Terrains using Egocentric Vision | Saturday, Dec 17 | 10:30am-11:00am | OGG Building, Level 0
Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion | Sunday, Dec 18 | 10:45am-11:15am | OGG Building, Level 0