Confirmed International speakers
- Nicholas Roy, Robust Robotics Group, CSAIL, MIT
- Niko Sünderhauf, Australian Centre for Robotic Vision and Queensland University of Technology (QUT) in Brisbane
- Jens Kober, Cognitive Robotics department, Delft University of Technology (TU Delft)
- Juxi Leitner, Australian Centre for Robotic Vision
- Stefan Leutenegger, Imperial College London
Keynote Presentations
Juxi Leitner, Australian Centre for Robotic Vision
Title: (Deep) Learning for Robotic Grasping and Manipulation
Abstract: Advances in machine learning have pushed robotic
capabilities forward, especially in visual detection. I will discuss
how robust visual perception, together with rapid visual learning,
can create adaptive robotic systems that see and act in the real
world. The talk will present state-of-the-art robotic grasping
techniques and highlight some of the open questions and challenges
in the field of robotic manipulation, as well as how we are thinking
of tackling them.
Short bio: Juxi Leitner leads the Robotic Manipulation efforts of the Australian Centre for Robotic Vision (ACRV) and is a co-founder of LYRO Robotics, a spin-out commercialising robotic manipulation research by creating autonomous picking robots. He was the leader of Team ACRV, the winner of the 2017 Amazon Robotics Challenge, with their robot Cartman.
Jens Kober, Cognitive Robotics department, Delft University of Technology (TU Delft)
Abstract: Reinforcement learning learns the optimal mapping from inputs (states) to outputs (actions) through interactions with the system. The agent receives a reward at every step, and its goal is to optimize the cumulative reward. Recent advances in deep reinforcement learning have enabled learning end-to-end, i.e., a mapping from raw sensor inputs (e.g., camera images) to low-level commands (e.g., torques). However, (deep) reinforcement learning often requires significantly more iterations than are feasible on real systems, so collecting sufficient amounts of data is impractical at best; much of the work is therefore done in purely digital or virtual environments. This keynote will give an introduction to (deep) reinforcement learning, the particular challenges of applying it in the robotics domain, and methods that nevertheless render (deep) reinforcement learning tractable in robotics.
Short Bio: Jens Kober is an associate professor at Delft University of Technology, Netherlands. Jens is the recipient of the IEEE-RAS Early Academic Career Award in Robotics and Automation 2018. His research interests include motor skill learning, (deep) reinforcement learning, imitation learning, interactive learning, and machine learning for control.
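The abstract above describes the core reinforcement-learning loop: an agent maps states to actions and optimizes the cumulative reward it collects. As an illustration only (not the methods from the talk), here is a minimal tabular Q-learning sketch on a hypothetical toy chain environment, where the agent must learn to walk right toward a goal state:

```python
import random

random.seed(0)

N_STATES = 5          # linear chain of states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Q-table: Q[state][action_index], initialised to zero
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: reward 1.0 only when the goal state is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Pick the action with the highest Q-value, breaking ties randomly."""
    if Q[state][0] == Q[state][1]:
        return random.randrange(2)
    return 0 if Q[state][0] > Q[state][1] else 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration: mostly exploit, occasionally act randomly
        a = random.randrange(2) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning update: nudge Q toward reward + discounted best future value
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

# Greedy policy per non-goal state: 1 means "move right" (toward the goal)
policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)
```

The per-step reward and discounted cumulative objective are exactly the quantities the abstract mentions; the gap between this toy table and learning from raw camera images to torques is where the deep variants, and the sample-efficiency challenges the talk addresses, come in.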
Nicholas Roy, Robust Robotics Group, CSAIL, MIT
Niko Sünderhauf, Australian Centre for Robotic Vision and Queensland University of Technology (QUT) in Brisbane
Short Bio: Niko is a Chief Investigator of the Australian Centre for Robotic
Vision, and a Senior Lecturer at Queensland University of Technology
(QUT) in Brisbane, Australia.
He conducts research in robotic vision, at the intersection of robotics,
computer vision, and machine learning. His research interests focus on
scene understanding, semantic SLAM, and incorporating semantics into
reinforcement learning. Niko is interested in the reliability and
robustness of deep learning in robotics and furthermore leads a project
on new benchmarking challenges in robotic vision.
Stefan Leutenegger, Imperial College London
Title: Spatial perception for mobile robots
Abstract: This talk will focus on the different levels of real-time spatial perception that mobile robots need for autonomous operation. First, I will cover highly accurate and robust visual-inertial SLAM with a focus on motion tracking, as needed, for example, to control a drone. Next, I will present dense mapping and SLAM approaches, along with their integration into motion planning. Finally, we will turn to semantic and object-level understanding, combining modern deep-learning-based methods with the geometric and physics-based approaches addressed previously. The aim of these recent works is to bridge the sense-AI gap and empower the next generation of mobile robots that must plan and execute complex tasks while interacting with potentially cluttered and dynamic environments, possibly in proximity to people. The talk will cover the underlying mathematics, algorithms, and implementations, as well as recent research and example applications such as inspection and construction scenarios, including experiments in proximity to, or in physical contact with, structures.
Short Bio: Stefan is a Senior Lecturer (US equivalent: Associate Professor) in Robotics in the Department of Computing at Imperial College London, where he leads the Smart Robotics Lab and co-directs the Dyson Robotics Lab. He also co-founded SLAMcore, a spin-out company aiming to commercialise localisation and mapping solutions for robots and drones. Stefan received a BSc and MSc in Mechanical Engineering with a focus on Robotics and Aerospace Engineering from ETH Zurich, as well as a PhD on “Unmanned solar airplanes: design and algorithms for efficient and robust autonomous operation”, completed in 2014.