Confirmed international speakers
- Niko Sünderhauf, Australian Centre for Robotic Vision and Queensland University of Technology (QUT) in Brisbane
- Jens Kober, Cognitive Robotics department, Delft University of Technology (TU Delft)
- Juxi Leitner, Australian Centre for Robotic Vision
- Stefan Leutenegger, Imperial College London
Confirmed national speakers
- Pedro Maldonado, Universidad de Chile
- Wolfhart Totschnig, Universidad Diego Portales
- Rodrigo Verschae, Universidad de O’Higgins
Juxi Leitner, Australian Centre for Robotic Vision
Keynote: (Deep) Learning for Robotic Grasping and Manipulation
Abstract: Advances in machine learning have pushed robotic capabilities, especially visual detection. I will discuss how robust visual perception, together with quick visual learning, can create adaptive robotic systems that see and do things in the real world. The talk will present state-of-the-art robotic grasping techniques, highlight some of the open questions and challenges in the field of robotic manipulation, and outline how we are thinking of tackling them.
Short bio: Juxi Leitner leads the Robotic Manipulation efforts of the Australian Centre for Robotic Vision (ACRV) and is co-founder of LYRO Robotics, a spin-out commercialising robotic manipulation research by creating autonomous picking robots. He led Team ACRV, winner of the 2017 Amazon Robotics Challenge with their robot Cartman.
Jens Kober, Cognitive Robotics department, Delft University of Technology (TU Delft)
Abstract: Reinforcement learning learns the optimal mapping from inputs (states) to outputs (actions) through interactions with the system. The agent receives a reward in every step, and its goal is to optimize for the cumulative reward. Recent advances in deep reinforcement learning have enabled learning end-to-end, i.e., a mapping from raw sensor inputs (e.g., camera images) to low-level commands (e.g., torques). However, (deep) reinforcement learning often requires significantly more iterations than are feasible on real systems, so collecting sufficient amounts of data is impractical at best. Therefore, much work is done in purely digital or virtual environments. This keynote will give an introduction to (deep) reinforcement learning, the particular challenges of applying it in the robotics domain, and methods that nevertheless render (deep) reinforcement learning tractable in robotics.
Talk: Learning State Representations
Short Bio: Jens Kober is an associate professor at Delft University of Technology, Netherlands. Jens is the recipient of the IEEE-RAS Early Academic Career Award in Robotics and Automation 2018. His research interests include motor skill learning, (deep) reinforcement learning, imitation learning, interactive learning, and machine learning for control.
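The cumulative-reward objective mentioned in the abstract above can be made concrete with a minimal sketch. The following is an illustrative toy example (not material from the talk): tabular Q-learning on a hypothetical five-state chain world, where the agent learns the expected cumulative discounted reward of each state-action pair. All names and parameters here are invented for illustration.

```python
import random

# Toy deterministic chain MDP (hypothetical illustration): states 0..4,
# actions 0 = left, 1 = right; reaching state 4 ends the episode with reward 1.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, float(nxt == GOAL), nxt == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: estimate Q(s, a), the cumulative discounted reward."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    def greedy(s):
        best = max(Q[s])
        # Break ties randomly so the untrained agent explores both directions.
        return rng.choice([a for a in (0, 1) if Q[s][a] == best])

    for _ in range(episodes):
        s = rng.randrange(GOAL)            # random non-terminal start state
        for _ in range(100):               # cap episode length
            a = rng.randrange(2) if rng.random() < eps else greedy(s)
            s2, r, done = step(s, a)
            # Temporal-difference update toward reward plus discounted future value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
            s = s2
            if done:
                break
    return Q

Q = q_learning()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]  # greedy policy
```

After training, the greedy policy moves right toward the goal from every non-terminal state. The gap the abstract points to is that this tabular loop needs thousands of interactions even for five states; on a real robot with raw sensor inputs, such sample counts are exactly what makes naive (deep) RL impractical.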
Niko Sünderhauf, Australian Centre for Robotic Vision and Queensland University of Technology (QUT) in Brisbane
Keynote: Deep Learning for Robotic Vision
Talk: Semantic SLAM (tentative)
Short Bio: Niko is a Chief Investigator of the Australian Centre for Robotic
Vision, and a Senior Lecturer at Queensland University of Technology
(QUT) in Brisbane, Australia.
He conducts research in robotic vision, at the intersection of robotics,
computer vision, and machine learning. His research interests focus on
scene understanding, semantic SLAM, and incorporating semantics into
reinforcement learning. Niko is interested in the reliability and
robustness of deep learning in robotics and furthermore leads a project
on new benchmarking challenges in robotic vision.
Stefan Leutenegger, Imperial College London
Keynote: Spatial perception for mobile robots
Abstract: This talk will focus on the different levels of real-time spatial perception for mobile robots, as needed for their autonomous operation. First, highly accurate and robust visual-inertial SLAM with a focus on motion tracking will be covered, as needed to control e.g. a drone. Next, dense mapping and SLAM approaches will be presented, along with their integration into motion planning. Finally, we will turn to semantic and object-level understanding, combining modern deep-learning-based methods with the geometric and physics-based ones addressed previously. The aim of these recent works is to bridge the sense-AI gap and empower the next generation of mobile robots that need to plan and execute complex tasks, interacting with potentially cluttered and dynamic environments, possibly in proximity to people. The talk will cover both the foundations (maths, algorithms, and implementations) and recent research, with example applications to e.g. inspection or construction scenarios, including experiments in proximity to, or in physical contact with, structure.
Short Bio: Stefan is a Senior Lecturer (US equivalent: Associate Professor) in Robotics in the Department of Computing at Imperial College London, where he leads the Smart Robotics Lab and furthermore co-directs the Dyson Robotics Lab. He has also co-founded SLAMcore, a spin-out company aiming at the commercialisation of localisation and mapping solutions for robots and drones. Stefan received a BSc and an MSc in Mechanical Engineering with a focus on Robotics and Aerospace Engineering from ETH Zurich, as well as a PhD on “Unmanned solar airplanes: design and algorithms for efficient and robust autonomous operation”, completed in 2014.
Wolfhart Totschnig, Institute of Philosophy of Universidad Diego Portales
Talk: Introduction to the ethics of artificial intelligence
Abstract: Until recently, artificial agents were employed only in controlled environments, where their behavior is restricted and hence easily predictable (e.g., robots on an assembly line in a factory). Today, however, we are witnessing the creation of artificial agents that are designed to operate in “real-world”—that is, uncontrolled—situations. Self-driving cars and “autonomous weapon systems” are prominent examples that are already in use, and artificial agents that are still more versatile are in development (e.g., household or health care robots). How can we make sure that such “autonomous” artificial agents behave in accordance with our expectations and desires? This is the central question of the field of research called “ethics of artificial intelligence”. It can be divided into two more specific questions: First, which rules or principles do we want artificial agents to follow? And second, how can we instill in them these rules or principles? The aim of this talk is to offer an introduction to the current debates surrounding these questions.
Short bio: Wolfhart Totschnig is an assistant professor at the Institute of Philosophy of Universidad Diego Portales. He has published articles on the ontology of ancient Stoicism, various aspects of Hannah Arendt’s philosophy, and, more recently, the political and ethical questions arising from the advent of artificial intelligence.
Pedro Maldonado, Universidad de Chile
Talk: Similarities and differences between artificial intelligence and the human brain
Abstract: Currently, neural networks and deep learning networks are ubiquitous in artificial intelligence. Nevertheless, these brain-inspired paradigms operate under simple mechanisms when compared with real neuronal networks. In this talk, I will present a basic scheme of the way the human brain is built and connected, especially the cerebral cortex, which seems to be the central structure of our sophisticated cognitive capabilities. I will compare the functional circuitry of computers and the brain to highlight their differences and capabilities, emphasizing first structural differences, then the nature of neuronal and computational circuits, and finally the cognitive abilities of both systems. I will argue that modern neuroscience has departed significantly from the information-theory-based paradigm, which may enable the design of more efficient artificial intelligence algorithms.
Short Bio: Pedro Maldonado holds a Ph.D. in Physiology from the University of Pennsylvania and is a Full Professor and Chairman of the Department of Neuroscience at the Faculty of Medicine, Universidad de Chile. His research interests center on understanding the neuronal mechanisms that underlie visual perception, memory, and energy in neuronal networks.
Rodrigo Verschae, Universidad de O’Higgins
Talk: Deep Photovoltaic Prediction
Short Bio: Rodrigo Verschae is an Associate Professor at the Institute of Engineering Sciences, Universidad de O’Higgins, an Associate Researcher of the Center for Mathematical Modeling, Universidad de Chile, and Director of the PAR Explora O’Higgins program. He holds a doctorate in Electrical Engineering and is interested in Computer Vision and Machine Learning. Rodrigo has been an assistant professor at Kyoto University, Japan (2015-2018), a postdoctoral fellow at the Advanced Mining Technology Center (AMTC) (2011-2013), a research fellow at the Kyushu Institute of Technology, Japan (2009-2011), and an associated researcher at the Fraunhofer IPK Institute, Germany (2004-2005), among other positions.