Program

Confirmed international speakers

  • Niko Sünderhauf, Australian Centre for Robotic Vision and Queensland University of Technology (QUT) in Brisbane
  • Jens Kober, Cognitive Robotics department, Delft University of Technology (TU Delft) 
  • Juxi Leitner, Australian Centre for Robotic Vision
  • Stefan Leutenegger, Imperial College London
  • Wei Pan, Delft University of Technology (TU Delft)

Confirmed national speakers

  1. Pedro Maldonado, Universidad de Chile
  2. Wolfhart Totschnig, Universidad Diego Portales
  3. Rodrigo Verschae, Universidad de O’Higgins
  4. Alexandre Bergel, Universidad de Chile
  5. Javier Ruiz-del-Solar, Universidad de Chile
  6. María José Escobar, Universidad Técnica Federico Santa María
  7. Nicolás Cruz, Universidad de Chile
  8. Francisco Leiva, Universidad de Chile

Final Program

PROGRAM (PDF)

Keynote Presentations


Juxi Leitner, Australian Centre for Robotic Vision

Keynote: (Deep) Learning for Robotic Grasping and Manipulation

Abstract: Advances in machine learning have pushed robotic capabilities forward, especially in visual detection. I will go through how robust visual perception, together with quick visual learning, can create adaptive robotic systems that see and do things in the real world. The talk will present state-of-the-art robotic grasping techniques, highlight some of the open questions and challenges in robotic manipulation, and describe how we are thinking of tackling them.

Short bio: Juxi Leitner leads the Robotic Manipulation efforts of the Australian Centre for Robotic Vision (ACRV) and is co-founder of LYRO Robotics, a spin-out commercialising robotic manipulation research by creating autonomous picking robots. He led Team ACRV, winner of the 2017 Amazon Robotics Challenge, with their robot Cartman.


Jens Kober, Cognitive Robotics department, Delft University of Technology (TU Delft)


Keynote: (Deep) Reinforcement Learning for Robotics

Abstract: Reinforcement learning discovers the optimal mapping from inputs (states) to outputs (actions) through interactions with the system. The agent receives a reward at every step, and its goal is to maximize the cumulative reward. Recent advances in deep reinforcement learning have enabled learning end-to-end, i.e., a mapping from raw sensor inputs (e.g., camera images) to low-level commands (e.g., torques). However, (deep) reinforcement learning often requires significantly more iterations than are feasible on real systems, so collecting sufficient amounts of data is impractical at best. Therefore, a lot of work is done in purely digital or virtual environments. This keynote will give an introduction to (deep) reinforcement learning, the particular challenges of applying it in the robotics domain, and methods that nevertheless render (deep) reinforcement learning in robotics tractable.
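
To make the state/action/reward vocabulary above concrete, here is a toy sketch of tabular Q-learning on a five-state chain. Everything in it (the environment, hyperparameters, and tie-breaking rule) is an illustrative assumption, not code from the talk.

```python
# Tabular Q-learning on a toy chain: the agent learns a state -> action value
# table that maximizes the cumulative discounted reward. Illustrative sketch only.
import random

N_STATES, GOAL = 5, 4            # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]               # move left or move right
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection (ties break toward moving right)
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0    # reward only at the goal
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])
        s = s_next

print([round(max(q), 2) for q in Q])  # state values grow toward the goal state
```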

Talk: Learning State-Representations

Short Bio: Jens Kober is an associate professor at Delft University of Technology, the Netherlands. Jens is the recipient of the 2018 IEEE-RAS Early Academic Career Award in Robotics and Automation. His research interests include motor skill learning, (deep) reinforcement learning, imitation learning, interactive learning, and machine learning for control.


Niko Sünderhauf, Australian Centre for Robotic Vision and Queensland University of Technology (QUT) in Brisbane

Keynote: Introductory lecture on deep learning for robot vision

Talk a: Semantic SLAM

Talk b: The Importance of Uncertainty for Deep Learning in Robotics

Short Bio: Niko is a Chief Investigator of the Australian Centre for Robotic Vision, and a Senior Lecturer at Queensland University of Technology (QUT) in Brisbane, Australia. He conducts research in robotic vision, at the intersection of robotics, computer vision, and machine learning. His research interests focus on scene understanding, semantic SLAM, and incorporating semantics into reinforcement learning. Niko is interested in the reliability and robustness of deep learning in robotics and furthermore leads a project on new benchmarking challenges in robotic vision.


Stefan Leutenegger, Imperial College London

Keynote: Spatial AI for mobile robots

Abstract: This talk will focus on the different levels of real-time spatial perception that mobile robots need for autonomous operation. First, highly accurate and robust visual-inertial SLAM with a focus on motion tracking will be visited, as needed to control e.g. a drone. Next, dense mapping and SLAM approaches will be presented, along with their integration into motion planning. Finally, we will turn to semantic and object-level understanding, combining modern deep-learning-based methods with the geometric and physics-based ones addressed previously. The aim of these recent works is to bridge the gap between sensing and AI and to empower the next generation of mobile robots that need to plan and execute complex tasks in potentially cluttered and dynamic environments, possibly in proximity to people. The talk will cover the foundations (maths, algorithms, and implementations) as well as recent research and example applications, e.g. inspection or construction scenarios, including experiments in proximity to, or physical contact with, structures.

Short Bio: Stefan is a Senior Lecturer (US equivalent: Associate Professor) in Robotics in the Department of Computing at Imperial College London, where he leads the Smart Robotics Lab and co-directs the Dyson Robotics Lab. He also co-founded SLAMcore, a spin-out company commercialising localisation and mapping solutions for robots and drones. Stefan received a BSc and an MSc in Mechanical Engineering with a focus on Robotics and Aerospace Engineering from ETH Zurich, as well as a PhD on “Unmanned solar airplanes: design and algorithms for efficient and robust autonomous operation”, completed in 2014.


Wei Pan, Delft University of Technology


Keynote: Sparse Bayesian (Deep) Learning for Robotics


Wolfhart Totschnig, Institute of Philosophy of Universidad Diego Portales

Talk: Introduction to the ethics of artificial intelligence

Abstract: Until recently, artificial agents were employed only in controlled environments, where their behavior is restricted and hence easily predictable (e.g., robots on an assembly line in a factory). Today, however, we are witnessing the creation of artificial agents that are designed to operate in “real-world”—that is, uncontrolled—situations. Self-driving cars and “autonomous weapon systems” are prominent examples that are already in use, and artificial agents that are still more versatile are in development (e.g., household or health care robots). How can we make sure that such “autonomous” artificial agents behave in accordance with our expectations and desires? This is the central question of the field of research called “ethics of artificial intelligence”. It can be divided into two more specific questions: First, which rules or principles do we want artificial agents to follow? And second, how can we instill in them these rules or principles? The aim of this talk is to offer an introduction to the current debates surrounding these questions.

Short bio: Wolfhart Totschnig is assistant professor at the Institute of Philosophy of Universidad Diego Portales. He has published articles on the ontology of ancient Stoicism, various aspects of Hannah Arendt’s philosophy, and, more recently, the political and ethical questions arising from the advent of artificial intelligence.


Pedro Maldonado, Universidad de Chile

Talk: Similarities and differences between artificial intelligence and the human brain

Abstract: Currently, neuronal networks and deep learning networks are ubiquitous in artificial intelligence. Nevertheless, these brain-inspired paradigms operate under simple mechanisms when compared with real neuronal networks. In this talk, I will present a basic scheme of how the human brain is built and connected, especially the cerebral cortex, which seems to be the central structure underlying our sophisticated cognitive capabilities. I will compare the functional circuitry of computers and the brain to highlight their differences and capabilities, emphasizing first the structural differences, then the nature of neuronal and computational circuits, and finally the cognitive abilities of both systems. I will argue that modern neuroscience has departed significantly from the information-theory-based paradigm, which may enable the design of more efficient artificial intelligence algorithms.

Short Bio: Pedro Maldonado holds a Ph.D. in Physiology from the University of Pennsylvania and is a full professor and chairman of the Department of Neuroscience at the Faculty of Medicine, Universidad de Chile. His research interests center on understanding the neuronal mechanisms that underlie visual perception, memory, and energy in neuronal networks.


Rodrigo Verschae, Universidad de O’Higgins

Talk: Deep Photovoltaic Prediction

Short Bio: Rodrigo Verschae is an Associate Professor at the Institute of Engineering Sciences, Universidad de O’Higgins, an Associated Researcher of the Center for Mathematical Modeling, Universidad de Chile, and Director of the PAR Explora O’Higgins program. He holds a doctorate in Electrical Engineering and is interested in Computer Vision and Machine Learning. Rodrigo has been an assistant professor at Kyoto University, Japan (2015-2018), a postdoctoral fellow at the Advanced Mining Technology Center (AMTC) (2011-2013), a research fellow at the Kyushu Institute of Technology, Japan (2009-2011), and an associated researcher at the Fraunhofer IPK Institute, Germany (2004-2005), among other positions.


Alexandre Bergel, Universidad de Chile

Talk: Building neural networks through neuroevolution

Abstract: Evolutionary algorithms are a family of algorithms inspired by biological evolution. Their goal is to produce models that are fit to address a particular problem, in the same way that biological evolution favors rabbits that run fast enough to escape wolves. Neuroevolution applies an evolutionary algorithm to build neural networks that are fit to solve a particular task. As such, instead of relying on gradient descent, as most neural network training techniques do, neuroevolution is an evolutionary process similar to the one that produced our brains. This talk is an introduction to neuroevolution. In particular, we will briefly review genetic algorithms (the evolutionary algorithms commonly employed in neuroevolution), detail the NEAT algorithm, and highlight how neuroevolution compares with traditional deep learning.
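
As a flavor of the evolutionary loop the abstract describes (selection, crossover, mutation), here is a minimal genetic-algorithm sketch that evolves the weights of a fixed-topology 2-4-1 network to fit XOR. It is a deliberately simplified illustration, not NEAT (which also evolves the network topology) and not the speaker's code; all hyperparameters are assumptions.

```python
# Toy genetic algorithm: evolve the flat weight vector of a small neural
# network instead of training it by gradient descent. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_GENES = 17  # 2x4 weights + 4 biases + 4 weights + 1 bias of a 2-4-1 net

def forward(genes, x):
    """Run the 2-4-1 network encoded by a flat gene vector."""
    w1, b1 = genes[:8].reshape(2, 4), genes[8:12]
    w2, b2 = genes[12:16], genes[16]
    h = np.tanh(x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid output

def fitness(genes):
    """Higher is better: negative mean squared error on XOR."""
    preds = np.array([forward(genes, x) for x in X])
    return -np.mean((preds - y) ** 2)

pop = rng.normal(0.0, 1.0, size=(50, N_GENES))
for generation in range(300):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
    children = [parents[0]]                        # elitism: keep the best
    while len(children) < len(pop):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(N_GENES) < 0.5           # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0.0, 0.1, N_GENES) * (rng.random(N_GENES) < 0.2)  # mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print([round(float(forward(best, x)), 2) for x in X])  # should approach [0, 1, 1, 0]
```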

Short Bio: Alexandre Bergel is an Associate Professor and researcher at the Universidad de Chile. Alexandre Bergel and his collaborators carry out research in software engineering. His focus is on designing tools and methodologies that improve the overall performance and internal quality of software systems, employing profiling, visualization, and artificial intelligence techniques. Alexandre authored the book Agile Visualization and co-authored the book Deep Into Pharo. Currently, he is writing the book Agile Artificial Intelligence.

María José Escobar, Universidad Técnica Federico Santa María

Talk: Towards a Chilean Artificial Intelligence Strategy

Short bio: María José Escobar is currently an Associate Professor at the Department of Electronic Engineering of the Universidad Técnica Federico Santa María, Valparaíso, Chile, a Principal Investigator of the Advanced Center for Electrical and Electronic Engineering (AC3E), and head of the “Data Analytics and Artificial Intelligence” research line. Her research interests include computational neuroscience, biological vision, artificial intelligence, and bio-inspired/cognitive robotics.


Nicolás Cruz & Javier Ruiz-del-Solar, Universidad de Chile

Talk: Bridging the simulation-to-reality gap using generative neural networks

Abstract: One of the main challenges in training machine learning algorithms for robotic applications is the need for large amounts of training data. This is especially relevant for robot vision systems, which must be trained on images that capture the large variability of the target real-world application. Simulations can be used to address this challenge, but this approach suffers from the simulation-to-reality gap: a visual mismatch between the images rendered by simulators and real images. This talk will analyze the use of generative neural networks to address the simulation-to-reality gap.
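
One way to attack this mismatch, sketched below purely for illustration, is to train a generator that translates simulated frames into realistic-looking ones with a GAN-style objective. The networks, sizes, and placeholder data here are assumptions, not the speakers' system.

```python
# GAN-style sim-to-real image translation sketch: a generator G maps simulated
# frames to "realistic" frames; a discriminator D judges real vs. translated.
# Random tensors stand in for the simulated and real image datasets.
import torch
import torch.nn as nn

G = nn.Sequential(                      # sim image -> "realistic" image
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(                      # image -> real/fake score per patch
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, stride=2, padding=1),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    sim = torch.rand(8, 3, 64, 64) * 2 - 1   # placeholder simulated batch
    real = torch.rand(8, 3, 64, 64) * 2 - 1  # placeholder real-camera batch

    # Discriminator step: real images -> 1, translated sim images -> 0.
    fake = G(sim).detach()
    real_scores, fake_scores = D(real), D(fake)
    d_loss = bce(real_scores, torch.ones_like(real_scores)) + \
             bce(fake_scores, torch.zeros_like(fake_scores))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator into scoring translations as real.
    scores = D(G(sim))
    g_loss = bce(scores, torch.ones_like(scores))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Translated images can then be used to train the robot's vision system on data that looks closer to what the real camera will deliver.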


Francisco Leiva & Javier Ruiz-del-Solar, Universidad de Chile

Talk: Deep Reinforcement Learning for Robotic Navigation and Collision Avoidance

Abstract: Deep Reinforcement Learning (DRL) has been able to solve increasingly complex problems in recent years. In robotics, DRL-based solutions for navigation (and related tasks) have shown good performance as well as real-world applicability. In this presentation we will examine some DRL solutions for collision avoidance and target-driven navigation for resource-constrained mobile robots, highlighting their advantages and shortcomings.
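
In tasks like these, much of the design effort goes into the reward signal. The snippet below shows one common shaping for target-driven navigation with collision avoidance (reward progress toward the goal, penalize collisions and wasted time); it is an illustrative assumption, not necessarily the speakers' formulation.

```python
# One common reward shaping for target-driven navigation with collision
# avoidance. All thresholds and magnitudes are illustrative assumptions.
import math

def navigation_reward(pos, prev_pos, goal, min_laser_dist,
                      collision_dist=0.3, goal_dist=0.2):
    """Return (reward, done) for one step of a mobile-robot navigation task."""
    d_prev = math.dist(prev_pos, goal)
    d_now = math.dist(pos, goal)
    if min_laser_dist < collision_dist:    # hit (or nearly hit) an obstacle
        return -10.0, True
    if d_now < goal_dist:                  # reached the target
        return 10.0, True
    return (d_prev - d_now) - 0.01, False  # progress bonus minus a time penalty

print(navigation_reward((1.0, 0.0), (1.2, 0.0), (0.0, 0.0), 1.5))
```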

Short bio: Nicolás Cruz was born in 1992. He studied electrical engineering at the Universidad de Chile, where he was part of the university’s robotics laboratory. He is currently doing his master’s thesis on simulating environments using generative models. He also works at Mundos Virtuales, a Chilean robotics company.

Javier Ruiz-del-Solar, Universidad de Chile

Short Bio: Javier Ruiz-del-Solar is a full professor at the Department of Electrical Engineering of the Universidad de Chile and Executive Director of the Advanced Mining Technology Center. Javier is the recipient of the IEEE RAB Achievement Award 2003, the RoboCup Engineering Challenge Award 2004, the RoboCup @Home Innovation Award in 2007 and 2008, and the RoboCup Symposium Best Paper Award in 2015 and 2017; he has been a Senior Member of the IEEE since 2005. His research interests include (deep) reinforcement learning, computer vision, mobile robotics, and the automation of mining machines.