Ali Shafti


Research Associate in Robotics and AI @ Imperial College London

News

  • 07/2020: Our work on the EU Enhance project highlighted by the EU Innovation Radar!

  • 07/2020: Organised seminar with Imperial College AI Network and Robotics forum on AI/Robotics in Healthcare.

  • 07/2020: Paper accepted at IEEE/RSJ IROS'20 on real-world human-robot collaborative reinforcement learning.

  • 06/2020: Invited talk on Explainable Human-Robot Interaction at the ICRA2020 Workshop on Human-Robot Handovers.

  • 05/2020: Preprint on how human motor coordination copes with supernumerary robotic augmentation.

  • 03-06/2020: Working from home under the COVID-19 restrictions - be safe out there!

  • 03/2020: Invited talk on Explainable Robotics at the University of Cambridge's Human Enhancement Seminar.

  • 03/2020: Preprint on real-world human-robot collaborative reinforcement learning.

  • 01/2020: Happy New Year! New year, new CV ;-)

  • 12/2019: Preprint on human-robot collaborative learning with deep RL - presented at the NeurIPS'19 Workshop on Robot Learning.

  • 10/2019: Preprint on the use of action grammars to consistently improve sample efficiency in RL agents.

  • 10/2019: Featured on the Imperial College London website on the topic of explainable intelligent robotics.

  • 09/2019: Preprint + video: how human gaze attention prediction helps train better autonomous driving agents.

  • News archive...

Latest research demo videos (more here):

Real-World Human-Robot Collaborative RL

A setup for real-world human-robot reinforcement learning of a fully collaborative motor task, in the form of a marble-maze game.

Gaze Prediction for Autonomous Driving

Predicting human visual attention helps train autonomous driving agents - masking the input with the predicted attention lets the agent "see what matters".
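As a rough sketch of the idea (not the actual pipeline from the paper), a predicted gaze-saliency map can be used to down-weight image regions before they reach the driving agent; the function name, the blending scheme and the 0.1 floor below are illustrative assumptions:

```python
import numpy as np

def mask_frame_with_gaze(frame, gaze_saliency, floor=0.1):
    """Down-weight image regions a human driver is unlikely to attend to.

    frame:         H x W x 3 camera image, values in [0, 1]
    gaze_saliency: H x W predicted attention map, values in [0, 1]
    floor:         minimum weight, so the periphery is dimmed rather than erased
    """
    weights = floor + (1.0 - floor) * gaze_saliency[..., None]
    return frame * weights

# The driving policy would then be trained on the masked frame
# rather than the raw camera image.
```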

Learning Explainable Robotic Manipulations

Hierarchical Reinforcement Learning is used to create more explainable representations of the manipulating agent's understanding of world dynamics.

Gaze-based HRI + Arm Inverse Kinematics

The system is aware of the human user's arm kinematics - this allows full control of the user's hand orientation whilst keeping the interaction comfortable.

About me

I study the interaction and collaboration between humans and robots. I work on making these intuitive and natural, aiming for greater synergy and augmented capabilities on both sides. I am interested in achieving machine intelligence while preserving the role of human intelligence as an essential part of the action/perception loop and the overall interaction. To this end, my research applies machine learning and human behaviour analysis to human-robot interaction and collaboration.

My original training is in electronics and electrical engineering. During my BSc and MSc I focused on microelectronics and analogue/digital circuit design. For my PhD I expanded my research into robotics, focusing on the electronics and computer science aspects, with human-robot interaction/collaboration as an area of application. As part of my postdoctoral research, I am now exploring machine intelligence and AI methods and their application to human-robot interaction and collaboration.

For more details, please see my CV.

Collaboration

I am interested in collaborations on the research topics above, as well as other topics that fit my expertise. These can be academic or industrial, in the form of joint research projects or consulting. Do get in touch!