Ali Shafti

Senior Research Associate in Robotics and AI, Imperial College London


  • 08/2021: New Scientist has a report on our research on Robotic Human Augmentation.

  • 08/2021: Grant funded by the EPSRC-N+ on Human-Like Computing, for work on Trustworthy Human-Robot Collaboration.

  • 07/2021: Now on Scientific Reports: Our Action Grammars methods used to study human behaviour & evolution.

  • 07/2021: Invited talk at ICL Robotics Forum - TUM MSRM Academic Workshop on human-in-the-loop robotics.

  • 06/2021: Paper accepted at Scientific Reports on the evolution of Human Action Grammars - to appear in July.

  • 05/2021: Catch my talk at NER, and later this month at VSS; both on human interfacing through gaze for robotics.

  • 04/2021: Catch our talk on robotic human augmentation at NCM, delivered by Prof. Aldo Faisal.

  • 03/2021: Talk accepted at NCM 2021 (10% acceptance), on robotic augmentation of humans.

  • 03/2021: Talk accepted at ACM CHI 2021 workshop on RL for HCI, on human-robot collaborative RL.

  • 03/2021: New preprint on neuromuscular reinforcement learning to actuate human limbs like robot arms.

  • 02/2021: Talk + poster accepted at VSS 2021, on gaze intention decoding + autonomous driving w/ gaze attention.

  • 02/2021: 2 papers accepted at IEEE NER 2021, on gaze interfaces for cognitive human-robot interaction.

  • 01/2021: Invited talk at the Imperial College London Dept. of Bioengineering Seminar. Come say Hi online!

  • 01/2021: Happy new year - or at least congratulations on finishing 2020, let's hope for a better one!

  • News archive...

Latest research demo videos (more here):

Real-World Human-Robot Collaborative RL

A setup for real-world human-robot reinforcement learning of a fully collaborative motor task, in the form of a marble-maze game.

Gaze Prediction for Autonomous Driving

Prediction of human visual attention helps with the training of autonomous driving agents - attention masking helps the agent "see what matters".
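The attention-masking idea can be illustrated with a minimal sketch: weight each pixel of the agent's input frame by a predicted gaze saliency map, so unattended regions are dimmed rather than erased. This is not the demo's actual implementation; the function name, the `floor` parameter, and the toy data are all illustrative.

```python
import numpy as np

def apply_gaze_mask(frame, gaze_saliency, floor=0.1):
    """Weight an input frame by a predicted gaze saliency map.

    frame: (H, W, C) image, float values in [0, 1]
    gaze_saliency: (H, W) predicted attention map in [0, 1]
    floor: minimum weight, so unattended regions are dimmed, not erased
    """
    weights = floor + (1.0 - floor) * gaze_saliency
    return frame * weights[..., None]  # broadcast over channels

# Toy example: attention on the centre of a 4x4 frame.
frame = np.ones((4, 4, 3))
saliency = np.zeros((4, 4))
saliency[1:3, 1:3] = 1.0
masked = apply_gaze_mask(frame, saliency)
```

The non-zero floor is one common design choice: it keeps peripheral context available to the agent while still emphasising "what matters".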

Learning Explainable Robotic Manipulations

Hierarchical Reinforcement Learning is used to create more explainable representations of the manipulating agent's understanding of world dynamics.

Gaze-based HRI + Arm Inverse Kinematics

The system is aware of the human user's arm kinematics - this allows full control of the user's hand orientation, whilst keeping the interaction comfortable.
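The kinematic-awareness idea rests on solving for joint angles that reach a desired pose. The demo's actual model isn't shown here, but the core computation can be sketched with a textbook closed-form inverse kinematics solution for a planar two-link arm; the function name and link lengths are illustrative.

```python
import numpy as np

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Closed-form inverse kinematics for a planar two-link arm.

    Returns (shoulder, elbow) joint angles in radians that place the
    end effector at (x, y), or None if the target is out of reach.
    """
    r2 = x * x + y * y
    cos_elbow = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1.0:
        return None  # target outside the arm's workspace
    elbow = np.arccos(cos_elbow)  # elbow-down solution
    shoulder = np.arctan2(y, x) - np.arctan2(
        l2 * np.sin(elbow), l1 + l2 * np.cos(elbow)
    )
    return shoulder, elbow

# Usage: joint angles that bring the hand to (0.4, 0.2) metres.
angles = two_link_ik(0.4, 0.2)
```

A real arm adds a wrist and works in 3D, but the same principle applies: with the kinematic chain known, the system can reason about which hand orientations are reachable and comfortable.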

About me

I study physical collaboration and interaction between humans and robots. I work on making these interactions intuitive and natural, to increase synergy and augment capabilities on both sides. I am curious about achieving machine intelligence while preserving human intelligence as an essential part of the action/perception loop and the overall interaction. To this end, my research combines human-robot collaboration with machine learning and human behaviour analytics.

My original training is in electronics and electrical engineering. During my BSc and MSc I focused on microelectronics and analogue/digital circuit design. For my PhD I expanded my research into robotics, focusing on its electronics and computer science aspects, with human-robot collaboration as the area of application. As part of my postdoctoral research, I am now exploring machine intelligence and motor neuroscience and their application to physical human-robot collaboration.

For more details, please see my CV.


I am interested in collaborations within the above research topics, as well as other topics that fit my expertise. These can be academic or industrial, in the form of joint research projects or consulting. Do get in touch!