Ali Shafti


Research Associate in Robotics and AI @ Imperial College London

News

  • 01/2020: Happy New Year! New year, new CV ;-)
  • 12/2019: Preprint on human-robot collaborative learning with deep RL - at the NeurIPS'19 Workshop on Robot Learning.
  • 10/2019: Preprint on the use of action grammars to consistently improve sample efficiency in RL agents.
  • 10/2019: Featured on the Imperial College London website on the topic of explainable intelligent robotics.
  • 09/2019: Preprint + video: how human gaze attention prediction helps train better autonomous driving agents.
  • 07 & 08/2019: Summer break + some quality time in the lab working on some cool new things - stay tuned! :)
  • 06/2019: Invited talk at "From BCI to Human Robot Augmentation" workshop, the Hamlyn Symposium 2019.
  • 06/2019: 1 x Paper accepted at IEEE IROS'19 on hierarchical reinforcement learning for robot manipulation.
  • 06/2019: Organising half-day workshop on Human-in-the-loop AI at the London AI Summit.
  • 05/2019: 2 x Papers being presented at IEEE ICRA'19 on human-in-the-loop robotics. Details here.
  • 05/2019: Successful final review meeting of ENHANCE project - reviewers commended our work specifically.
  • 04/2019: Hierarchical Reinforcement Learning for interpretable robotic manipulation - our latest pre-print.
  • 03/2019: Interviewed by IMechE on the MIT Mini Cheetah robot by B. Katz et al.
  • News archive...

Latest research demo videos (more here):

Gaze Prediction for Autonomous Driving

Prediction of human visual attention helps with the training of autonomous driving agents - attention masking helps the agent "see what matters".

Explainable Robotic Manipulation Learning

Hierarchical Reinforcement Learning is used to create more explainable representations of the manipulating agent's understanding of world dynamics.

Gaze-based HRI + Human Arm Inverse Kinematics

The system is aware of the human user's arm kinematics - this allows for full control of the human user's hand orientation, whilst keeping the interaction comfortable.

Gaze-based HRI

Gaze-based, context-aware human-robot interaction. The robot assists those with reaching and grasping disabilities, based on the concept of human action grammars.

About me

I study the interaction and collaboration between humans and robots. I work on making these interactions intuitive and natural, for increased synergy and augmented capabilities on both sides. I am curious about achieving machine intelligence while conserving the role of human intelligence as an essential part of the action/perception loop and the overall interaction. To this end, my research approaches human-robot interaction/collaboration through machine learning and human behaviour analytics.

My original training is in electronics and electrical engineering. During my BSc and MSc I focused on microelectronics and analogue/digital circuit design. For my PhD I expanded my research into robotics, focusing on the electronics and computer science aspects, with human-robot interaction/collaboration as an area of application. As part of my postdoctoral research, I am now exploring machine intelligence and AI methods and their application within the human-robot interaction/collaboration realm.

For more details, please see my CV.

Collaboration

I am interested in collaborations within the above research topics, as well as other topics that fit my expertise. These can be academic or industrial, whether as joint research projects or as consulting. Do get in touch!