Previous research at Imperial College London
Research Fellow / Senior Research Associate in Robotics and Artificial Intelligence | PI: Dr. A. Aldo Faisal
As of mid-2021 I was a Research Fellow, leading all robotics activities and students in the group and co-supervising PhD students across several projects; in 2020-21 I was a Senior Research Associate.
I researched physical human-robot collaboration, particularly the implications and opportunities that arise when human motor control and coordination meet intelligent robot control and motion planning. I investigated human- and robot-in-the-loop machine-learning methods, particularly reinforcement learning, to achieve more intuitive, natural and efficient human-robot collaboration. As part of this, I continued the work of the eNHANCE project (see below) and other human-robot interaction projects, investigating:
Explainability in physical human-(intelligent) robot collaboration.
Human- and robot-in-the-loop reinforcement learning.
Human-robot augmentation and its neurocognitive implications.
Previously (2017-2019): Lead Research Associate and Project Manager, EU Horizon 2020 eNHANCE | PI: Dr. A. Aldo Faisal
Research and development on multiple robotics and sensing systems.
Development of full system integration through ROS.
Development of personalisation through (Deep) Reinforcement Learning (see the sketch after this list).
System validation through tests and experiments with end-users.
Project manager for all research and administrative activities at ICL.
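To give a flavour of what reinforcement-learning-based personalisation can look like in code, below is a minimal, self-contained PyTorch sketch, not project code: the toy reaching environment, the user_gain effort-preference parameter and all other names are hypothetical stand-ins for the actual eNHANCE tasks, sensors and user models.

```python
# Minimal illustrative sketch (not eNHANCE code): REINFORCE on a toy 1-D
# reaching task, where a hypothetical per-user parameter shapes the reward
# so the learned policy is "personalised" to that user's effort preference.
import torch
import torch.nn as nn


class ToyReachEnv:
    """Cursor moves towards a target; reward trades off error against effort."""

    def __init__(self, user_gain=0.5):
        self.user_gain = user_gain  # hypothetical per-user effort penalty

    def reset(self):
        self.pos, self.target, self.t = 0.0, 1.0, 0
        return torch.tensor([self.pos, self.target])

    def step(self, action):
        self.pos += 0.1 * float(action)                 # action in [-1, 1]
        self.t += 1
        error = abs(self.target - self.pos)
        reward = -error - self.user_gain * action ** 2  # personalised trade-off
        done = self.t >= 20
        return torch.tensor([self.pos, self.target]), reward, done


policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Tanh())
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-2)
env = ToyReachEnv(user_gain=0.5)

for episode in range(200):
    obs, log_probs, rewards, done = env.reset(), [], [], False
    while not done:
        mean = policy(obs)
        dist = torch.distributions.Normal(mean, 0.1)
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        obs, reward, done = env.step(action.clamp(-1, 1).item())
        rewards.append(reward)
    # REINFORCE: weight each log-probability by the discounted return-to-go
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + 0.99 * G
        returns.insert(0, G)
    loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

In the real system the preference signal would of course come from measured user behaviour and the sensing stack listed below, rather than a fixed constant.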
The keywords below list some of the tools I used for this research.
Keywords: [Robot Operating System (ROS)], [Python], [C++], [MATLAB], [PyTorch], [TensorFlow], [(deep) Reinforcement Learning], [OpenAI Gym], [Simultaneous Localisation and Mapping (SLAM)], [Solidworks], [Formlabs Form2 3D Printer], [Universal Robots UR10], [BioServo SEMGlove], [BioServo CarbonHand], [Myo Armband], [Arduino], [SMI Eye-trackers], [Agile Project Management].
Gallery of research at Imperial College London
Learning to play the piano with the SR3T
Examining the motor coordination constraints of robotic human augmentation.
Real-World Human-Robot Collaborative RL
A setup for real-world human-robot reinforcement learning of a fully collaborative motor task, in the form of a marble-maze game.
Gaze-based, context-aware HRI based on human action grammars, 2019.
Gaze-based, context-aware HRI based on human action grammars, 2018.
Gaze-based, context-aware HRI through multi-modal sensing, 2017.
Supernumerary Robotic 3rd Thumb, a setup for embodiment studies, 2017.
Previous research at King's College London
Research and development on robot learning and actively ergonomic human-robot interaction.
System validation through tests and experiments with users.
Project manager for all research and administrative activities at KCL.
Aside from this specific application project, my general research involved human behaviour analysis, particularly in the area of human physiological comfort. As part of this I developed methods and devices for the objective, real-time assessment of human comfort, which were then used to place the human, and their physical comfort, within the robotic system's action/perception loop (a simplified sketch of this idea follows the keyword list below). This allowed for active robot-assisted ergonomic interactions within the factory environment, as well as objective studies of surgeons' comfort within the clinical environment. The keywords below list some of the tools I used for my previous research.
Keywords: [Robot Operating System (ROS)], [MATLAB], [Solidworks], [Dimension/Formlabs/Ultimaker 3D printers], [Baxter Research Robot], [Arduino], [Microsoft Kinect], [Supervised Learning], [Agile Project Management].
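To illustrate the comfort-in-the-loop idea described above, here is a deliberately simplified sketch: a toy comfort model scores an observed posture and the robot nudges a handover pose until the score recovers. Every name in it (PostureSample, comfort_score, the 0.7 threshold) is a hypothetical placeholder rather than the models or thresholds used in the project.

```python
# Simplified, illustrative sketch (not project code): an estimated comfort
# score is fed back into the robot's action loop to adapt a handover pose.
from dataclasses import dataclass


@dataclass
class PostureSample:
    neck_flexion_deg: float    # e.g. from a Kinect skeleton tracker
    arm_elevation_deg: float


def comfort_score(sample: PostureSample) -> float:
    """Toy stand-in for a learned comfort model: 1.0 = comfortable, 0.0 = strained."""
    neck_penalty = min(abs(sample.neck_flexion_deg) / 45.0, 1.0)
    arm_penalty = min(abs(sample.arm_elevation_deg - 30.0) / 60.0, 1.0)
    return 1.0 - 0.5 * (neck_penalty + arm_penalty)


def adapt_handover_height(current_height_m: float, score: float,
                          gain: float = 0.05) -> float:
    """Raise the handover point while comfort is low; hold it once comfort is high."""
    if score < 0.7:  # hypothetical comfort threshold
        return current_height_m + gain * (0.7 - score)
    return current_height_m


# Example: a stooped posture drives the handover point upwards over a few cycles.
height = 0.9
for _ in range(5):
    posture = PostureSample(neck_flexion_deg=40.0, arm_elevation_deg=10.0)
    height = adapt_handover_height(height, comfort_score(posture))
    print(f"comfort={comfort_score(posture):.2f}, handover height={height:.3f} m")
```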