My PhD: smarter machines, robots and memory!
Some more details:

In the roller-coaster of my PhD, I studied how we can build next-generation meeting support systems that understand us better. Specifically, I focused on building the very first models able to predict participants' memory of meetings! My baselines performed significantly above chance, predicting memory from verbal and non-verbal behaviours that can be extracted from any recorded online meeting (speech, eye gaze, facial expressions). I won't bore you with more details, but think advanced user modelling from multimodal signals, for machines that can understand our values and cognitive processes :)
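For the curious, here is a toy sketch of what that kind of prediction can look like: each conversational moment gets a vector of behavioural features, and a simple linear scorer labels it as memorable or not. The feature names, weights, and threshold below are invented for illustration; they are not the actual models from my thesis.

```python
# Toy illustration: score conversational moments by multimodal features.
# All feature names and weights here are made up for this sketch.

def memorability_score(moment, weights):
    """Weighted sum of behavioural features for one moment."""
    return sum(weights[k] * moment.get(k, 0.0) for k in weights)

def predict_memorable(moments, weights, threshold=0.5):
    """Label each moment memorable (True) when its score passes the threshold."""
    return [memorability_score(m, weights) > threshold for m in moments]

# Hypothetical per-moment features extracted from a meeting recording.
moments = [
    {"gaze_on_speaker": 0.9, "speech_overlap": 0.1, "smile_intensity": 0.7},
    {"gaze_on_speaker": 0.2, "speech_overlap": 0.0, "smile_intensity": 0.1},
]
weights = {"gaze_on_speaker": 0.6, "speech_overlap": 0.2, "smile_intensity": 0.3}

print(predict_memorable(moments, weights))  # → [True, False]
```

In practice the real models learn such weights from annotated meeting data rather than hand-setting them, but the shape of the problem is the same: behavioural signals in, memorability labels out.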

These are my major publications; see my Google Scholar for more:

  • Tsfasman, M. et al. (2021). Towards a real-time measure of the perception of anthropomorphism in human–robot interaction. In Proceedings of the 2nd ACM Multimedia Workshop on Multimodal Conversational AI (pp. 13–18).

  • Saravanan, A., Tsfasman, M. et al. (2022). Giving social robots a conversational memory for motivational experience sharing. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 985–992). IEEE.

  • Tsfasman, M. et al. (2022). Towards creating a conversational memory for long-term meeting support: Predicting memorable moments in multi-party conversations through eye-gaze. In International Conference on Multimodal Interaction (pp. 94–104).

Here is me talking about one of my first studies, which showed that we can use eye gaze to predict which moments people remember from online meetings:
