Rosari Naveena Selvan

Rosari Naveena Selvan, M.Sc.

PhD Student
Institut für Psychologie
Fliednerstraße 21
D-48149 Münster
Room Fl 316
Phone: +49 (0) 251 / 83 34164

E-Mail: rselvan@uni-muenster.de

 

Academic CV

Since 01/2021 Research Assistant & Doctoral Student
Project: Turn Taking at the Joint Action Table
Department of Biological Psychology, Institute of Psychology, University of Münster, Germany &
III. Physikalisches Institut/ BCCN, University of Göttingen, Germany.

08/16 – 05/18 M.Sc. Neuropsychology – First Rank Holder
Institute of Behavioural Science, National Forensic Sciences University, India.
Master's Project: Behavioural and neural correlates of emotional facial expressions.

06/13 – 05/16 B.Sc. Zoology
PSG College of Arts and Science, Bharathiar University, India.

 

Research & Work Experience

06/19 – 08/20 Assistant Professor of Neuropsychology
Institute of Behavioural Science, National Forensic Sciences University, India.

02/19 – 05/19 Assistant Professor of Psychology
Department of Psychology, PSG College of Arts and Science, Coimbatore, India.

10/18 – 01/19 Project Assistant
Project: Magnitude and evolution of sleep-disordered breathing in ischemic stroke survivors
Department of Neurology, National Institute of Mental Health and Neurosciences (NIMHANS), India.

05/16 – 06/16 Neuropsychology Intern
Department of Neurology, PSG Hospitals, India.

 

Research Interests

  • Action Perception & Prediction
  • Neuroplasticity & Music
  • Neuroaesthetics & Architecture

Publications

Selvan, R. N., Cheng, M., Siestrup, S., Mecklenbrauck, F., Jainta, B., Pomp, J., Zahedi, A., Tamosiunaite, M., Wörgötter, F., & Schubotz, R. I. (2024). Updating predictions in a complex repertoire of actions and its neural representation. NeuroImage, 296, 120687.

Papatzikis, E., Agapaki, M., Selvan, R. N., Pandey, V., & Zeba, F. (2023). Quality standards and recommendations for research in music and neuroplasticity. Ann NY Acad Sci., 1520, 20-33. https://doi.org/10.1111/nyas.14944

Papatzikis, E., Elhalik, M., Inocencio, S. A. M., Agapaki, M., Selvan, R. N., Muhammed, F. S., Haroon, N. A., Dash, S. K., Sofologi, M., & Bezoni, A. (2021). Key challenges and future directions when running auditory brainstem response (ABR) research protocols with newborns: A music and language EEG feasibility study. Brain Sci., 11(12), 1562. https://doi.org/10.3390/brainsci11121562

 

Conference Contributions

Selvan, R. N., Cheng, M., Mecklenbrauck, F., Siestrup, S., Tamosiunaite, M., Wörgötter, F., & Schubotz, R. I. (2023). Look what I will do! fMRI and eye tracking during action perception. Poster at the Federation of European Neuroscience Societies (FENS) Regional Meeting (FRM) 2023 in Algarve, Portugal.

Selvan, R. N., & Pandey, V. (2020). Neural correlates of emotional facial expression recognition during bilateral presentation: An ERP study. Talk at the International Conference on Brain Disorders and Therapeutics in Rome, Italy.

Project

TUJOTA – Turn Taking at the Joint Action Table

In this collaborative project between Prof. Florentin Wörgötter (head of the Computational Neuroscience department at the University of Göttingen) and our lab, we aim to investigate the cognitive architecture of action perception. Our particular focus is the neurocognitive basis of a behaviour called 'turn taking', first described in conversation. There, taking turns means that while listeners are still decoding what they hear, they are already preparing their own upcoming utterance, so that the average transition time between two conversational partners is no more than 200 ms. Since planning an utterance itself takes considerably longer, turn taking must entail several concurrent anticipatory processes: predicting the approximate content of what has not yet been said, predicting the timing of the other's current utterance and hence the most probable point at which a reply is suitable, and preparing one's own utterance. Turn taking is also evident in joint action, but has rarely been investigated in that context.

In the current project, we measure the brain responses, hand movements, and eye movements of an action observer getting ready for turn taking. Using computer vision at the Göttingen lab, we will assess the cues an observer derives from an object manipulation performed by an actor or actress, and measure the observer's eye movements as well as the point in time at which he or she starts turn taking. Using these data to model entropy and surprisal in an fMRI study in Münster, we will examine brain activity in observers presented with videos of the same actions. Our findings may contribute to the development of robots that can engage in joint action with human beings.
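The entropy and surprisal measures mentioned above are standard information-theoretic quantities: the surprisal of an observed event is the negative log of its predicted probability, and entropy is the expected surprisal over all candidate events. As a minimal illustration of these definitions (not the project's actual model, and using a made-up probability distribution over candidate next actions):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal (in bits) of an event with predicted probability p."""
    return -math.log2(p)

def entropy(dist: list[float]) -> float:
    """Shannon entropy of a distribution: the expected surprisal."""
    return sum(p * surprisal(p) for p in dist if p > 0)

# Hypothetical predictive distribution over four candidate next actions
dist = [0.5, 0.25, 0.125, 0.125]

print(surprisal(0.5))   # 1.0 bit: a likely action is only mildly surprising
print(surprisal(0.125)) # 3.0 bits: an unlikely action is more surprising
print(entropy(dist))    # 1.75 bits: overall uncertainty before the action occurs
```

A sharply peaked distribution (one action almost certain) yields low entropy, whereas a flat distribution yields maximal entropy; both quantities can then serve as parametric regressors for brain responses.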