Keynote speakers

Keynote 1

NUS Computing - OOI Wei Tsang

Wei Tsang Ooi is an Associate Professor of Computer Science at the School of Computing, National University of Singapore (NUS). His primary research focuses on interactive multimedia systems. He is broadly interested in research problems and applications that involve processing or transmitting huge amounts of multimedia data for users to view and interact with. Wei Tsang particularly enjoys developing protocols, algorithms, models, tools, and systems that optimize computational resource usage without compromising the user's quality of experience. His expertise spans domains such as two-way video conferencing, networked virtual environments, progressive mesh streaming, video surveillance systems, cloud gaming systems, zoomable video streaming, and, most recently, volumetric video streaming. Wei Tsang also serves as the NUS co-director of IPAL, a Franco-Singaporean international research lab, where he co-leads collaboration among researchers from NUS, the Singapore Agency for Science, Technology and Research (A*STAR), and the French National Centre for Scientific Research (CNRS) on projects involving interactive intelligent systems. His efforts there center on building human-centric intelligent systems that collaborate well with users. An active member of the multimedia research community, Wei Tsang serves on the steering committees of the ACM International Conference on Multimedia Systems (MMSys) series and the International Workshop on Immersive Mixed and Virtual Environment Systems (MMVE) series, and contributes regularly to conference organization. Besides serving as a reviewer and an area chair for numerous conferences, he has recently served as the Author's Advocate for the ACM International Conference on Multimedia (MM) in 2023, the Technical Program Committee (TPC) co-chair for MM in 2019, the TPC co-chair for the ACM International Conference on Interactive Experiences for TV and Online Video (TVX) in 2018, and the Workshop Chair for MMVE in 2018. He is currently a member of the IEEE MultiMedia Editorial Board.

Title: Towards Volumetric Video Realism in Extended Reality: Challenges and Opportunities

Abstract: Advances in volumetric capture, compression, and rendering techniques have made telepresence possible in extended reality (XR) environments. Live or pre-recorded volumetric video of an avatar can be streamed over the network and rendered in a client's XR environment, creating an illusion of spatial co-presence. In this keynote talk, I will first make the case for the importance of visual realism of volumetric video in such scenarios. I will then present existing approaches toward higher visual realism in volumetric video, dividing them into two categories: (i) approaches that achieve smoother motion through temporal up-sampling and (ii) approaches that obtain finer details through spatial up-sampling. The former aims for a rendering frame rate close to what the human visual system perceives as smooth motion, while the latter allows users to move closer to an avatar without losing its realism. The talk will also outline the trade-offs and limitations of current up-sampling approaches. I will conclude with my personal view on the challenges and opportunities that the research community should confront to achieve a true-to-life XR experience.


Keynote 2

INRS-EMT - Tiago H. FALK

Tiago H. Falk is a Full Professor at the Institut national de la recherche scientifique, Centre on Energy, Materials, and Telecommunications (INRS-EMT), University of Québec, where he directs the Multisensory Signal Analysis and Enhancement Lab, focused on building next-generation human-machine interfaces for both real and virtual worlds. He is also a founding and regular member of the INRS-UQO Mixed Research Unit on Cybersecurity, which conducts research on making human-machine interfaces secure and reliable by tackling emerging vulnerabilities in artificial intelligence algorithms. He is Co-Chair of the Technical Committee (TC) on Brain-Machine Interface Systems of the IEEE Systems, Man, and Cybernetics (SMC) Society, a member of the IEEE Signal Processing Society TC on Audio and Acoustic Signal Processing, an Associate Editor of the IEEE Transactions on Human-Machine Systems, a member-at-large of the IEEE SMC Society Board of Governors, and a member of the IEEE Telepresence Initiative. He previously served as Academic Chair of the Canadian Medical and Biological Engineering Society and is serving as the Grand Challenges Co-Chair of the 2023 ACM Multimedia Conference.

Title: Multisensory Immersive Experiences: From Monitoring of Human Influential Factors to New Applications in Healthcare

Abstract: While virtual and extended reality applications are on the rise, existing experiences are not fully immersive, as typically only two senses (hearing and vision) are stimulated. In this keynote talk, I will describe our ongoing work on developing multisensory immersive experiences, which combine auditory, visual, olfactory, and haptic/somatosensory stimuli. I will show the impact that stimulating more senses can have on user quality of experience, sense of presence and immersion, and engagement levels. Moreover, with multisensory experiences, monitoring human influential factors is crucial, as the perception of sensory stimuli can be very subjective (e.g., a smell that is pleasant for some can be unpleasant for others). To this end, I will also describe our work on instrumenting virtual reality headsets with biosensors to allow not only automated (remote) monitoring of human behaviour and tracking of human influential factors, but also the development of new markers of user experience, such as a multimodal time perception metric or a cybersickness metric. Lastly, I will describe some new applications of multisensory experiences that we are developing for healthcare and well-being. I will start with the use of immersive multisensory nature walks for mental health and describe two ongoing projects, one with patients with post-traumatic stress disorder and another with nurses suffering from burnout. I will conclude with a description of the use of multisensory priming for motor-imagery-based neurorehabilitation for stroke survivors.