The recent demand for video surveillance systems to reduce security threats and criminal activity has brought challenges, due both to the overwhelming volume of content to monitor and to the limited physical resources available to monitor it. Automated artificial vision technology is increasingly adopted in monitoring and surveillance applications to reduce the need for physical resources and human intervention. Human activity analysis and prediction is one of the most desired features of an automated video surveillance system, and typically involves tracking and the recognition of motion or other explicit action traits that are voluntarily performed by humans. However, there are often many 'unseen actions' that can be used to infer, and ultimately predict, human behaviour.
Social signals are communicative and informative signals that directly or indirectly provide information through a range of non-verbal behavioural cues, including facial expressions, body postures and gestures, and vocal outbursts. Social Signal Processing (SSP) aims to provide a framework for the systematic, algorithmic and computational analysis of social signals, building on existing knowledge from anthropology and social psychology. This project will build on this SSP knowledge and on recent advances in artificial vision technology (e.g. visually analysing relevant behavioural cues such as blinks, smiles, crossed arms and facial expressions) in order to design and develop automated video surveillance systems for behaviour prediction. The project will be supervised by academics from the School of Computing and Intelligent Systems and the School of Psychology.
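To make the prediction idea concrete, the sketch below shows one simple way detected non-verbal cues could feed a prediction step: per-frame cue detections are scored and aggregated over a sliding time window, and a behaviour of interest is flagged when the accumulated score crosses a threshold. This is a minimal illustration only; the cue names, weights, window length and threshold are hypothetical, and the per-frame detections are assumed to come from an upstream vision module (e.g. a facial-expression or posture classifier), which is not shown.

```python
from collections import deque

# Hypothetical weights for non-verbal cues that an upstream vision
# module might detect. Names and values are illustrative only, not
# derived from any validated SSP model.
CUE_WEIGHTS = {"blink": 0.1, "smile": -0.2, "crossed_arms": 0.4, "frown": 0.3}


class CueAggregator:
    """Aggregate per-frame cue detections over a sliding window and
    flag frames whose accumulated score exceeds a threshold."""

    def __init__(self, window_size=30, threshold=1.0):
        # deque(maxlen=...) discards the oldest frame score automatically,
        # giving a fixed-length sliding window over recent frames.
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def update(self, cues):
        """cues: list of cue names detected in the current frame.

        Returns True when the windowed cue score reaches the threshold,
        i.e. when the recent pattern of cues suggests the behaviour of
        interest."""
        frame_score = sum(CUE_WEIGHTS.get(c, 0.0) for c in cues)
        self.window.append(frame_score)
        return sum(self.window) >= self.threshold
```

In practice the hand-set weights and threshold would be replaced by a learned temporal model, but the same structure applies: low-level cue detection per frame, temporal aggregation, then a decision.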