TY - GEN
T1 - Data-driven fusion of multi-camera video sequences
T2 - 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017
AU - Bhinge, Suchita
AU - Levin-Schwartz, Yuri
AU - Adali, Tulay
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/16
Y1 - 2017/6/16
N2 - Due to the potential for object occlusion in crowded areas, the use of multiple cameras for video surveillance has prevailed over the use of a single camera. This has motivated the development of a number of techniques to analyze such multi-camera video sequences. However, most of these techniques require a camera calibration step, which is cumbersome and must be repeated for every new configuration. Additionally, these techniques fail to exploit the complementary information across the multiple datasets. We propose a data-driven solution to the problem by making use of the inherent similarity of the temporal signatures of objects across video sequences. Using this inherent diversity, we introduce an effective solution for the detection of abandoned objects based on the transposed independent vector analysis (tIVA) model. By taking advantage of the similarity across multiple cameras, the new technique does not require any calibration and thus can be readily applied to any camera configuration. We demonstrate the superior performance of our technique over a single-camera-based method using the PETS 2006 dataset.
AB - Due to the potential for object occlusion in crowded areas, the use of multiple cameras for video surveillance has prevailed over the use of a single camera. This has motivated the development of a number of techniques to analyze such multi-camera video sequences. However, most of these techniques require a camera calibration step, which is cumbersome and must be repeated for every new configuration. Additionally, these techniques fail to exploit the complementary information across the multiple datasets. We propose a data-driven solution to the problem by making use of the inherent similarity of the temporal signatures of objects across video sequences. Using this inherent diversity, we introduce an effective solution for the detection of abandoned objects based on the transposed independent vector analysis (tIVA) model. By taking advantage of the similarity across multiple cameras, the new technique does not require any calibration and thus can be readily applied to any camera configuration. We demonstrate the superior performance of our technique over a single-camera-based method using the PETS 2006 dataset.
KW - Abandoned objects
KW - joint blind source separation
KW - multiple cameras
KW - object detection
KW - video surveillance
UR - http://www.scopus.com/inward/record.url?scp=85023740755&partnerID=8YFLogxK
U2 - 10.1109/ICASSP.2017.7952446
DO - 10.1109/ICASSP.2017.7952446
M3 - Conference contribution
AN - SCOPUS:85023740755
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 1697
EP - 1701
BT - 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 5 March 2017 through 9 March 2017
ER -