Please use this identifier to cite or link to this item: http://hdl.handle.net/10609/122546
Title: Limbs detection and tracking of head-fixed mice for behavioral phenotyping using motion tubes and deep learning
Authors: Abbas, Waseem
Masip Rodó, David  
Giovannucci, Andrea
Other contributors: University of North Carolina at Chapel Hill
Universitat Oberta de Catalunya (UOC)
Citation: Abbas, W., Masip, D. & Giovannucci, A. (2020). Limbs detection and tracking of head-fixed mice for behavioral phenotyping using motion tubes and deep learning. IEEE Access, 8, 37891-37901. doi: 10.1109/ACCESS.2020.2975926
Abstract: The broad accessibility of affordable and reliable recording equipment, and its relative ease of use, has enabled neuroscientists to record large amounts of neurophysiological and behavioral data. Given that most of this raw data is unlabeled, great effort is required to adapt it for behavioral phenotyping or signal extraction, for behavioral and neurophysiological data, respectively. Traditional methods for labeling datasets rely on human annotators, a resource- and time-intensive process that often produces data prone to reproducibility errors. Here, we propose a deep learning-based image segmentation framework to automatically extract and label limb movements from movies capturing frontal and lateral views of head-fixed mice. The method decomposes the image into elemental regions (superpixels) with similar appearance and concordant dynamics and stacks them following their partial temporal trajectory. These 3D descriptors (referred to as motion cues) are used to train a deep convolutional neural network (CNN). We use the features extracted at the last fully connected layer of the network to train a Long Short-Term Memory (LSTM) network that introduces spatio-temporal coherence to the limb segmentation. We tested the pipeline in two video acquisition settings. In the first, the camera is installed on the right side of the mouse (lateral setting). In the second, the camera is installed facing the mouse directly (frontal setting). We also investigated the effect of the noise present in the videos and the amount of training data needed, and we found that reducing the number of training samples does not result in a drop of more than 5% in detection accuracy, even when as little as 10% of the available data is used for training.
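The motion-tube construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `build_motion_cues`, the bounding-box patching, and the simplification that superpixel labels stay fixed across the temporal window are all assumptions made for the sketch.

```python
import numpy as np

def build_motion_cues(frames, labels, t_window=3):
    """Stack, for each superpixel, its bounding-box patch over
    t_window consecutive frames into a 3D descriptor ("motion cue").

    frames: (T, H, W) grayscale video array.
    labels: (H, W) integer superpixel map (hypothetical simplification:
            assumed constant over the temporal window).
    Returns a dict mapping superpixel id -> list of (t_window, h, w) tubes.
    """
    cues = {}
    T = frames.shape[0]
    for sp in np.unique(labels):
        ys, xs = np.where(labels == sp)
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        # One tube per valid starting frame: a short temporal stack of
        # the superpixel's spatial patch, forming the 3D motion cue.
        cues[sp] = [frames[t:t + t_window, y0:y1, x0:x1]
                    for t in range(T - t_window + 1)]
    return cues

# Toy example: 5 frames of 4x4 video, two vertical superpixels.
frames = np.arange(5 * 4 * 4, dtype=float).reshape(5, 4, 4)
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1
cues = build_motion_cues(frames, labels, t_window=3)
```

In the paper these tubes feed a CNN, whose last fully connected layer provides per-tube features for the downstream LSTM; the sketch stops at descriptor construction.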
Keywords: deep networks
motion detector
CNN
LSTM
optical flow
spatiotemporal
neuroscience
behavioral phenotypes
DOI: 10.1109/ACCESS.2020.2975926
Document type: info:eu-repo/semantics/article
Document version: info:eu-repo/semantics/publishedVersion
Publication date: 3-Mar-2020
Publication license: http://creativecommons.org/licenses/by/4.0/es/
Appears in collections: Articles científics
Articles

Files in this item:
File                        Description    Size     Format
Abbas_Masip_EEE_Limbs.pdf                  1.93 MB  Adobe PDF