Please use this identifier to cite or link to this item: http://hdl.handle.net/10609/122546
Title: Limbs detection and tracking of head-fixed mice for behavioral phenotyping using motion tubes and deep learning
Authors: Abbas, Waseem
Masip Rodó, David  
Giovannucci, Andrea
Other: University of North Carolina at Chapel Hill
Universitat Oberta de Catalunya (UOC)
Citation: Abbas, W., Masip, D. & Giovannucci, A. (2020). Limbs detection and tracking of head-fixed mice for behavioral phenotyping using motion tubes and deep learning. IEEE Access, 8, 37891-37901. doi: 10.1109/ACCESS.2020.2975926
Abstract: The broad accessibility of affordable and reliable recording equipment and its relative ease of use has enabled neuroscientists to record large amounts of neurophysiological and behavioral data. Given that most of this raw data is unlabeled, great effort is required to adapt it for behavioral phenotyping or signal extraction, for behavioral and neurophysiological data, respectively. Traditional methods for labeling datasets rely on human annotators, a resource- and time-intensive process that often produces data prone to reproducibility errors. Here, we propose a deep learning-based image segmentation framework to automatically extract and label limb movements from movies capturing frontal and lateral views of head-fixed mice. The method decomposes the image into elemental regions (superpixels) with similar appearance and concordant dynamics and stacks them following their partial temporal trajectory. These 3D descriptors (referred to as motion cues) are used to train a deep convolutional neural network (CNN). We use the features extracted at the last fully connected layer of the network to train a Long Short-Term Memory (LSTM) network that introduces spatio-temporal coherence to the limb segmentation. We tested the pipeline in two video acquisition settings. In the first, the camera is installed on the right side of the mouse (lateral setting). In the second, the camera is installed facing the mouse directly (frontal setting). We also investigated the effect of the noise present in the videos and the amount of training data needed, and we found that reducing the number of training samples does not result in a drop of more than 5% in detection accuracy, even when as little as 10% of the available data is used for training.
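To make the CNN-to-LSTM hand-off described in the abstract concrete, the following is a minimal sketch, assuming PyTorch and purely illustrative layer sizes, channel counts, and class labels (none of these come from the paper): a small CNN encodes each stacked motion cue, its last fully connected layer provides the feature vector, and an LSTM runs over the per-frame features to produce temporally coherent limb labels. This is not the authors' implementation, only an outline of the architecture pattern.

```python
# Illustrative sketch (assumed PyTorch); layer sizes, channels and class count are hypothetical.
import torch
import torch.nn as nn

class MotionCueCNN(nn.Module):
    """Encodes one stacked motion cue (e.g. a short stack of superpixel crops) into a feature vector."""
    def __init__(self, in_channels=5, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, feat_dim)  # "last fully connected layer" whose output feeds the LSTM

    def forward(self, x):                          # x: (batch, in_channels, H, W)
        return self.fc(self.conv(x).flatten(1))    # (batch, feat_dim)

class LimbSequenceLSTM(nn.Module):
    """Runs an LSTM over per-frame CNN features for temporally coherent limb classification."""
    def __init__(self, feat_dim=128, hidden=64, n_classes=3):
        super().__init__()
        self.cnn = MotionCueCNN(feat_dim=feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                      # clips: (batch, seq_len, channels, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # per-time-step CNN features
        out, _ = self.lstm(feats)                  # temporal smoothing across the sequence
        return self.head(out)                      # (batch, seq_len, n_classes) class scores

# Usage example: 2 sequences of 8 motion-cue stacks, 5 frames per stack, 64x64 crops.
scores = LimbSequenceLSTM()(torch.randn(2, 8, 5, 64, 64))
print(scores.shape)  # torch.Size([2, 8, 3])
```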
Keywords: deep networks
motion detector
CNN
LSTM
optical flow
spatiotemporal
neuroscience
behavioral phenotypes
DOI: 10.1109/ACCESS.2020.2975926
Document type: info:eu-repo/semantics/article
Document version: info:eu-repo/semantics/publishedVersion
Publication date: 3 March 2020
Publication license: http://creativecommons.org/licenses/by/4.0/es/
Appears in collections: Scientific articles
Articles

Files in this item:
File: Abbas_Masip_EEE_Limbs.pdf (1.93 MB, Adobe PDF)

This item is licensed under a Creative Commons License.