Please use this identifier to cite or link to this item: http://hdl.handle.net/10609/92220
Full metadata record
DC Field | Value | Language
dc.contributor.author | Escalera, Sergio | -
dc.contributor.author | Baró, Xavier | -
dc.contributor.author | Vitrià Marca, Jordi | -
dc.contributor.author | Radeva Ivanova, Petia | -
dc.contributor.author | Raducanu, Bogdan | -
dc.contributor.other | Universitat Oberta de Catalunya. Estudis d'Informàtica, Multimèdia i Telecomunicació | -
dc.contributor.other | Universitat Autònoma de Barcelona (UAB) | -
dc.date.accessioned | 2019-03-14T14:15:36Z | -
dc.date.available | 2019-03-14T14:15:36Z | -
dc.date.issued | 2012-02-07 | -
dc.identifier.citation | Escalera Guerrero, S., Baró Solé, X., Vitrià Marca, J., Radeva Ivanova, P. & Raducanu, B. (2012). Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction. Sensors, 12(2), 1702-1719. doi: 10.3390/s120201702 | -
dc.identifier.issn | 1424-8220 | -
dc.identifier.uri | http://hdl.handle.net/10609/92220 | -
dc.description.abstract | Social interactions are a very important component in people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The social network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" a person has over the other. The states of the Influence Model encode automatically extracted audio/visual features from our videos using state-of-the-art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and centrality measures used to characterize the extracted social network. | en
dc.language.iso | eng | -
dc.publisher | Sensors | -
dc.relation.ispartof | Sensors, 2012, 12(2) | -
dc.relation.uri | https://www.mdpi.com/1424-8220/12/2/1702/pdf | -
dc.rights | CC BY | -
dc.rights.uri | http://creativecommons.org/licenses/by/3.0/es/ | -
dc.subject | influence model | en
dc.subject | social interaction | en
dc.subject | interacción social | es
dc.subject | interacció social | ca
dc.subject | audio/visual data fusion | en
dc.subject | fusión de datos audio/visual | es
dc.subject | fusió de dades àudio/visual | ca
dc.subject | social network analysis | en
dc.subject | análisis de las redes sociales | es
dc.subject | anàlisi de les xarxes socials | ca
dc.subject | modelo de influencia | es
dc.subject | model d'influència | ca
dc.subject.lcsh | Social networks | en
dc.title | Social network extraction and analysis based on multimodal dyadic interaction | -
dc.type | info:eu-repo/semantics/article | -
dc.subject.lemac | Xarxes socials | ca
dc.subject.lcshes | Redes sociales | es
dc.rights.accessRights | info:eu-repo/semantics/openAccess | -
dc.identifier.doi | 10.3390/s120201702 | -
dc.gir.id | AR/0000002736 | -
dc.relation.projectID | info:eu-repo/grantAgreement/TIN2009-14404-C02 | -
dc.relation.projectID | info:eu-repo/grantAgreement/CONSOLIDER-INGENIO CSD 2007-00018 | -
dc.relation.projectID | info:eu-repo/grantAgreement/TIN2010-09703-E | -
dc.type.version | info:eu-repo/semantics/publishedVersion | -
Appears in collections: Articles científics
Articles

Files in this item:
File | Description | Size | Format
multimodal.pdf | | 1.53 MB | Adobe PDF
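
As a rough illustration of the analysis described in the abstract, the sketch below builds a small directed, weighted "influence" graph between speakers and computes standard centrality measures. This is a hypothetical example using the networkx library; the speaker names and edge weights are invented placeholders and do not come from the paper.

import networkx as nx

# Directed graph: an edge (a, b) means "a influences b"; the weight is a
# placeholder for the strength of that influence (in the paper this would
# come from the Influence Model over audio/visual features).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("speaker_A", "speaker_B", 0.7),
    ("speaker_B", "speaker_A", 0.3),
    ("speaker_A", "speaker_C", 0.5),
    ("speaker_C", "speaker_B", 0.6),
])

# Centrality measures of the kind used to characterize the extracted network.
in_deg = nx.in_degree_centrality(G)    # how many others influence a node
out_deg = nx.out_degree_centrality(G)  # how many others a node influences
eig = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)

for node in G.nodes:
    print(f"{node}: in={in_deg[node]:.2f}, out={out_deg[node]:.2f}, "
          f"eigenvector={eig[node]:.2f}")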