
Keynote Lectures

Adapted Psychological Profiling Versus the Right to an Explainable Decision
Keeley Crockett, Department of Computing and Mathematics, Manchester Metropolitan University, United Kingdom

Learning Vector Quantization Methods as Multi-layer Networks for Interpretable Classification Learning
Thomas Villmann, University of Applied Sciences Mittweida, Germany

Computational Intelligence Applications in Skeleton-based Forensic Identification: Automating Craniofacial Superimposition and Comparative Radiology
Oscar Cordón, University of Granada, Spain

An Approximation to the Problem of Information Fusion in Fuzzy Classification
Humberto Bustince, Public University of Navarra, Spain

 

Adapted Psychological Profiling Versus the Right to an Explainable Decision

Keeley Crockett
Department of Computing and Mathematics, Manchester Metropolitan University
United Kingdom
http://www.docm.mmu.ac.uk/
 

Brief Bio
Keeley Crockett is a Reader in Computational Intelligence in the School of Computing, Mathematics and Digital Technology at Manchester Metropolitan University in the UK. She gained a BSc (Hons) in Computation from UMIST in 1993 and a PhD in the field of machine learning, entitled "Fuzzy Rule Induction from Data Domains", from Manchester Metropolitan University in 1998. She is a Senior Fellow of the Higher Education Academy. As a knowledge engineer she has worked with companies to provide business rule automation with natural language interfaces using conversational agents, and she is Senior Artificial Intelligence Scientist consultant for Silent Talker Ltd. She leads the Intelligent Systems Group (Computational Intelligence Lab, launched in 2018), which has established a strong international presence through its research into conversational agents and adaptive psychological profiling, including an international patent on "Silent Talker". She is currently a member of the IEEE Task Force on Ethical and Social Implications of Computational Intelligence and has a strong focus on ethically aligned design in the context of intelligent systems development. Her main research interests include fuzzy decision trees, semantic text-based clustering, conversational agents, fuzzy natural language processing, semantic similarity measures, and AI for psychological profiling. She is currently the Principal Investigator (MMU) on the H2020-funded project iBorderCtrl – Intelligent Smart Border Control, and Co-Investigator on a UK Knowledge Transfer Partnership with Service Power. She is currently Chair of IEEE WIE UKI, Chair of the IEEE CIS Webinars, Vice-Chair of the IEEE Women into Computational Intelligence Society, and an IEEE WCCI 2018 Tutorials Co-chair. She has authored over 120 peer-reviewed publications and is currently an Associate Editor for IEEE Transactions on Emerging Topics in Computational Intelligence.


Abstract
This keynote will focus on how computational intelligence techniques can be used to automatically build psychological profiles of people. Silent Talker is a pioneering psychological profiling system developed by experts in behavioural neuroscience and computational intelligence. Designed for use in natural conversation, Silent Talker combines image processing and artificial intelligence to classify multiple visible non-verbal signals of the head and face during verbal communication. From this analysis, the system produces an accurate and comprehensive time-based profile of a subject's psychological state.
The talk will give examples of how Silent Talker technology can be used: firstly, to detect deception by providing risk scores to border guards in a border-crossing scenario, which is being developed as part of the European Union-sponsored project known as iBorderCtrl; and secondly, to detect a person's level of comprehension in order to provide personalised and adaptable online learning within a conversational intelligent tutoring system. Ethical considerations will also be touched on in line with the GDPR, and why it is important to have a "human in the loop" when developing such systems.
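Purely as a way to picture the kind of output described above, and not as a description of the actual Silent Talker system, the toy sketch below aggregates hypothetical per-frame non-verbal "channel" scores into a time-based profile and a single risk score. Every channel, window size and weight in it is invented for illustration.

```python
# Illustrative sketch only: aggregate hypothetical per-frame non-verbal
# channel scores (e.g. blink rate, gaze aversion) into a time-based
# profile and one overall risk score. Not the Silent Talker architecture.
import numpy as np

def time_based_profile(channel_scores: np.ndarray, window: int = 25) -> np.ndarray:
    """channel_scores: (n_frames, n_channels) values in [0, 1].
    Returns one averaged profile vector per non-overlapping window."""
    n_frames, n_channels = channel_scores.shape
    n_windows = n_frames // window
    trimmed = channel_scores[: n_windows * window]
    return trimmed.reshape(n_windows, window, n_channels).mean(axis=1)

def risk_score(profile: np.ndarray, weights: np.ndarray) -> float:
    """Collapse the windowed profile into a single score in [0, 1]
    with a weighted average (the weights here are made up)."""
    per_window = profile @ (weights / weights.sum())
    return float(per_window.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.uniform(size=(250, 4))        # 250 frames, 4 hypothetical channels
    profile = time_based_profile(frames)       # (10, 4) windowed profile
    print(risk_score(profile, np.array([1.0, 0.5, 0.8, 0.3])))
```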

 



 

 

Learning Vector Quantization Methods as Multi-layer Networks for Interpretable Classification Learning

Thomas Villmann
University of Applied Sciences Mittweida
Germany
 

Brief Bio
Prof. Thomas Villmann is with the University of Applied Sciences Mittweida (UASW), Germany. He holds a diploma degree in Mathematics and received his Ph.D. in Computer Science in 1996 and his habilitation, as well as the venia legendi in the same subject, in 2005, all from the University of Leipzig, Germany. From 1997 to 2009 he led the computational intelligence group of the hospital for psychotherapy at Leipzig University. In 2006 he was a visiting professor at the University Paris Panthéon-Sorbonne in the department for statistical analysis and mathematical modelling (SAMM). Since 2009 he has been a full professor for Technomathematics/Computational Intelligence at the UASW (Saxony), Germany. He is a founding member of the German chapter of the European Neural Network Society (GNNS) and its president since 2011, as well as a board member of the European Neural Network Society (ENNS). Further, he leads the Institute of Computational Intelligence and Intelligent Data Analysis e.V. in Mittweida, Germany, and the Computational Intelligence Group at the University of Applied Sciences Mittweida. Since 2017 he has been director of the Saxony Institute for Computational Intelligence and Machine Learning (SICIM). Prof. Villmann has published more than 90 articles in leading journals and has authored and co-authored more than 250 conference papers and book chapters. Under his supervision, 10 PhD completions have been achieved, with three more anticipated this year. He is editor-in-chief of the Machine Learning Reports (MLR) and acts as an associate editor for Neural Processing Letters and for Computational Intelligence and Neuroscience. His research focus includes the theory of prototype-based clustering and classification, non-standard metrics, information-theoretic and similarity-based learning, statistical data analysis, and their application in pattern recognition, data mining and knowledge discovery for use in medicine, bioinformatics, remote sensing, hyperspectral analysis, forensics and others. Personally, Thomas Villmann is a mountaineer climbing all over the world. He is still active in judo (bronze medallist at the German Championship for Veterans this year) and likes jogging together with his Border Terrier dog Emmy.


Abstract
Learning Vector Quantization (LVQ), as introduced by T. Kohonen, is an intuitive method for classification learning inspired by Bayesian decision learning and unsupervised neural vector quantization. Nowadays, learning vector quantizers are trained using stochastic gradient descent on a cost function approximating the overall classification error. The geometric interpretation as a vector quantization model partitioning the data space into class regions is one of the main advantages of this classifier model. Yet, the origins of LVQ are unsupervised neural learning models based on competing perceptrons; thus, LVQ can also be understood as a multi-layer neural network.
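To make the cost-function-based training concrete, the following is a minimal GLVQ-style sketch, not the exact formulation used in the talk: the relative distance difference mu = (d_plus - d_minus)/(d_plus + d_minus) approximates the classification error, the usual monotone squashing function is taken as the identity, and all hyperparameters and the one-prototype-per-class setup are arbitrary choices for illustration.

```python
# Minimal GLVQ-style training sketch (numpy only), squared Euclidean distance.
import numpy as np

def glvq_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    """Stochastic gradient descent on the GLVQ cost based on
    mu = (d_plus - d_minus) / (d_plus + d_minus)."""
    X, y = np.asarray(X, float), np.asarray(y)
    proto_labels = np.asarray(proto_labels)
    W = np.asarray(prototypes, float).copy()
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x, c = X[i], y[i]
            d = ((W - x) ** 2).sum(axis=1)            # squared distances to all prototypes
            same, other = proto_labels == c, proto_labels != c
            j_plus = np.flatnonzero(same)[d[same].argmin()]    # closest correct prototype
            j_minus = np.flatnonzero(other)[d[other].argmin()] # closest incorrect prototype
            dp, dm = d[j_plus], d[j_minus]
            denom = (dp + dm) ** 2
            # attract the correct winner, repel the incorrect one
            # (constant factors of the gradient absorbed into the learning rate)
            W[j_plus] += lr * (dm / denom) * (x - W[j_plus])
            W[j_minus] -= lr * (dp / denom) * (x - W[j_minus])
    return W
```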
The talk will consider recent developments of LVQ, starting with the basic stochastic gradient learning variant. Several modifications and adaptations for specific aspects, such as matrix-relevance learning, classification-border-sensitive learning and classification learning for imbalanced data, will be explained. Further, we will also focus on the multi-layer network description of LVQ classifiers, which allows close connections to deep networks to be drawn. We will show how successful ideas from deep learning can easily be transferred to LVQ.
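As one illustration of the multi-layer reading, a matrix-relevance (GMLVQ-like) distance can be viewed as a learned linear projection layer followed by a Euclidean distance layer over the prototypes and a winner-take-all output. The sketch below only makes that decomposition visible under these assumptions; it does not reproduce any specific model from the talk.

```python
# The GMLVQ-like distance d(x, w) = (x - w)^T Omega^T Omega (x - w)
# read as two "layers": a linear map Omega, then squared Euclidean distance.
import numpy as np

def gmlvq_distances(X, prototypes, Omega):
    """Return the (n_samples, n_prototypes) matrix of distances
    d(x, w) = ||Omega x - Omega w||^2."""
    Z = X @ Omega.T                  # projection layer: map the data through Omega
    P = prototypes @ Omega.T         # prototypes live in the same mapped space
    diff = Z[:, None, :] - P[None, :, :]
    return (diff ** 2).sum(axis=-1)

# Predicted class = label of the closest prototype (winner-take-all output layer).
```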



 

 

Computational Intelligence Applications in Skeleton-based Forensic Identification: Automating Craniofacial Superimposition and Comparative Radiology

Oscar Cordón
University of Granada
Spain
 

Brief Bio
Prof. Oscar Cordón is Full Professor at the Department of Computer Science and Artificial Intelligence of the University of Granada (UGR) in Spain. He created and headed the Virtual Learning Centre from 2001 to 2005 and is currently Vice-President for Digital University at the UGR. From April 2006 to December 2015, he was also affiliated with the European Centre for Soft Computing, a private international research center, where he first acted as founding Principal Researcher of the Applications of Fuzzy Logic and Evolutionary Algorithms Research Unit until August 2011, and later as Distinguished Affiliated Researcher until December 2015. Prof. Cordón received the UGR Young Researcher Career Award in 2004; the IEEE CIS Outstanding Early Career Award in its 2011 edition, the first such award conferred; the IFSA Award for Outstanding Applications of Fuzzy Technology, also in 2011; and the Spanish National Award on Computer Science ARITMEL from the Spanish Computer Science Scientific Society (SCIE) in 2014. In 2018, he was elevated to IEEE Fellow grade for his contributions to genetic and evolutionary fuzzy systems. He has published more than 350 peer-reviewed scientific publications, including 99 JCR-SCI-indexed journal papers and a co-authored book published by World Scientific in 2001 with more than 1250 citations in Google Scholar. He is included among the 1% most cited researchers in the world (source: Thomson's Web of Knowledge), with 4161 citations from 2651 different citing articles and an h-index of 34 in the Web of Knowledge, and 12099 citations with an h-index of 52 in Google Scholar. He has coordinated 30 research projects and contracts with an overall budget of €7.6M. He is currently or has been Associate Editor of 17 journals, 7 of them indexed in the SCI-JCR. He is also the inventor of an international patent under exploitation and has advised 18 PhD dissertations, one of them recognized with the EUSFLAT Best Ph.D. Thesis Award in 2011 and three others with other awards. He has been an IEEE member since 2004 (senior member since 2011 and fellow since 2018) and has held many different positions in reputed international societies such as the IEEE Computational Intelligence Society (founder and chair of the Genetic Fuzzy Systems Task Force; member of the Fuzzy Systems Technical Committee; and elected member of the Administrative Committee (AdCom); among many others) and the EUSFLAT Society (Treasurer (2005-2007) and Executive Board member (2005-07, 2009-2013)). He has also been involved in the organization of many different conferences: IPC chair of IEEE EFS2006, GEFS2008 and ESTYLF2008; international co-chair of HIS2008; publicity co-chair of IEEE SSCI2009; finance co-chair of IFSA-EUSFLAT 2009; advisory board member of ISDA'09; evolutionary algorithms IPC area chair of IPMU2010; special session co-chair of IEEE CEC 2010 (WCCI 2010); fuzzy image, speech, vision and signal processing IPC area chair of Fuzz-IEEE 2011; special session chair of Fuzz-IEEE 2013; program committee co-chair of IFSA2015; program committee co-chair of IEEE CEC 2015; General Chair of Fuzz-IEEE 2016 (WCCI 2016); and technical co-chair of IEEE CEC 2017, 2019, and of Fuzz-IEEE 2020 (WCCI 2020).
His current main research interests are in the fields of: fuzzy rule-based systems and genetic fuzzy systems; evolutionary algorithms, ant colony optimization and other metaheuristics; computational intelligence applications to different topics (medical image registration, forensic anthropology, etc.); information visualization; and agent-based modeling, social networks, and their applications in marketing science.


Abstract
The primary and most reliable means of forensic identification are fingerprints, comparative dental analysis, and DNA analysis. However, the application of these methods fails when there is not enough ante-mortem (AM) or post-mortem (PM) information, either due to the unavailability of appropriate reference samples or to the degraded condition of the remains. Skeleton-based forensic identification (SFI) techniques then become a sound alternative, because the skeleton usually survives both natural and non-natural decomposition processes. Within SFI, some of the most important techniques are craniofacial superimposition (CFS) and comparative radiography (CR). CFS aims to overlay a skull with AM images of a candidate in order to determine whether they correspond to the same person. CR involves the comparison of AM and PM images of other bones and cavities (skull frontal sinuses, clavicles, patellas, …) which have been reported as useful for positive identification based on their individuality and uniqueness.
Although these SFI techniques have been widely used, there are no commonly accepted methodologies worldwide. Instead, each forensic anthropologist applies a specific approach based on their own expert knowledge and the available technologies. Hence, there is a strong interest in designing systematic and automatic methods to support the forensic anthropologist in applying both CFS and CR, avoiding the use of subjective, error-prone, and time-consuming manual procedures.
The use of computational intelligence (evolutionary algorithms and fuzzy sets) and computer vision (3D-2D image registration and image processing) is a natural way to achieve this aim. This talk is devoted to presenting an intelligent system for CFS developed in collaboration with the University of Granada's Physical Anthropology Lab within a twelve-year-long research project. The resulting system is protected by an international patent and is currently under commercialization in Mexico. The results obtained in several real-world cases solved by the Lab in cooperation with the Spanish Scientific Police will be reported. In addition, a recent proposal of a computer-aided CR paradigm based on the 3D bone scan-2D radiograph superimposition process of any bone or cavity will also be introduced.
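To give a flavour of how an evolutionary algorithm can drive 3D-2D superimposition, the heavily simplified sketch below fits a rigid pose plus scale that projects 3D skull landmarks onto 2D facial landmarks, using differential evolution. It assumes an orthographic projection, already matched landmark pairs and synthetic data; it only illustrates the general idea, not the patented system discussed in the talk.

```python
# Toy 3D-2D landmark registration driven by differential evolution.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.transform import Rotation

def project(params, skull_3d):
    """params = (rx, ry, rz, tx, ty, s): rotate, scale and translate the 3D
    landmarks, then drop the depth coordinate (orthographic projection)."""
    rx, ry, rz, tx, ty, s = params
    R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    pts = s * (skull_3d @ R.T)
    return pts[:, :2] + np.array([tx, ty])

def fitness(params, skull_3d, photo_2d):
    """Mean squared distance between projected skull landmarks and the
    corresponding facial landmarks in the ante-mortem photograph."""
    return float(((project(params, skull_3d) - photo_2d) ** 2).sum(axis=1).mean())

skull_3d = np.random.default_rng(1).normal(size=(8, 3))          # toy 3D landmarks
photo_2d = project([0.1, -0.2, 0.05, 3.0, -1.0, 1.2], skull_3d)  # synthetic target

bounds = [(-np.pi, np.pi)] * 3 + [(-10, 10), (-10, 10), (0.1, 5.0)]
result = differential_evolution(fitness, bounds, args=(skull_3d, photo_2d), seed=0)
print(result.x, result.fun)   # recovered pose parameters and residual error
```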



 

 

An Approximation to the Problem of Information Fusion in Fuzzy Classification

Humberto Bustince
Public University of Navarra
Spain
 

Brief Bio
Humberto Bustince is full professor of Computer Science and Artificial Intelligence at the Public University of Navarra and Honorary Professor at the University of Nottingham. He is the main researcher of the Artificial Intelligence and Approximate Reasoning group of the former university, whose main research lines are both theoretical (aggregation and pre-aggregation functions, information and comparison measures, fuzzy sets and their extensions) and applied (image processing, classification, machine learning, data mining, big data and deep learning). He has led 11 publicly funded R&D research projects at the national and regional level. He has been in charge of research projects in collaboration with first-line private companies in fields such as banking, renewable energies and security, among others. He has taken part in two international research projects. He has authored more than 240 works, according to the Web of Science, in conferences and international journals, most of them in journals in the first quartile of the JCR. Moreover, five of these works are among the highly cited papers of the last ten years, according to the Essential Science Indicators of the Web of Science. He is editor-in-chief of the online magazine Mathware & Soft Computing of the European Society for Fuzzy Logic and Technology (EUSFLAT) and of the Axioms journal. He is an associate editor of the IEEE Transactions on Fuzzy Systems journal and a member of the editorial boards of the journals Fuzzy Sets and Systems, Information Fusion, International Journal of Computational Intelligence Systems and Journal of Intelligent & Fuzzy Systems. He is a Senior Member of the IEEE and a Fellow of the International Fuzzy Systems Association (IFSA). In 2015 he was awarded the prize for the best outstanding paper published in IEEE Transactions on Fuzzy Systems in 2013, and in 2017 he received the Cross of Carlos III el Noble from the Government of Navarra.


Abstract
In this talk we review the notion of aggregation functions as a tool for information fusion. In particular, we focus on the case of Choquet and Sugeno integrals in the discrete setting, since they allow us to take into account possible links between data. We discuss the limitations of aggregation functions, in particular those related to monotonicity, and we propose generalizations, the so-called pre-aggregation functions, which overcome these shortcomings. In particular, we show how we can build generalizations of the usual Choquet and Sugeno integrals by considering operations more general than the product when defining them. These generalizations are, in general, not aggregation functions but only pre-aggregation functions, yet they behave very well in different applications. In particular, we show that they can be used in classification problems to improve some fuzzy rule-based algorithms, obtaining new classifiers at the level of the state of the art.
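As a small worked illustration of the idea, the sketch below computes the discrete Choquet integral and one pre-aggregation style generalization obtained by replacing the product with the minimum, following the spirit of the abstract. The fuzzy measure and the data used in the example are made up; this is only a sketch of the construction, not the specific operators or classifiers presented in the talk.

```python
# Discrete Choquet integral and a pre-aggregation generalization
# (product swapped for another binary operation, e.g. the minimum).
import numpy as np

def choquet(x, measure, op=np.multiply):
    """Discrete Choquet-like integral of x w.r.t. a fuzzy measure.
    measure: callable mapping a frozenset of indices to a value in [0, 1],
             with measure(empty set) = 0 and measure(all indices) = 1.
    op: combines each increment with the measure value; np.multiply gives
        the classical Choquet integral, np.minimum gives a pre-aggregation
        generalization (only directionally monotone, not monotone)."""
    x = np.asarray(x, float)
    order = np.argsort(x)                 # indices from smallest to largest value
    total, previous = 0.0, 0.0
    for k, i in enumerate(order):
        A = frozenset(order[k:])          # indices whose value is >= x[i]
        total += op(x[i] - previous, measure(A))
        previous = x[i]
    return total

# Symmetric (cardinality-based) fuzzy measure, just for the example.
n = 4
power_measure = lambda A: (len(A) / n) ** 2

x = [0.2, 0.9, 0.5, 0.4]
print(choquet(x, power_measure))                 # classical Choquet integral
print(choquet(x, power_measure, op=np.minimum))  # pre-aggregation variant
```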


