IJCCI 2018 Abstracts


Area 1 - Cognitive and Hybrid Systems

Short Papers
Paper Nr: 9
Title:

Preventing Cross-Site Scripting Attacks by Combining Classifiers

Authors:

Fawaz A. Mereani and Jacob M. Howe

Abstract: Cross-Site Scripting (XSS) is one of the most popular attacks targeting web applications. Using XSS, attackers can obtain sensitive information or gain unauthorized privileges. This motivates building a system that can recognise a malicious script when the attacker attempts to store it on a server, preventing the XSS attack. This work uses machine learning to power such a system. The system is based on a combination of classifiers, using cascading to build a two-phase classifier and the stacking ensemble technique to improve accuracy. The system is evaluated and shown to achieve high accuracy and a high detection rate on a large real-world dataset.
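The abstract gives no implementation details; the following is only a rough sketch of combining classifiers by stacking, using placeholder data and base learners rather than the authors' actual feature set or models.

```python
# Illustrative sketch only: stacking two base classifiers for script
# classification. Features, models, and data are placeholders, not the
# setup used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data standing in for numeric features extracted from scripts
# (e.g. counts of suspicious keywords); labels: 1 = malicious, 0 = benign.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier())],
    final_estimator=LogisticRegression())   # meta-learner combines base outputs
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```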
Download

Paper Nr: 38
Title:

Forest Fire Area Estimation using Support Vector Machine as an Approximator

Authors:

Nittaya Kerdprasop, Pumrapee Poomka, Paradee Chuaybamroong and Kittisak Kerdprasop

Abstract: Forest fire is a critical environmental issue that can cause severe damage. Fast detection and accurate estimation of the burned area can help firefighters control the damage effectively. The purpose of this paper is therefore to apply a state-of-the-art data modeling method, the support vector machine (SVM) algorithm, as a tool for estimating the area burned by a forest fire. The dataset is real forest fire data from the Montesinho natural park in the northeast region of Portugal. The original dataset comprises 517 records with 13 attributes. We randomly sample the data 10 times to obtain 10 data subsets for building estimation models using two kinds of SVM kernel: the radial basis function and the polynomial function. The obtained models are compared against other proposed techniques to assess performance based on two measurement metrics: mean absolute error (MAE) and root mean square error (RMSE). The experimental results show that our SVM predictor using the polynomial kernel function can precisely estimate the forest fire damage area, with MAE and RMSE as low as 6.48 and 7.65, respectively. These errors are lower than those of other techniques reported in the literature.
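As a hedged illustration of the modeling setup described (SVM regression with RBF and polynomial kernels, scored by MAE and RMSE), the sketch below uses synthetic data in place of the Montesinho records and arbitrary hyperparameters.

```python
# Illustrative sketch only: SVM regression with RBF and polynomial kernels,
# scored by MAE and RMSE. Synthetic data stands in for the Montesinho
# forest-fire records; hyperparameters are arbitrary, not the paper's.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(517, 12))      # placeholder predictor attributes
y = np.abs(rng.normal(10, 8, size=517))    # burned area (ha), synthetic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for kernel in ("rbf", "poly"):
    model = SVR(kernel=kernel, C=10.0, epsilon=0.5).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    mae = mean_absolute_error(y_te, pred)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{kernel}: MAE={mae:.2f}  RMSE={rmse:.2f}")
```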
Download

Paper Nr: 12
Title:

Cognitive Architecture and Software Environment for the Design and Experimentation of Survival Behaviors in Artificial Agents

Authors:

Bhargav Teja Nallapu and Frédéric Alexandre

Abstract: We discuss here the characteristics of a software environment appropriate for the development of a bio-inspired cognitive architecture, which can emulate the behavior of autonomous intelligent agents. First, we recall that, while the focus is often set on the more abstract aspects of cognitive abilities, studying the fundamental bases of intelligence that allow for autonomy is a prerequisite for well-defined intelligent systems. Secondly, we highlight functional loops associating cerebral structures, including the basal ganglia, in the brain of most species throughout evolution. They are dedicated to the organization of behavior under the constraint of reinforcement, corresponding in their simplest expression to the selection of action for survival. Lastly, concerning the simulation of such models, we describe a software environment to study such relations in a more controlled way than hardware implementations, by adapting a platform built on top of a video game for the development of classical artificial intelligence models. We explain here how our neuronal model exhibits the bodily and internal characteristics necessary for survival tasks and how these characteristics are plugged into the simulation platform. Some survival scenarios are reported as an illustration of this environment.
Download

Paper Nr: 13
Title:

Computational Modelling Auditory Awareness

Authors:

Yu Su, Jingyu Wang, Ke Zhang, Kurosh Madani and Xianyu Wang

Abstract: Human voice and environmental sound recognition have been well studied during the past decades. Nowadays, modeling auditory awareness has received more and more attention. Its basic concept is to imitate the human auditory system in order to give artificial intelligence the ability of auditory perception. To successfully mimic the human auditory mechanism, several models have been proposed in the past decades. Since deep learning (DL) algorithms have better classification performance than conventional approaches (such as GMM and HMM), the latest research works have mainly focused on building auditory awareness models based on deep architectures. In this survey, we offer a concise overview of recent auditory awareness models and development trends. This article includes three parts: i) classical auditory saliency detection (ASD) methods and their developments during the past decades, ii) the application of machine learning in ASD, and iii) summarizing comments and development trends in this field.
Download

Paper Nr: 27
Title:

Sentiment Analysis in Brazilian Portuguese Tweets in the Domain of Calamity: Application of the Summarization Method and Semantic Similarity in Polarized Terms

Authors:

Ariana Moura da Silva, Rodrigo da Matta Bastos and Ricardo Luis de Azevedo da Rocha

Abstract: This research is part of an interdisciplinary project mobilizing the areas of Computer Engineering, Linguistics and Communication to process natural-language texts extracted from the microblogging service Twitter and to analyze and classify the sentiments mined. Many proposals have been formulated using the polarization method; however, most projects do not include automatic classification by semantic proximity. This research aims to evaluate the reactions of individuals shared on the social network, not only classifying them as positive or negative, but also ascertaining the semantic similarity of these messages within the same domain. Based on a set of tweets in Portuguese extracted from a corpus of calamity, we apply three methods: a) the lexical classifier, called the Summarization Method; b) the semantic classifier, LSA - Latent Semantic Analysis; c) the ASSTPS classifier - Analysis of Semantic Similarity in Polarized and Summarized Terms. The methods are applied to a set of 811 tweets from the calamity domain, and the results point out which method obtained the best hit rate and semantic approximation. In this sense, classification of sentiments by semantic proximity can help greatly, sorting the content of relevant messages, discarding unnecessary information, linking messages with the same theme in common, and even generating metrics for classifying emotions.
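As a hedged illustration of the LSA step only (not the Summarization Method or the ASSTPS classifier), the sketch below computes semantic similarity between toy English sentences standing in for the Portuguese tweets.

```python
# Illustrative sketch only: Latent Semantic Analysis (LSA) similarity between
# short texts via TF-IDF + truncated SVD. Toy English sentences stand in for
# the Portuguese calamity tweets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

tweets = ["the river flooded the whole neighborhood",
          "flood waters destroyed several houses",
          "great concert downtown last night"]

tfidf = TfidfVectorizer().fit_transform(tweets)      # term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)   # low-rank semantic space
vectors = lsa.fit_transform(tfidf)

# Cosine similarity in the LSA space approximates semantic proximity.
print(cosine_similarity(vectors))
```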
Download

Paper Nr: 39
Title:

Modeling Method for Temperature Anomaly Analysis

Authors:

Kittisak Kerdprasop, Paradee Chuaybamroong and Nittaya Kerdprasop

Abstract: This study applies intelligent analytical methods to analyze temperature anomaly events during the past seven centuries in Southeast Asian countries including Thailand, Malaysia, Myanmar, and Cambodia. Temperature reconstructions for the years 1300 to 1999 were used as the data source for the anomaly analysis. In the analytical process, correlation analysis was first applied to investigate the concordance of temperature variability among the Southeast Asian countries. The results show that the temperature variability patterns of Thailand, Myanmar, and Cambodia are moderately correlated with each other. On the contrary, the temperature variation patterns of Malaysia do not correlate with those of other countries in the same region. A further in-depth analysis focuses on the temperature anomaly of Thailand, which shows high variability from the 14th to the 16th centuries. Several machine learning algorithms were applied to estimate the temperature anomaly of Thailand based on the anomaly events among its neighbors. The learned models reveal that the Myanmar temperature anomaly is most strongly associated with Thailand's temperature variation. The performance of each model was assessed, and the results reveal that chi-squared automatic interaction detection, or CHAID, is the best one, with a correlation coefficient of 0.624 and a relative error of around 0.611.
Download

Area 2 - Evolutionary Computation

Full Papers
Paper Nr: 18
Title:

Expansion: A Novel Mutation Operator for Genetic Programming

Authors:

Mohiul Islam, Nawwaf Kharma and Peter Grogono

Abstract: Expansion is a novel mutation operator for Genetic Programming (GP). It uses Monte Carlo simulation to repeatedly expand and evaluate programs using unit instructions, taking advantage of the granular search space of evolutionary program synthesis. Monte Carlo simulation and its heuristic search method, Monte Carlo Tree Search, have been applied to Koza-style tree-based representation to compare results with different variation operations such as sub-tree crossover and point mutation. Using a set of benchmark symbolic regression problems, we show that expansion has better fitness performance than point mutation when combined with crossover. It also provides a significant boost in fitness when compared with GP using only crossover on a diverse problem set. We conclude that the best fitness can be achieved by including all three operators in GP: crossover, point mutation and expansion.
Download

Paper Nr: 32
Title:

The Influence of Input Data Standardization Methods on the Prediction Accuracy of Genetic Programming Generated Classifiers

Authors:

Amaal R. Al Shorman, Hossam Faris, Pedro A. Castillo, J. J. Merelo and Nailah Al-Madi

Abstract: Genetic programming (GP) is a powerful classification technique. It is interpretable and it can dynamically build very complex expressions that maximize or minimize some fitness function. It has the capacity to model very complex problems in the areas of Machine Learning, Data Mining and Pattern Recognition. Nevertheless, GP has a high computational time complexity. On the other hand, data standardization is one of the most important pre-processing steps in machine learning. The purpose of this step is to unify the scale of all input features so that they contribute equally to the model. The objective of this paper is to investigate the influence of input data standardization methods on GP and how they affect its prediction accuracy. Six different methods of input data standardization were checked in order to determine which one achieves the most accurate results with the lowest computational cost. The simulations were run on ten benchmark datasets with three different scenarios (varying the population size and number of generations). The results show that the computational efficiency of GP is highly enhanced when coupled with some standardization methods, specifically the Min-Max method for scenario I and the Vector method for scenarios II and III, whereas the Manhattan and Z-Score methods had the worst results for all three scenarios.
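As a hedged illustration, the sketch below shows one plausible per-column formulation of four of the standardization methods named in the abstract; the paper's exact formulations may differ.

```python
# Illustrative sketch only: four input standardization methods (Min-Max,
# Z-Score, Vector/L2, Manhattan/L1), each applied per feature column.
import numpy as np

def min_max(x):      # rescale each column to [0, 1]
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def z_score(x):      # zero mean, unit standard deviation per column
    return (x - x.mean(axis=0)) / x.std(axis=0)

def vector_norm(x):  # divide each column by its Euclidean (L2) norm
    return x / np.linalg.norm(x, ord=2, axis=0)

def manhattan(x):    # divide each column by the sum of absolute values (L1)
    return x / np.abs(x).sum(axis=0)

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 500.0]])
for f in (min_max, z_score, vector_norm, manhattan):
    print(f.__name__, "\n", f(X))
```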
Download

Short Papers
Paper Nr: 6
Title:

Partial Sampling Operator and Tree-structural Distance for Multi-objective Genetic Programming

Authors:

Makoto Ohki

Abstract: This paper describes a technique for the optimization of tree-structured data, or genetic programming (GP), by means of a multi-objective optimization technique. NSGA-II is applied as the framework of the multi-objective optimization. Bloat of the tree structure is one of the major problems in GP. The cause of bloat is that the tree structure obtained by the crossover operator grows bigger and bigger while its evaluation does not improve. To avoid the risk of bloat, a partial sampling (PS) operator is proposed instead of the crossover operator. By repeating the proliferation and metastasis processes of the PS operator, a new tree structure is generated as a new individual. Moreover, the size of the tree and a tree-structural distance (TSD) are additionally introduced as objective functions over the tree-structured data. The optimization problem of the tree-structured data is then defined as a three-objective optimization problem. TSD is also applied to the selection of parent individuals instead of the crowding distance of the conventional NSGA-II. The effectiveness of the proposed techniques is verified by applying them to the double spiral problem.
Download

Paper Nr: 15
Title:

Applying Cartesian Genetic Programming to Evolve Rules for Intrusion Detection System

Authors:

Hasanen Alyasiri, John Clark and Daniel Kudenko

Abstract: With cyber-attacks becoming a regular feature in daily business and attackers continuously evolving their techniques, we are witnessing ever more sophisticated and targeted threats. Various artificial intelligence algorithms have been deployed to analyse such incidents. Extracting knowledge allows the discovery of new attack methods, intrusion scenarios, and attackers’ objectives and strategies, all of which can help distinguish attacks from legitimate behaviour. Among those algorithms, Evolutionary Computation (EC) techniques have seen significant application. Research has shown it is possible to utilize EC methods to construct IDS detection rules. In this paper, we show how Cartesian Genetic Programming (CGP) can construct the behaviour rules upon which an intrusion detection system will be able to make decisions regarding the nature of the activity observed in the system. The CGP framework evolves human-readable solutions that provide an explanation of the logic behind its evolved decisions. Experiments are conducted on up-to-date cybersecurity datasets and compared with state-of-the-art paradigms. We also introduce an ensemble learning paradigm, indicating how CGP can be used as a stacking technique to improve learning performance.
Download

Paper Nr: 19
Title:

Meta Heuristics for Dynamic Machine Scheduling: A Review of Research Efforts and Industrial Requirements

Authors:

Simon Anderer, Thanh-Ha Vu, Bernd Scheuermann and Sanaz Mostaghim

Abstract: This paper presents a survey on the state-of-the-art of dynamic machine scheduling problems. For this purpose, 82 papers have been examined according to the underlying scheduling models and assumptions, the source and implementation of uncertainty and dynamics as well as the applied solution methods and optimization criteria. Furthermore, the integration of machine scheduling into the functional levels of a company is outlined and the essential requirements for dynamic machine scheduling in modern industrial environments are identified. On this basis, the most prevalent gaps, the main challenges, and conclusions for future research are pointed out.
Download

Paper Nr: 28
Title:

Scheduling of Streaming Data Processing with Overload of Resources using Genetic Algorithm

Authors:

Mikhail Melnik, Denis Nasonov and Nikolay Butakov

Abstract: The growing demand for processing of streaming data contributes to the development of distributed streaming platforms, such as Apache Storm or Flink. However, the volume of data and the complexity of its processing are growing extremely fast, which poses new challenges and tasks for developing new tools and methods to improve the efficiency of streaming data processing. One of the main ways to improve system performance is effective scheduling and a proper configuration of the computing platform. Running large-scale streaming applications, especially in the cloud, requires costly computing resources and additional effort to deploy and support the application itself. Thus, there is a need to estimate the performance and behaviour of the system before real calculations are made. Therefore, in this work we propose a model for distributed data stream processing, a statement of the stream scheduling problem, and a simulator of the streaming platform that allows the behaviour of the system to be explored under various conditions. In addition, we propose a genetic algorithm for efficient stream scheduling and conduct experimental studies.
Download

Paper Nr: 34
Title:

An Investigation of Parameter Tuning in the Random Adaptive Grouping Algorithm for LSGO Problems

Authors:

Evgenii Sopov and Alexey Vakhnin

Abstract: Large-scale global optimization (LSGO) is known as one of the most challenging problems for many search algorithms. Many well-known real-world LSGO problems are not separable and are too complex for comprehensive analysis, thus they are viewed as black-box optimization problems. The most advanced algorithms for LSGO are based on cooperative coevolution with problem decomposition using grouping methods. The random adaptive grouping algorithm (RAG) combines the ideas of random dynamic grouping and learning dynamic grouping. In our previous studies, we demonstrated that cooperative coevolution (CC) of the Self-adaptive Differential Evolution (DE) with Neighborhood Search (SaNSDE) with RAG (DECC-RAG) outperforms some state-of-the-art LSGO algorithms on the LSGO benchmarks proposed within the IEEE CEC 2010 and 2013 competitions. Nevertheless, the performance of the RAG algorithm can be improved by tuning the number of subcomponents. Moreover, there is a hypothesis that the number of subcomponents should vary during the run. In this study, we have performed an experimental analysis of parameter tuning in the RAG. The results show that the algorithm performs better when using subcomponents of larger size. In addition, some further improvement can be obtained by applying dynamic group sizing.
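As a hedged illustration of the decomposition idea that RAG builds on, the sketch below performs plain random grouping of variables into equally sized subcomponents; the adaptive/learning part of RAG is not shown.

```python
# Illustrative sketch only: random grouping of decision variables into
# equally sized subcomponents for cooperative coevolution.
import random

def random_grouping(n_vars, n_groups, rng=random):
    idx = list(range(n_vars))
    rng.shuffle(idx)                       # random permutation of variable indices
    size = n_vars // n_groups
    return [idx[g * size:(g + 1) * size] for g in range(n_groups)]

groups = random_grouping(n_vars=1000, n_groups=10)
print(len(groups), "groups of size", len(groups[0]))
```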
Download

Paper Nr: 45
Title:

A Tandem Drone-ground Vehicle for Accessing Isolated Locations for First Aid Emergency Response in Case of Disaster

Authors:

Marcos Calle, Jose Luis Andrade-Pineda, Pedro Luis González-R, Jose Miguel Leon-Blanco and David Canca Ortiz

Abstract: The collapse of infrastructure is very often a complicating factor for early emergency actions after a disaster. Hence, a proper plan is mandatory to better cover the needs of the affected people within the disaster area while maintaining life-saving relief operations. In this paper, we use a drone to fly over a set of difficult-to-access locations and acquire imagery, providing information to build a risk assessment as the earliest stage of the emergency operations. While the drone provides the flexibility required to visit a series of isolated locations in sequence, it needs a command vehicle on the ground for (i) monitoring the deployment of operations and (ii) acting as a recharging station where the drone gets fresh batteries. This work proposes a decision-making process to plan the mission, which is composed of the ground vehicle stopping points and the sequence of locations visited in each drone route. We propose a Genetic Algorithm (GA) which has proven to be helpful in finding good solutions in short computing times. We provide an experimental analysis of the factors affecting the performance of the output solutions, based on an illustrative test instance. The results show the applicability of these techniques for providing proper solutions to the studied problem.
Download

Paper Nr: 5
Title:

Total Optimization of Smart City by Global-best Modified Brain Storm Optimization

Authors:

Mayuko Sato, Yoshikazu Fukuyama, Tatsuya Iizaka and Tetsuro Matsui

Abstract: This paper proposes a total optimization method for a smart city (SC) using Global-best Modified Brain Storm Optimization (GMBSO). Efficient utilization of energy is necessary for the reduction of CO2 emission, and SC demonstration projects have been conducted all over the world to reduce total energy consumption and the amount of CO2 emission. Energy cost, actual electric power loads at peak load hours, and CO2 emission are minimized using the proposed method. Many evolutionary computation techniques, such as Differential Evolutionary Particle Swarm Optimization (DEEPSO) and Global-best Brain Storm Optimization (GBSO), have been applied to the problem. However, there is room for improving solution quality. This paper therefore proposes GMBSO, which combines GBSO with Modified Brain Storm Optimization (MBSO), and applies it to a total optimization problem of a SC. The results of the proposed GMBSO-based method are compared with those of conventional DEEPSO, BSO, GBSO-only, and MBSO-only based methods.
Download

Paper Nr: 7
Title:

Many-Objective Nurse Scheduling using NSGA-II based on Pareto Partial Dominance with Linear Subset-size Scheduling

Authors:

Makoto Ohki

Abstract: This paper describes nurse scheduling in Japanese standard general hospitals. In standard general hospitals in Japan, a three-shift system is basically adopted for the nurses working there. In past work, we compiled the evaluations of the monthly nurse schedule into twelve penalty functions. These twelve penalty functions are translated into twelve objective functions in this paper. The nurse scheduling problem with twelve objective functions is solved as a multi-objective optimization problem by means of NSGA-II. The optimization is insufficient when NSGA-II is applied to an optimization problem with four or more objective functions, known as a many-objective optimization problem. One method for mitigating this problem is a technique based on Pareto partial dominance, in which partial non-dominated sorting is executed using a subset selected from all objective functions. In the conventional technique, the schedule of subset sizes over the optimization run has to be prepared beforehand in the form of a selection list. Moreover, this selection list has a great influence on the result of the optimization, and creating it is a heavy burden for the user. This paper proposes NSGA-II based on Pareto partial dominance with a linear subset-size schedule. By embedding the subset-size schedule into the algorithm, the user, namely the chief nurse, is released from designing the selection list.
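As a hedged illustration of what a linear subset-size schedule could look like (the paper's exact schedule is not given here), consider:

```python
# Illustrative sketch only: a linear schedule for the subset size used in
# Pareto partial dominance, growing from a minimum to the full number of
# objectives over the generations.
def linear_subset_size(generation, max_generations, n_objectives, min_size=2):
    frac = generation / max(1, max_generations - 1)
    return round(min_size + frac * (n_objectives - min_size))

for g in (0, 100, 200, 299):
    print(g, linear_subset_size(g, 300, n_objectives=12))
```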
Download

Paper Nr: 17
Title:

A Cascading Chi-shapes based Decoder for Constraint-handling in Distributed Energy Management

Authors:

Joerg Bremer and Sebastian Lehnhoff

Abstract: A steadily rising share of small, distributed, and volatile energy units like wind energy converters, solar panels, co-generation plants, and similar assigns new tasks and challenges to the smart grid regarding operation and control. The growing complexity of the grid also imposes a growing complexity of constraints that restrict the validity of solutions for operation schedules, resource capacity utilization, or grid compliance. Using surrogate models as an abstraction layer has recently become a promising approach for constructing algorithms independently of any knowledge about the actual device or the constraints restricting its operation. So-called decoders, as a special constraint-handling technique, allow feasible solutions to be generated systematically and directly from a learned surrogate model. Some decoder approaches based on support vector machines have already been implemented, but suffer from performance issues and a sensitive parametrization. We propose a new type of decoder based on a cascade of χ-shapes to overcome these problems. The applicability is demonstrated with a simulation study using different types of flexible energy units.
Download

Paper Nr: 26
Title:

Enhanced Differential Grouping for Large Scale Optimization

Authors:

Mohamed A. Meselhi, Ruhul A. Sarker, Daryl L. Essam and Saber M. Elsayed

Abstract: The curse of dimensionality is considered a main impediment to improving the optimization of large scale problems. An intuitive method to enhance the scalability of evolutionary algorithms is cooperative co-evolution. This method can be used to solve high-dimensionality problems through a divide-and-conquer strategy. Nevertheless, its performance deteriorates if there is any interaction between subproblems. Thus, a method that tries to group interdependent variables in the same group is needed. In addition, the computational cost of current decomposition methods is relatively expensive. In this paper, we propose an enhanced differential grouping (EDG) method that can efficiently uncover separable and nonseparable variables in a first stage. Then, the nonseparable variables are further examined to detect their direct and indirect interdependencies, and all interdependent variables are grouped in the same subproblem. The efficiency of the EDG method was evaluated using large scale global optimization benchmark functions with up to 1000 variables. The numerical experimental results indicate that the EDG method efficiently decomposes benchmark functions with fewer fitness evaluations, in comparison with state-of-the-art methods. Moreover, EDG was integrated with cooperative co-evolution, which shows the efficiency of this method over other decomposition methods.
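As a hedged illustration of the pairwise interaction test that differential-grouping methods rely on (EDG's indirect-interaction detection and evaluation-saving scheme are not shown), consider:

```python
# Illustrative sketch only: variables i and j are treated as interacting if
# the effect of perturbing x_i changes when x_j is also perturbed.
import numpy as np

def interact(f, x, i, j, delta=1.0, eps=1e-6):
    xi = x.copy(); xi[i] += delta              # perturb x_i only
    xj = x.copy(); xj[j] += delta              # perturb x_j only
    xij = xi.copy(); xij[j] += delta           # perturb both
    d1 = f(xi) - f(x)                          # effect of x_i alone
    d2 = f(xij) - f(xj)                        # effect of x_i with x_j shifted
    return abs(d1 - d2) > eps

f = lambda v: v[0] ** 2 + v[0] * v[1] + v[2] ** 2   # x0 and x1 interact
x = np.zeros(3)
print(interact(f, x, 0, 1), interact(f, x, 0, 2))    # True False
```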
Download

Paper Nr: 33
Title:

Revisiting Population Structure and Particle Swarm Performance

Authors:

Carlos M. Fernandes, Nuno Fachada, Juan L. J. Laredo, Juan Julian Merelo, Pedro A. Castillo and Agostinho Rosa

Abstract: Population structure strongly affects the dynamic behavior and performance of the particle swarm optimization (PSO) algorithm. Most PSOs use one of two simple sociometric principles for defining the structure. One connects all the members of the swarm to one another; this strategy is often called gbest and results in a connectivity degree k = n, where n is the population size. The other connects the population in a ring with k = 3. Between these upper and lower bounds there is a vast number of strategies that can be explored for enhancing the performance and adaptability of the algorithm. This paper investigates the convergence speed, accuracy, robustness and scalability of PSOs structured by regular and random graphs with 3 ≤ k ≤ n. The main conclusion is that regular and random graphs with the same average connectivity k may result in significantly different performance, namely when k is low.
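As a hedged illustration of how a regular ring lattice of a chosen connectivity could be constructed for a structured PSO (not the authors' exact graph generator), consider:

```python
# Illustrative sketch only: informants of each particle in a regular ring
# lattice. Each particle is informed by itself plus `half` neighbours on each
# side; k = n - 1 neighbours on both sides would recover the gbest topology.
def ring_neighbors(n, half=1):
    return {i: [(i + d) % n for d in range(-half, half + 1)]  # includes i itself
            for i in range(n)}

print(ring_neighbors(n=6))   # classical ring: each particle has connectivity 3
```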
Download

Paper Nr: 35
Title:

Does the Jaya Algorithm Really Need No Parameters?

Authors:

Willa Ariela Syafruddin, Mario Köppen and Brahim Benaissa

Abstract: The Jaya algorithm is a swarm optimization algorithm formulated on the concept that the solution obtained for a given problem moves toward the best solution and away from the worst solution. Despite being a very simple algorithm, it has shown excellent performance in various applications. It has been claimed that the Jaya algorithm is parameter-free. Here, we investigate whether introducing parameters into Jaya might be of advantage. A comparison of results on different benchmark functions indicates that, apart from a few exceptions, generally no significant improvement of Jaya can be achieved this way. The conclusion is that we have to consider the operation of Jaya differently from a modified PSO and more in the sense of a stochastic gradient descent.
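As a hedged illustration, the standard Jaya position update sketched below shows why the algorithm is usually described as parameter-free apart from population size and iteration count:

```python
# Illustrative sketch only: the standard Jaya position update.
# r1, r2 are uniform random numbers drawn per dimension.
import numpy as np

def jaya_step(pop, fitness):
    best = pop[np.argmin(fitness)]            # minimization assumed
    worst = pop[np.argmax(fitness)]
    r1 = np.random.rand(*pop.shape)
    r2 = np.random.rand(*pop.shape)
    candidate = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    return candidate                          # greedy accept-if-better not shown

pop = np.random.uniform(-5, 5, size=(20, 2))
sphere = lambda p: (p ** 2).sum(axis=1)       # toy benchmark function
new_pop = jaya_step(pop, sphere(pop))
```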
Download

Paper Nr: 42
Title:

Multi-objective Evolutionary Approach in the Linear Dynamical System Inverse Modeling

Authors:

Ivan Ryzhikov, Christina Brester, Eugene Semenkin and Mikko Kolehmainen

Abstract: In this study, we consider an inverse mathematical modeling problem for dynamical systems with a single output. Generally, the final solution of this problem is an approximation of a system transient process and of a system state at some time point. Only those classes of models which describe the transient process properly can portray the system behavior and be applicable to prediction and optimal control problems. One possible mathematical representation of dynamical systems is differential equations, in particular, linear differential equations for linear systems. While solving the inverse problem, we aim to identify the differential equation order and parameters and the initial system state. Since all the parameters are interrelated, we propose to identify them by solving a two-criterion optimization problem, which includes the model adequacy (i.e. a distance between model outputs and observations) and the closeness of the initial value estimate to the observation data. To solve this complex optimization problem, we apply a Real-valued Cooperative Multi-Objective Evolutionary Algorithm whose effectiveness has been proven on a set of high-dimensional test problems. We investigate the dependency between the considered criteria by depicting the Pareto front approximation. Then, with the same amount of computational resources, we vary the system order, the number of control inputs and the initial state to analyze changes in the algorithm's effectiveness with respect to each criterion and to estimate basic limitations. Finally, we conclude that the optimization problem considered is quite challenging and might be used for testing and comparing various heuristics.
Download

Area 3 - Fuzzy Computation

Full Papers
Paper Nr: 31
Title:

Usability of Concordance Indices in FAST-GDM Problems

Authors:

Marcelo Loor, Ana Tapia-Rosero and Guy De Tré

Abstract: A flexible attribute-set group decision-making (FAST-GDM) problem boils down to finding the most suitable option(s) with a general agreement among the participants in a decision-making process in which each option can be described by a flexible collection of attributes. The solution to such a problem can involve a consensus reaching process (CRP) in which the participants iteratively try to reach a general agreement on the best option(s) based on the attributes that are relevant for each participant. A challenging task in a CRP is the selection of an adequate method to determine the level of concordance between the evaluations given by each participant and the collective evaluations computed for the group. To gain insights in this regard, we performed a pilot test in which a group of persons were asked to estimate the level of concordance between individual and collective evaluations obtained while other participants tried to solve a FAST-GDM problem. The perceived concordance levels were compared with several theoretical concordance indices based on similarity measures designed to compare intuitionistic fuzzy sets. This paper presents our findings on how each of the chosen theoretical concordance indices reflected the perceived concordance levels.
Download

Paper Nr: 41
Title:

Rainfall-runoff Modelling in a Semi-urbanized Catchment using Self-adaptive Fuzzy Inference Network

Authors:

Tak Kwin Chang, Amin Talei and Chai Quek

Abstract: Conventional neuro-fuzzy systems used for rainfall-runoff (R-R) modelling generally employ offline learning, in which the number of rules and the rule parameters need to be set by the user in the calibration stage. This makes the rule base fixed and incapable of adapting if some rules become inconsistent over time. In this study, the Self-adaptive Fuzzy Inference Network (SaFIN) is used for the R-R application. SaFIN benefits from an adaptive learning mechanism which allows it to remove inconsistent and obsolete rules over time. SaFIN models are developed to capture the R-R process in two catchments: Dandenong, located in Victoria, Australia, and Sungai Kayu Ara, in Selangor, Malaysia. The models’ performance is then compared with ANFIS, ARX, and physical models. Results show that SaFIN outperforms the ANFIS, ARX, and physical models in simulating runoff for both low and peak flows. This study shows the good potential of using SaFIN in R-R modelling applications.
Download

Short Papers
Paper Nr: 8
Title:

Operator-dependent Modifiers in Nilpotent Logical Systems

Authors:

J. Dombi and O. Csiszár

Abstract: The purpose of the current study is to consider the main unary operators of a nilpotent logical system in an integral framework and to reveal the underlying general structure of all the previously examined operators in nilpotent logical systems. The unary operators are obtained by repeating the argument in multivariable operators. This enables us to provide a widely applicable system, where all the operators are connected to each other, and where the modalities and hedges are operator-dependent. It becomes possible to describe all the operators by using a generator function and a few parameters. The possibility, necessity and sharpness operators are thoroughly examined and it is also shown how the multivariable operators can be derived from the unary ones.
Download

Paper Nr: 14
Title:

A Flexible Approach to Matching User Preferences with Records in Datasets based on the Conformance Measure and Aggregation Functions

Authors:

Miljan Vučetić and Miroslav Hudec

Abstract: Matching user preferences with content in datasets is an important task in building robust query engines. However, this is still a challenging task, because the entities’ attributes are often expressed by various data types including numerical, categorical, and fuzzy data. Moreover, the user’s preferences and the data types of particular attributes may not coincide, i.e. the user expresses his requirements in linguistic terms, whereas the respective attribute is recorded as a real number, and vice versa. Further, the user may assign different relevancies to atomic conditions, where the usual one-directional reinforcement aggregation functions, e.g. conjunction, are not suitable. In this paper, we propose a robust framework capable of managing user requirements and matching them with records in a dataset. The former is handled by the conformance measure, whereas for the latter suitable aggregation functions are suggested to cover particular aggregation needs. Finally, we discuss benefits and drawbacks and outline further activities.
Download

Area 4 - Neural Computation

Full Papers
Paper Nr: 2
Title:

Deep Classifier Structures with Autoencoder for Higher-level Feature Extraction

Authors:

Maysa I. A. Almulla Khalaf and John Q. Gan

Abstract: This paper investigates deep classifier structures with stacked autoencoder (SAE) for higher-level feature extraction, aiming to overcome difficulties in training deep neural networks with limited training data in high-dimensional feature space, such as overfitting and vanishing/exploding gradients. A three-stage learning algorithm is proposed in this paper for training deep multilayer perceptron (DMLP) as the classifier. At the first stage, unsupervised learning is adopted using SAE to obtain the initial weights of the feature extraction layers of the DMLP. At the second stage, error back-propagation is used to train the DMLP by fixing the weights obtained at the first stage for its feature extraction layers. At the third stage, all the weights of the DMLP obtained at the second stage are refined by error back-propagation. Cross-validation is adopted to determine the network structures and the values of the learning parameters, and test datasets unseen in the cross-validation are used to evaluate the performance of the DMLP trained using the three-stage learning algorithm, in comparison with support vector machines (SVM) combined with SAE. Experimental results have demonstrated the advantages and effectiveness of the proposed method.
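As a hedged illustration of the three-stage idea (not the authors' network sizes, data, or training settings), a minimal single-layer sketch could look like:

```python
# Illustrative sketch only: three-stage training of a small classifier with an
# autoencoder-pretrained feature layer. Sizes, optimizers and data are toys.
import torch
import torch.nn as nn

X = torch.randn(256, 100)                     # toy feature vectors
y = torch.randint(0, 2, (256,))               # toy binary labels

# Stage 1: unsupervised autoencoder pre-training of the feature layer.
encoder = nn.Sequential(nn.Linear(100, 32), nn.ReLU())
decoder = nn.Linear(32, 100)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward(); opt.step()

# Stage 2: train only the classifier head; feature-extraction weights frozen.
head = nn.Linear(32, 2)
for p in encoder.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(encoder(X)), y)
    loss.backward(); opt.step()

# Stage 3: unfreeze everything and fine-tune the whole network end to end.
for p in encoder.parameters():
    p.requires_grad = True
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(encoder(X)), y)
    loss.backward(); opt.step()
```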
Download

Paper Nr: 3
Title:

Evaluation Platform for Artificial Intelligence Algorithms

Authors:

Zoltan Czako, Gheorghe Sebestyen and Anca Hangan

Abstract: Currently, artificial intelligence (AI) algorithms have been receiving a lot of attention from researchers as well as from commercial product developers. Hundreds of different AI algorithms aimed at different kinds of real-life problems give qualitatively different results depending on the nature of the data, the nature of the problem, and the context in which they are used. Choosing the most appropriate algorithm to solve a particular problem is not a trivial task. The goal of our research is to create a platform which can be used in the early stage of problem solving. With this platform, users can quickly train, test and evaluate several artificial intelligence algorithms and find out which algorithm performs best for a specific problem. Moreover, this platform helps developers tune the parameters of the chosen algorithm in order to get better results on their problem. We demonstrate our approach by running different types of algorithms, initially on a breast cancer sample dataset, and then use the platform to solve an anomaly detection problem.
Download

Paper Nr: 4
Title:

Experimental Evaluation of Point Cloud Classification using the PointNet Neural Network

Authors:

Marko Filipović, Petra Ðurović and Robert Cupec

Abstract: Recently, new approaches for deep learning on unorganized point clouds have been proposed. Previous approaches used multiview 2D convolutional neural networks, volumetric representations or spectral convolutional networks on meshes (graphs). On the other hand, deep learning on point sets hasn’t yet reached the “maturity” of deep learning on RGB images. To the best of our knowledge, most of the point cloud classification approaches in the literature were based either only on synthetic models, or on a limited set of views from depth sensors. In this experimental work, we use a recent PointNet deep neural network architecture to reach the same or better level of performance as specialized hand-designed descriptors on a difficult dataset of non-synthetic depth images of small household objects. We train the model on synthetically generated views of 3D models of objects, and test it on real depth images.
Download

Short Papers
Paper Nr: 10
Title:

Making AI Great Again: Keeping the AI Spring

Authors:

Lito Perez Cruz and David Treisman

Abstract: There are philosophical implications to how we define Artificial Intelligence (AI). To talk about AI is to deal with philosophy. Working on the intersections between these subjects, this paper takes a multi-lens approach in examining the reasons for the present resurgence of interest in things AI through a range of historical, linguistic, mathematical and economic perspectives. It identifies AI’s past decline and offers suggestions on how to sustain and give substance to the current global hype and frenzy surrounding AI.
Download

Paper Nr: 20
Title:

A Grid Cell Inspired Model of Cortical Column Function

Authors:

Jochen Kerdels and Gabriele Peters

Abstract: The cortex of mammals has a distinct, low-level structure consisting of six horizontal layers that are vertically connected by local groups of about 80 to 100 neurons forming so-called minicolumns. A well-known and widely discussed hypothesis suggests that this regular structure may indicate that there could be a common computational principle that governs the diverse functions performed by the cortex. However, no generally accepted theory regarding such a common principle has been presented so far. In this position paper we provide a novel perspective on a possible function of cortical columns. Based on our previous efforts to model the behaviour of entorhinal grid cells we argue that a single cortical column can function as an independent, autoassociative memory cell (AMC) that utilizes a sparse distributed encoding. We demonstrate the basic operation of this AMC by a first set of preliminary simulation results.
Download

Paper Nr: 21
Title:

Problem Solving using Recurrent Neural Network based on the Effects of Gestures

Authors:

Sanghun Bang and Charles Tijus

Abstract: Models of puzzle problem solving, such as the Tower of Hanoi, are based on move analysis. In a grounded and embodiment-based approach to cognition, we thought that the gestures made to take the discs from one place and put them in another could be beneficial to the learning process, as well as to modeling and simulation. Gestures comprise moves, but they are also prerequisites of moves, when the free hand goes to a location to take a disc. Our hypothesis is that we can model the solving of the Tower of Hanoi by observing the actions of the hand with and without objects. We collected sequential data on the moves and gestures of participants solving the Tower of Hanoi with four discs and then trained a Recurrent Neural Network model of the Tower of Hanoi on these data in order to find the shortest solution path. In this paper, we propose an approach for training on change-of-state sequences which combines Recurrent Neural Network and Reinforcement Learning methods.
Download

Paper Nr: 30
Title:

A Case Study on using Crowdsourcing for Ambiguous Tasks

Authors:

Ankush Chatterjee, Umang Gupta and Puneet Agrawal

Abstract: In our day-to-day life, we come across situations which are interpreted differently by different human beings. A given sentence may be offensive to some humans but not to others. Similarly, a sentence can convey different emotions to different human beings. For instance, “Why you never text me!” can be interpreted either as a sad or an angry utterance. The lack of facial expressions and voice modulation makes detecting emotions in textual sentences a hard problem. Some textual sentences are inherently ambiguous and their true emotion label is difficult to determine. In this paper, we study how to use crowdsourcing for the ambiguous task of determining the emotion labels of textual sentences. Crowdsourcing has become one of the most popular media for obtaining large-scale labeled data for supervised learning tasks. However, for our task, due to the intrinsic ambiguity, human annotators differ in their opinions about the underlying emotion of certain sentences. In our work, we harness the multiple perspectives of annotators on ambiguous sentences to improve the performance of an emotion detection model. In particular, we compare our technique against the popularly used technique of majority vote to determine the label of a given sentence. Our results indicate that considering the diverse perspectives of annotators is helpful for the ambiguous task of emotion detection.
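As a hedged illustration of the aggregation choice being compared (majority vote versus retaining the annotators' full label distribution; how the paper actually feeds the distribution to its model is not shown), consider:

```python
# Illustrative sketch only: two ways to aggregate crowd annotations for one
# sentence. Majority vote collapses disagreement; the soft distribution keeps it.
from collections import Counter

annotations = ["sad", "angry", "sad", "angry", "angry"]

counts = Counter(annotations)
majority_label = counts.most_common(1)[0][0]                  # single hard label
soft_label = {lab: c / len(annotations) for lab, c in counts.items()}

print(majority_label)      # angry
print(soft_label)          # {'sad': 0.4, 'angry': 0.6}
```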
Download

Paper Nr: 46
Title:

Circulating Tumor Enumeration using Deep Learning

Authors:

Stephen Obonyo and Joseph Orero

Abstract: Cancer is the third biggest killer, just after infectious and cardiovascular diseases. Existing cancer treatment methods vary among patients based on the type and stage of tumor development. Treatment modalities such as chemotherapy, surgery and radiation are successful when the disease is detected early and regularly monitored. Enumeration and detection of Circulating Tumor Cells (CTCs) is a key monitoring method which involves the identification of cancer-related substances known as tumor markers, which are excreted by primary tumors into the patient’s blood. The presence, absence or number of CTCs in blood can be used as a treatment metric. As such, the metric can be used to evaluate a patient’s disease progression and determine the effectiveness of the treatment option the patient is subjected to. In this paper, we present a deep learning model based on a Convolutional Neural Network which learns to enumerate CTCs from stained image samples. With no human intervention, the model learns the best set of representations to enumerate CTCs.
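As a hedged illustration only, a minimal CNN that classifies stained image patches and derives a crude count could look like the sketch below; the architecture, patch size, and data are placeholders, not the authors' model.

```python
# Illustrative sketch only: a small CNN that classifies image patches as
# CTC / non-CTC; counting positive patches gives a crude enumeration.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Linear(32 * 16 * 16, 2)   # assumes 64x64 input patches

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

patches = torch.randn(8, 3, 64, 64)                    # a batch of toy patches
logits = PatchCNN()(patches)
ctc_count = (logits.argmax(dim=1) == 1).sum().item()   # crude enumeration
print("predicted CTC patches:", ctc_count)
```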
Download