Abstract: |
This work describes a methodology for combining logic-based systems and connectionist systems. Our approach uses finite truth-valued {\L}ukasiewicz logic, where we take advantage of the fact, presented in \cite{Castro98}, that every connective can be defined by a neuron in an artificial network whose activation function is the identity truncated to zero and one. This allows the injection of first-order formulas into a network architecture and also simplifies symbolic rule extraction. Neural networks are trained using the Levenberg-Marquardt algorithm, where we restrict knowledge dissemination in the network structure, and the generated network is simplified by applying the "Optimal Brain Surgeon" algorithm proposed by B. Hassibi, D. G. Stork, and G. J. Wolff. This procedure reduces neural network plasticity without drastically damaging learning performance, making the descriptive power of the produced neural networks similar to that of the {\L}ukasiewicz logic language and simplifying the translation between symbolic and connectionist structures. We used this method on the reverse engineering problem of finding the formula used to generate a given truth table. For real data sets, the method is particularly useful for attribute selection in binary classification problems defined over nominal attributes, where each instance has an associated level of uncertainty.