Current Trends in Neural Networks: Theory and Applications

Recent progress in connectionism/neural networks has come from three main areas: (1) increasing the biological plausibility of network models; (2) precise analysis of feed-forward network behavior with respect to traditional symbol-processing concepts; and (3) the development and application of recurrent (feedback) networks to sequence-processing tasks. This session presents current work in all three areas.

Speakers:

Feed-forward Neural Networks based on Self-Extracted Knowledge
Hyeoncheol Kim, Korea University
We introduce a hybrid model that combines neural network learning with the knowledge the network learns. It involves training a neural network on domain data, transforming the connection weights of the trained network into symbolic rules, and building a new network from that symbolic knowledge. In this article, we show that knowledge can be represented in either of two different forms, connection weights or symbolic rules, which are mutually interchangeable. The hybrid model offers lower structural complexity and better performance than models based on a neural network alone or a symbolic rule base alone. Empirical results are also shown.
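
As a rough illustration of the weights-to-rules direction described above, here is a minimal Python sketch that reads a crude IF-THEN rule off a single trained threshold unit. The function name `extract_rules` and the significance-threshold heuristic are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def extract_rules(weights, bias, feature_names, significance=0.5):
    """Turn one trained threshold unit into a crude symbolic rule.

    Inputs with |weight| >= significance become rule antecedents;
    positive weights appear as-is, negative weights as negations.
    (Illustrative heuristic only, not the authors' algorithm.)
    """
    antecedents = []
    for w, name in zip(weights, feature_names):
        if abs(w) >= significance:
            antecedents.append(name if w > 0 else f"NOT {name}")
    return f"IF {' AND '.join(antecedents)} THEN class=1  (bias={bias:.2f})"

# Toy trained unit: strong evidence from x0, counter-evidence from x2.
w = np.array([1.3, 0.1, -0.9])
print(extract_rules(w, bias=-0.5, feature_names=["x0", "x1", "x2"]))
```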

Mapping a Decision Tree for Classification into a Neural Network
LI Aijun, LIU Yunhui and LUO Siwei, Northern Jiaotong University
Traditional neural networks have their own limitations: for example, training time is long and the topology is not specified in advance. Since decision trees and neural networks behave similarly in classification and have equivalent properties, a decision tree can be used to provide a systematic design method for a neural network. In this paper, we propose a new mapping between decision trees and neural networks that precisely specifies the number of units, layers, and connections and the initial parameter settings of the neural network. Further, we cite two theorems to show that the mapping is reasonable.
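
A minimal sketch of the general idea, assuming axis-parallel tests: each internal test node becomes a hidden unit (a steep sigmoid approximating the threshold test), and each leaf becomes an output unit that ANDs the tests on its root path. The tree encoding and `map_tree` below are hypothetical, not the paper's construction:

```python
# Toy decision tree: (feature, threshold, left, right); strings are leaf labels.
tree = ("x0", 0.5,
        ("x1", 0.3, "A", "B"),
        "C")

def map_tree(tree):
    """Collect the network topology implied by the tree: one hidden unit
    per internal test, one output unit per leaf, with each leaf's output
    unit wired (with sign) to the tests on its root path."""
    tests, leaves = [], []
    def walk(node, path):
        if isinstance(node, str):          # leaf: record its path conditions
            leaves.append((node, path))
            return
        feature, threshold, left, right = node
        idx = len(tests)
        tests.append((feature, threshold))
        walk(left, path + [(idx, +1)])     # left branch: test satisfied
        walk(right, path + [(idx, -1)])    # right branch: test not satisfied
    walk(tree, [])
    return tests, leaves

tests, leaves = map_tree(tree)
print(f"{len(tests)} hidden units (tests), {len(leaves)} output units (leaves)")
for label, path in leaves:
    print(label, "<-", path)
```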

A Neural Network for High-Level Cognitive Control of Serial Order Behavior
Steve Donaldson, Samford University
Cognitive behaviors presuppose an ability to process serially ordered information, but in entities that demonstrate higher levels of intelligence, rote recall is insufficient. Humans show a marked ability to reproduce previously learned sequences in varied forms that not only provide rich and interesting responses, but enable processing flexibility. This work shows how an artificial neural network that combines predictive learning, sequence interleaving, and sequence creation components can model such behavior, thus leading to an advanced form of nonstereotypical serial order processing.
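
For the predictive-learning component only, a common building block for serial recall is a successor-association matrix trained with Hebbian outer products; the sketch below is that generic building block, not Donaldson's architecture:

```python
import numpy as np

# One-hot codes for a toy alphabet; W learns item -> successor associations.
items = "ABCD"
codes = np.eye(len(items))
W = np.zeros((len(items), len(items)))

# Hebbian (outer-product) learning of the sequence A->B->C->D:
# after training, W @ code(current) points at the predicted next item.
seq = [0, 1, 2, 3]
for cur, nxt in zip(seq, seq[1:]):
    W += np.outer(codes[nxt], codes[cur])

state = codes[0]                  # start recall from 'A'
for _ in range(3):
    state = W @ state             # predict (recall) the next item
    print(items[int(state.argmax())])
```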

An Evolutionary Strategy for Supervised Training of Biologically Plausible Neural Networks
Ammar Belatreche, Liam P Maguire, Martin McGinnity and Qing Xiang Wu, University of Ulster
Spiking neural networks represent a more plausible model of real biological neurons. In contrast to classical artificial neural networks, which adopt a high abstraction of real neurons, spiking neurons treat time as an essential feature of information representation and processing. However, effective training algorithms are needed to exploit these realistic models fully. Most existing learning paradigms adjust the synaptic weights in an unsupervised way, based on adaptations of the well-known Hebbian rule. In this paper, a new approach to supervised training of a biologically plausible architecture is presented. An adapted evolutionary strategy is used to adjust the synaptic strengths and delays, which are responsible for learning the model of spike trains fed to the input neurons. The algorithm is applied to complex nonlinearly separable problems, and the results show that the network is able to perform learning successfully using temporal encoding.
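
To show the flavor of evolving weights and delays together, here is a minimal (1+1)-evolution strategy; the `spike_time` surrogate is a stand-in for a real spiking-neuron simulation and is purely illustrative, as is the fitness function:

```python
import numpy as np
rng = np.random.default_rng(1)

t_in = np.array([1.0, 3.0, 2.0])      # input spike times (ms)
t_target = 5.0                        # desired output spike time (ms)

def spike_time(w, d):
    """Toy surrogate for a spiking neuron: the output spike time is a
    weight-normalised average of the delayed input spike times.
    (Stand-in for a real SNN simulation, for illustration only.)"""
    w = np.abs(w) + 1e-9
    return float(np.dot(w, t_in + d) / w.sum())

def fitness(w, d):
    return -(spike_time(w, d) - t_target) ** 2   # higher is better

# (1+1)-evolution strategy mutating both weights and (nonnegative) delays.
w, d, sigma = rng.normal(size=3), rng.uniform(0, 2, 3), 0.3
for step in range(200):
    w2 = w + sigma * rng.normal(size=3)
    d2 = np.clip(d + sigma * rng.normal(size=3), 0, None)
    if fitness(w2, d2) >= fitness(w, d):
        w, d = w2, d2                 # keep the offspring if no worse
print(f"output spike at {spike_time(w, d):.3f} ms (target {t_target} ms)")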

Dynamical Parsing to Fractal Representations
Simon D. Levy, Washington & Lee University
A connectionist parsing model is presented in which traditional formal computing mechanisms (Finite-State Automaton; Parse Tree) have direct recurrent neural-network analogues (Sequential Cascaded Net; Fractal RAAM Decoder). The model is demonstrated on a paradigmatic formal context-free language and an arithmetic-expression parsing task. Advantages and current shortcomings of the model are described, and its contribution to the ongoing debate about the role of connectionism in language processing tasks is discussed.
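
A sequential cascaded network is a second-order recurrent net in which the current state multiplicatively gates the input-to-state weights, so each state indexes a different transition function. The sketch below shows one untrained step of that generic mechanism, with random weights; it is an illustration of the network class, not the paper's trained model:

```python
import numpy as np
rng = np.random.default_rng(2)

STATE, INPUT = 3, 2
T = rng.normal(scale=0.5, size=(STATE, STATE + 1, INPUT + 1))  # +1s for biases

def scn_step(state, x):
    """One step of a sequential cascaded (second-order) network:
    contracting T with the state yields a state-dependent weight
    matrix, which is then applied to the input."""
    s = np.append(state, 1.0)                 # state with bias term
    xb = np.append(x, 1.0)                    # input with bias term
    W = np.tensordot(T, s, axes=([1], [0]))   # state-dependent weights
    return 1.0 / (1.0 + np.exp(-(W @ xb)))    # sigmoid squashing

state = np.full(STATE, 0.5)
for symbol in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    state = scn_step(state, symbol)
    print(state)
```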

Shallow Parsing with Long Short-Term Memory
James Hammerton, University of Groningen
Applying Artificial Neural Networks (ANNs) to language learning has been an active area of research in connectionism. However, much of this work has involved small and/or artificially created data sets, whilst other approaches to language learning are now routinely applied to large real-world datasets containing hundreds of thousands of words or more, raising the question of how ANNs might scale up. This paper describes recent work on shallow parsing of real-world texts using a recurrent neural network (RNN) architecture called Long Short-Term Memory (LSTM).
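
For readers unfamiliar with LSTM, here is one common formulation of a single LSTM step (with forget gate) in plain numpy; the dimensions and random weights are illustrative, and the tagging comment at the end merely indicates how such a cell could feed a shallow-parsing (chunk-tagging) output layer:

```python
import numpy as np
rng = np.random.default_rng(3)

IN, HID = 4, 8   # e.g. 4-dim word features, 8 memory cells (toy sizes)
# One weight matrix per gate, acting on the concatenated [h, x] vector.
Wf, Wi, Wo, Wg = (rng.normal(scale=0.1, size=(HID, HID + IN)) for _ in range(4))
bf = np.ones(HID)        # positive forget-gate bias helps retain state
bi = bo = bg = np.zeros(HID)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(h, c, x):
    """One LSTM step: gates decide what to forget, write, and expose,
    letting error flow through the cell state c over long spans."""
    hx = np.concatenate([h, x])
    f = sigmoid(Wf @ hx + bf)      # forget gate
    i = sigmoid(Wi @ hx + bi)      # input gate
    o = sigmoid(Wo @ hx + bo)      # output gate
    g = np.tanh(Wg @ hx + bg)      # candidate cell update
    c = f * c + i * g              # new cell state
    h = o * np.tanh(c)             # new hidden state
    return h, c

h, c = np.zeros(HID), np.zeros(HID)
for x in rng.normal(size=(5, IN)):  # a 5-word toy "sentence"
    h, c = lstm_step(h, c, x)
# In shallow parsing, h would feed a softmax over chunk tags (B-NP/I-NP/O).
print(h.round(3))
```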




Last modified: July 23, 2003 by Dan Ventura.