
Dan Ventura's Publications (detailed list) 


THIS PAGE IS NO LONGER MAINTAINED. Click here for our new publications list, which is more up to date.
This page contains the titles and abstracts of papers written by Dan Ventura, a member of the BYU Neural Networks and Machine Learning (NNML) Research Group. PostScript files are available for most papers. A more concise list is also available.
To view the entire list in one page, click here.
A Subsymbolic Model of the Cognitive Processes of Re-representation and Insight
 Authors: Dan Ventura
 Abstract:
We present a subsymbolic computational model for effecting knowledge re-representation and insight. Given a set of data, manifold learning is used to automatically organize the data into one or more representational transformations, which are then learned with a set of neural networks. The result is a set of neural filters that can be applied to new data as re-representation operators.
 Reference: In Proceedings of ACM Creativity and Cognition, to appear, October 2009.
 BibTeX
 Download the file: pdf
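As an illustration of the pipeline this abstract describes, here is a minimal, hypothetical sketch: PCA (via SVD) stands in for the manifold learner, and a linear least-squares map stands in for the neural filter that re-represents new data. All names and data are illustrative; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data lying near a 2-D subspace of R^5 (stand-in for a learned manifold).
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 5))
X = latent @ basis + 0.01 * rng.normal(size=(200, 5))

# "Manifold learning": PCA via SVD recovers a 2-D representation.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T            # low-dimensional re-representation

# "Neural filter": a linear map fit (least squares) to reproduce the
# transformation, so it can be applied to new data as an operator.
W, *_ = np.linalg.lstsq(Xc, coords, rcond=None)

# Apply the learned filter to unseen data drawn from the same manifold.
new_latent = rng.normal(size=(10, 2))
X_new = new_latent @ basis
re_rep = (X_new - X.mean(axis=0)) @ W

reconstruction_error = np.abs(coords - Xc @ W).max()
```

The key design point the abstract implies is that the transformation, once learned, is a reusable operator rather than a one-off embedding.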
A Reductio Ad Absurdum Experiment in Sufficiency for Evaluating (Computational) Creative Systems
 Authors: Dan Ventura
 Abstract:
We consider a combination of two recent proposals for characterizing computational creativity and explore the sufficiency of the resultant framework. We do this in the form of a gedanken experiment designed to expose the nature of the framework, what it has to say about computational creativity, how it might be improved and what questions this raises.
 Reference: In Proceedings of the International Joint Workshop on Computational Creativity, pages 11–19, September 2008.
 BibTeX
 Download the file: pdf
Data-Driven Programming and Behavior for Autonomous Virtual Characters
 Authors: Jonathan Dinerstein and Dan Ventura and Michael Goodrich and Parris Egbert
 Abstract:
We present a high-level overview of a system for programming autonomous virtual characters by demonstration. The result is a deliberative model of agent behavior that is stylized and effective, as demonstrated in five different case studies.
 Reference: In Proceedings of the Association for the Advancement of Artificial Intelligence, pages 1450–1451, July 2008.
 BibTeX
 Download the file: pdf
Subsymbolic Re-representation to Facilitate Learning Transfer
 Authors: Dan Ventura
 Abstract:
We consider the issue of knowledge (re)representation in the context of learning transfer and present a subsymbolic approach for effecting such transfer. Given a set of data, manifold learning is used to automatically organize the data into one or more representational transformations, which are then learned with a set of neural networks. The result is a set of neural filters that can be applied to new data as re-representation operators. Encouraging preliminary empirical results elucidate the approach and demonstrate its feasibility, suggesting possible implications for the broader field of creativity.
 Reference: In Creative Intelligent Systems, AAAI 2008 Spring Symposium Technical Report SS-08-03, pages 128–134, March 2008.
 BibTeX
Robust Multi-Modal Biometric Fusion via SVM Ensemble
 Authors: Sabra Dinerstein and Jon Dinerstein and Dan Ventura
 Abstract:
Existing learning-based multi-modal biometric fusion techniques typically employ a single static Support Vector Machine (SVM). This type of fusion improves the accuracy of biometric classification, but it also has serious limitations because it is based on the assumptions that the set of biometric classifiers to be fused is local, static, and complete. We present a novel multi-SVM approach to multi-modal biometric fusion that addresses the limitations of existing fusion techniques and show empirically that our approach retains good classification accuracy even when some of the biometric modalities are unavailable.
 Reference: In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pages 1530–1535, October 2007.
 BibTeX
 Download the file: pdf
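A dependency-free sketch of the idea, with a simple mean-difference linear scorer standing in for each per-modality SVM (the paper's actual fusion uses real SVMs): score-level fusion over whichever modalities are present degrades gracefully when one is missing. All names and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "modalities" (e.g. face and voice features) for 200 samples.
n = 200
y = rng.integers(0, 2, size=n)
face = y[:, None] + 0.5 * rng.normal(size=(n, 2))
voice = y[:, None] + 0.7 * rng.normal(size=(n, 2))

def train_scorer(X, y):
    """Stand-in for a per-modality SVM: signed distance to the midpoint
    of the two class means, along the mean-difference direction."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    w = m1 - m0
    b = -w @ (m0 + m1) / 2
    return lambda Z: Z @ w + b

score_face = train_scorer(face, y)
score_voice = train_scorer(voice, y)

def fuse(face_x=None, voice_x=None):
    """Score-level fusion over whichever modalities are available."""
    scores = []
    if face_x is not None:
        scores.append(score_face(face_x))
    if voice_x is not None:
        scores.append(score_voice(voice_x))
    return (np.mean(scores, axis=0) > 0).astype(int)

acc_both = (fuse(face, voice) == y).mean()
acc_face_only = (fuse(face_x=face) == y).mean()   # voice sensor missing
```

The ensemble structure is the point: because each modality has its own classifier, dropping a modality removes one scorer rather than invalidating a monolithic fusion model.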
Learning Policies for Embodied Virtual Agents Through Demonstration
 Authors: Jonathan Dinerstein and Parris Egbert and Dan Ventura
 Abstract:
Although many powerful AI and machine learning techniques exist, it remains difficult to quickly create AI for embodied virtual agents that produces visually lifelike behavior. This is important for applications (e.g., games, simulators, interactive displays) where an agent must behave in a manner that appears human-like. We present a novel technique for learning reactive policies that mimic demonstrated human behavior. The user demonstrates the desired behavior by dictating the agent’s actions during an interactive animation. Later, when the agent is to behave autonomously, the recorded data is generalized to form a continuous state-to-action mapping. Combined with an appropriate animation algorithm (e.g., motion capture), the learned policies realize stylized and natural-looking agent behavior. We empirically demonstrate the efficacy of our technique for quickly producing policies which result in lifelike virtual agent behavior.
 Reference: In Proceedings of the International Joint Conference on Artificial Intelligence, pages 1257–1262, Hyderabad, India, January 2007.
 BibTeX
 Download the file: pdf
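The state-to-action generalization step can be sketched as a k-nearest-neighbor average over recorded demonstration pairs. This is only a toy stand-in for the paper's method; the demonstrated "policy" here is simply steering toward the origin, and all names are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Demonstration: the user steers the agent; we log (state, action) pairs.
# Toy behavior being demonstrated: move toward the origin.
states = rng.uniform(-1, 1, size=(500, 2))
actions = -states + 0.05 * rng.normal(size=(500, 2))

def policy(state, k=5):
    """Continuous state-to-action mapping: average the actions of the
    k nearest demonstrated states (a simple form of generalization)."""
    d = np.linalg.norm(states - state, axis=1)
    idx = np.argsort(d)[:k]
    return actions[idx].mean(axis=0)

# Query the learned policy in a state that was never demonstrated exactly.
a = policy(np.array([0.5, -0.5]))
```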
Clustering Music via the Temporal Similarity of Timbre
 Authors: Jake Merrell and Dan Ventura and Bryan Morse
 Abstract:
We consider the problem of measuring the similarity of streaming music content and present a method for modeling, on the fly, the temporal progression of a song’s timbre. Using a minimum distance classification scheme, we give an approach to classifying streaming music sources and present performance results for auto-associative song identification and for content-based clustering of streaming music. We discuss possible extensions to the approach and possible uses for such a system.
 Reference: In IJCAI Workshop on Artificial Intelligence and Music, pages 153–164, January 2007.
 BibTeX
 Download the file: pdf
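A toy sketch of minimum-distance song identification (ignoring the temporal-progression modeling for brevity): each song is a sequence of timbre feature frames, summarized by its running mean, and a streaming excerpt is classified by minimum distance to the stored models. Data and names are synthetic, not derived from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Each "song" is a sequence of timbre feature frames (e.g. spectral stats);
# here, three synthetic songs with distinct timbre centers.
centers = rng.normal(size=(3, 4)) * 3
songs = [c + 0.3 * rng.normal(size=(100, 4)) for c in centers]

# On-the-fly model: mean of each song's frames observed so far.
models = [s.mean(axis=0) for s in songs]

def identify(stream):
    """Minimum-distance classification of a streaming excerpt."""
    probe = stream.mean(axis=0)
    return int(np.argmin([np.linalg.norm(probe - m) for m in models]))

# Auto-associative test: a short noisy excerpt of song 1 maps back to song 1.
excerpt = centers[1] + 0.3 * rng.normal(size=(20, 4))
label = identify(excerpt)
```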
Geometric Task Decomposition in a Multiagent Environment
 Authors: Kaivan Kamali and Dan Ventura and Amulya Garga and Soundar Kumara
 Abstract:
Task decomposition in a multiagent environment is often performed online. This paper proposes a method for subtask allocation that can be performed before the agents are deployed, reducing the need for communication among agents during their mission. The proposed method uses a Voronoi diagram to partition the task space among team members and includes two phases: static and dynamic. Static decomposition (performed in simulation before the start of the mission) repeatedly partitions the task space by generating random diagrams and measuring the efficacy of the corresponding subtask allocation. If necessary, dynamic decomposition (performed in simulation after the start of a mission) modifies the result of a static decomposition (i.e. in case of resource limitations for some agents). Empirical results are reported for the problem of surveillance of an arbitrary region by a team of agents.
 Reference: Applied Artificial Intelligence, volume 20 (5), pages 437–456, 2006.
 BibTeX
 Download the file: pdf
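The static-decomposition phase can be sketched as follows: sample random Voronoi site placements, assign each task point to its nearest site, and keep the placement whose subtask sizes are most balanced. The efficacy measure used here (count balance) is a placeholder, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

# Region to survey, discretized into points; three agents.
ys, xs = np.mgrid[0:20, 0:20]
region = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
n_agents = 3

def voronoi_assign(sites):
    """Assign each task point to its nearest site (a discrete Voronoi cell)."""
    d = np.linalg.norm(region[:, None, :] - sites[None, :, :], axis=2)
    return d.argmin(axis=1)

def efficacy(assign):
    """Static-phase fitness: prefer balanced subtask sizes across agents."""
    counts = np.bincount(assign, minlength=n_agents)
    return -counts.std()

# Static decomposition: sample random diagrams, keep the most balanced one.
best_sites, best_fit = None, -np.inf
for _ in range(200):
    sites = rng.uniform(0, 20, size=(n_agents, 2))
    fit = efficacy(voronoi_assign(sites))
    if fit > best_fit:
        best_sites, best_fit = sites, fit

balance = np.bincount(voronoi_assign(best_sites), minlength=n_agents)
```

Because the search runs entirely in simulation before deployment, the agents need no mission-time communication to agree on the partition, which is the point of the static phase.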
Fast and Robust Incremental Action Prediction for Interactive Agents
 Authors: John Dinerstein and Dan Ventura and Parris Egbert
 Abstract:
The ability for a given agent to adapt online to better interact with another agent is a difficult and important problem. This problem becomes even more difficult when the agent to interact with is a human, since humans learn quickly and behave nondeterministically. In this paper we present a novel method whereby an agent can incrementally learn to predict the actions of another agent (even a human), and thereby can learn to better interact with that agent. We take a case-based approach, where the behavior of the other agent is learned in the form of state-action pairs. We generalize these cases either through continuous k-nearest neighbor or a modified bounded minimax search. Through our case studies, our technique is empirically shown to require little storage, learn very quickly, and be fast and robust in practice. It can accurately predict actions several steps into the future. Our case studies include interactive virtual environments involving mixtures of synthetic agents and humans, with cooperative and/or competitive relationships.
 Reference: Computational Intelligence, volume 21 (1), pages 90–110, 2005.
 BibTeX
 Download the file: pdf
Training a Quantum Neural Network
 Authors: Bob Ricks and Dan Ventura
 Abstract:
Quantum learning holds great promise for the field of machine intelligence. The most studied quantum learning algorithm is the quantum neural network. Many such models have been proposed, yet none has become a standard. In addition, these models usually leave out many details, often excluding how they intend to train their networks. This paper discusses one approach to the problem and what advantages it would have over classical networks.
 Reference: In Neural Information Processing Systems, pages 1019–1026, December 2003.
 BibTeX
 Download the file: pdf
Probabilistic Connections in Relaxation Networks
 Authors: Dan Ventura
 Abstract:
This paper reports results from studying the behavior of Hopfield-type networks with probabilistic connections. As the probabilities decrease, network performance degrades. In order to compensate, two network modifications — input persistence and a new activation function — are suggested, and empirical results indicate that the modifications significantly improve network performance.
 Reference: In Proceedings of the International Joint Conference on Neural Networks, pages 934–938, May 2002.
 BibTeX
 Download the file: pdf
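A minimal simulation of the setting, assuming a standard Hebbian Hopfield network in which each connection transmits only with probability p on every update (the two suggested modifications, input persistence and the new activation function, are omitted from this sketch):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hebbian storage of two random bipolar patterns in a Hopfield-type network.
n = 64
patterns = rng.choice([-1, 1], size=(2, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def recall(x, p=0.8, steps=20):
    """Synchronous recall where each connection transmits only with
    probability p on each step (the probabilistic-connection setting)."""
    x = x.copy().astype(float)
    for _ in range(steps):
        mask = rng.random((n, n)) < p
        x = np.sign((W * mask) @ x)
        x[x == 0] = 1
    return x

# Start from pattern 0 with roughly 10% of the bits flipped.
probe = patterns[0].copy()
flip = rng.choice(n, size=6, replace=False)
probe[flip] *= -1
overlap = recall(probe) @ patterns[0] / n   # 1.0 means perfect recall
```

Lowering p in this sketch reproduces the degradation the abstract describes; at p = 0.8 with a light memory load, recall still succeeds.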
Pattern Classification Using a Quantum System
 Authors: Dan Ventura
 Abstract:
We consider and compare three approaches to quantum pattern classification, presenting empirical results from simulations.
 Reference: In Proceedings of the Joint Conference on Information Sciences, pages 537–640, March 2002.
 BibTeX
 Download the file: pdf
A Quantum Analog to Basis Function Networks
 Authors: Dan Ventura
 Abstract:
A Fourier-based quantum computational learning algorithm with similarities to classical basis function networks is developed. Instead of a Gaussian basis, the quantum algorithm uses a discrete Fourier basis with the output being a linear combination of the basis. A set of examples is considered as a quantum system that undergoes unitary transformations to produce learning. The main result of the work is a quantum computational learning algorithm that is unique among quantum algorithms as it does not assume a priori knowledge of a function f.
 Reference: In Proceedings of the International Conference on Computing Anticipatory Systems, pages 286–295, August 2001.
 BibTeX
 Download the file: pdf
On the Utility of Entanglement in Quantum Neural Computing
 Authors: Dan Ventura
 Abstract:
Efforts in combining quantum and neural computation are briefly discussed and the concept of entanglement as it applies to this subject is addressed. Entanglement is perhaps the least understood aspect of quantum systems used for computation, yet it is apparently most responsible for their computational power. This paper argues for the importance of understanding and utilizing entanglement in quantum neural computation.
 Reference: In Proceedings of the International Joint Conference on Neural Networks, pages 1565–1570, July 2001.
 BibTeX
 Download the file: pdf
Learning Quantum Operators
 Authors: Dan Ventura
 Abstract:
Consider the system F|x⟩ = |y⟩, where the operator F is unknown. We examine the possibility of learning the operator F inductively, drawing analogies with ideas from classical computational learning.
 Reference: In Proceedings of the Joint Conference on Information Sciences, pages 750–752, March 2000.
 BibTeX
 Download the file: pdf
Quantum Neural Networks
 Authors: Alexandr Ezhov and Dan Ventura
 Abstract:
This chapter outlines the research, development and perspectives of quantum neural networks – a burgeoning new field which integrates classical neurocomputing with quantum computation. It is argued that the study of quantum neural networks may give us both new understanding of brain function as well as unprecedented possibilities in creating new systems for information processing, including solving classically intractable problems, associative memory with exponential capacity, and possibly overcoming the limitations posed by the Church-Turing thesis.
 Reference: Kasabov, N., editor, Future Directions for Intelligent Systems and Information Science 2000, Physica-Verlag, 2000.
 BibTeX
 Download the file: pdf
Distributed Queries for Quantum Associative Memory
 Authors: Alexandr Ezhov and A. Nifanova and Dan Ventura
 Abstract:
This paper discusses a model of quantum associative memory which generalizes the completing associative memory proposed by Ventura and Martinez. Similar to this model, our system is based on Grover’s well known algorithm for searching an unsorted quantum database. However, the model presented in this paper suggests the use of a distributed query of general form. It is demonstrated that spurious memories form an unavoidable part of the quantum associative memory model; however, the very presence of these spurious states provides the possibility of organizing a controlled process of data retrieval using a specially formed initial state of the quantum database and also of the transformation performed upon it. Concrete examples illustrating the properties of the proposed model are also presented.
 Reference: Information Sciences, volume 34, pages 271–293, 2000.
 BibTeX
 Download the file: pdf
Optically Simulating a Quantum Associative Memory
 Authors: John Howell and John Yeazell and Dan Ventura
 Abstract:
This paper discusses the realization of a quantum associative memory using linear integrated optics. An associative memory produces a full pattern of bits when presented with only a partial pattern. Quantum computers have the potential to store large numbers of patterns and hence have the ability to far surpass any classical neural network realization of an associative memory. In this work two 3-qubit associative memories will be discussed using linear integrated optics. In addition, corrupted, invented and degenerate memories are discussed.
 Reference: Physical Review A, volume 62, 2000. Article 42303.
 BibTeX
 Download the file: pdf
Quantum Associative Memory
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
This paper combines quantum computation with classical neural network theory to produce a quantum computational learning algorithm. Quantum computation uses microscopic quantum level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts. The unique characteristics of quantum theory may also be used to create a quantum associative memory with a capacity exponential in the number of neurons. This paper combines two quantum computational algorithms to produce such a quantum associative memory. The result is an exponential increase in the capacity of the memory when compared to traditional associative memories such as the Hopfield network. The paper covers necessary high-level quantum mechanical and quantum computational ideas and introduces a quantum associative memory. Theoretical analysis proves the utility of the memory, and it is noted that a small version should be physically realizable in the near future.
 Reference: Information Sciences, volume 14, pages 273–296, 2000.
 BibTeX
 Download the file: pdf
Initializing the Amplitude Distribution of a Quantum State
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
To date, quantum computational algorithms have operated on a superposition of all basis states of a quantum system. Typically, this is because it is assumed that some function f is known and implementable as a unitary evolution. However, what if only some points of the function f are known? It then becomes important to be able to encode only the knowledge that we have about f. This paper presents an algorithm that requires a polynomial number of elementary operations for initializing a quantum system to represent only the m known points of a function f.
 Reference: Foundations of Physics Letters, volume 6, pages 547–559, December 1999.
 BibTeX
 Download the file: pdf
A Quantum Associative Memory Based on Grover’s Algorithm
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
Quantum computation uses microscopic quantum level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts. The unique characteristics of quantum theory may also be used to create a quantum associative memory with a capacity exponential in the number of neurons. This paper combines two quantum computational algorithms to produce a quantum associative memory. The result is an exponential increase in the capacity of the memory when compared to traditional associative memories such as the Hopfield network. This paper covers necessary high-level quantum mechanical ideas and introduces a quantum associative memory, a small version of which should be physically realizable in the near future.
 Reference: In Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms, pages 22–27, April 1999.
 BibTeX
 Download the file: ps, pdf
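The recall primitive can be illustrated with a small statevector simulation of Grover amplitude amplification completing a partial pattern. This is plain Grover search over a uniform superposition, not the paper's full two-operator memory construction; the stored patterns and sizes are purely illustrative.

```python
import numpy as np

# Statevector simulation over 4 bits (16 basis states).
N = 16
memories = {0b0101, 0b1100}                # stored bit patterns

# Recall query: complete the partial pattern whose first two bits are '01'.
# The oracle phase-marks stored patterns consistent with the query.
query_match = [s for s in range(N) if (s >> 2) == 0b01 and s in memories]

psi = np.full(N, 1 / np.sqrt(N))           # uniform superposition
oracle = np.ones(N)
oracle[query_match] = -1

for _ in range(3):                          # ~ (pi/4) * sqrt(N/1) iterations
    psi = oracle * psi                      # oracle: flip marked phases
    psi = 2 * psi.mean() - psi              # diffusion: invert about the mean

probs = psi ** 2
recalled = int(np.argmax(probs))            # measurement overwhelmingly likely
                                            # to yield the completed pattern
```

After three iterations the completed pattern 0b0101 carries over 95% of the probability mass, which is the exponential-capacity recall mechanism in miniature.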
Quantum Computational Intelligence: Answers and Questions
 Authors: Dan Ventura
 Abstract:
This is a brief article discussing the interesting possibilities and potential difficulties with combining classical computational intelligence with quantum computation. See http://www.computer.org/intelligent/ex1999/pdf/x4009.pdf for a copy of the article.
 Reference: IEEE Intelligent Systems, volume 4, pages 14–16, 1999.
 BibTeX
 Download the file: pdf
Implementing Competitive Learning in a Quantum System
 Authors: Dan Ventura
 Abstract:
Ideas from quantum computation are applied to the field of neural networks to produce competitive learning in a quantum system. The resulting quantum competitive learner has a prototype storage capacity that is exponentially greater than that of its classical counterpart. Further, empirical results from simulation of the quantum competitive learning system on real-world data sets demonstrate the quantum system’s potential for excellent performance.
 Reference: In Proceedings of the International Joint Conference on Neural Networks (IJCNN’99), paper 513, 1999.
 BibTeX
 Download the file: ps, pdf
A Neural Model of Centered Trigram Speech Recognition
 Authors: Dan Ventura and D. Randall Wilson and Brian Moncur and Tony R. Martinez
 Abstract:
A relaxation network model that includes higher order weight connections is introduced. To demonstrate its utility, the model is applied to the speech recognition domain. Traditional speech recognition systems typically consider only that context preceding the word to be recognized. However, intuition suggests that considering following context as well as preceding context should improve recognition accuracy. The work described here tests this hypothesis by applying the higher order relaxation network to consider both preceding and following context in a speech recognition task. The results demonstrate both the general utility of the higher order relaxation network as well as its improvement over traditional methods on a speech recognition task.
 Reference: In Proceedings of the International Joint Conference on Neural Networks (IJCNN’99), paper 2188, 1999.
 BibTeX
 Download the file: pdf, ps
Artificial Associative Memory using Quantum Processes
 Authors: Dan Ventura
 Abstract:
This paper discusses an approach to constructing an artificial quantum associative memory (QuAM). The QuAM makes use of two quantum computational algorithms, one for pattern storage and the other for pattern recall. The result is an exponential increase in the capacity of the memory when compared to traditional associative memories such as the Hopfield network. Further, the paper argues for considering pattern recall as a non-unitary process and demonstrates the utility of non-unitary operators for improving the pattern recall performance of the QuAM.
 Reference: In Proceedings of the Joint Conference on Information Sciences, volume 2, pages 218–221, October 1998.
 BibTeX
 Download the file: ps, pdf
Quantum and Evolutionary Approaches to Computational Learning
 Authors: Dan Ventura
 Abstract:
This dissertation presents two methods for attacking the problem of high-dimensional spaces inherent in most computational learning problems. The first approach is a hybrid system for combining the thorough search capabilities of evolutionary computation with the speed and generalization of neural computation. This neural/evolutionary hybrid is utilized in three different settings: to address the problem of data acquisition for training a supervised learning system; as a learning optimization system; and as a system for developing neurocontrol. The second approach is the idea of quantum computational learning that overcomes the “curse of dimensionality” by taking advantage of the massive state space of quantum systems to process information in a way that is classically impossible. The quantum computational learning approach results in the development of a neuron with quantum mechanical properties, a quantum associative memory and a quantum computational learning system for inductive learning.
 Reference: PhD thesis, Brigham Young University, Computer Science Department, August 1998.
 BibTeX
 Download the file: ps
Quantum Associative Memory with Exponential Capacity
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
Quantum computation uses microscopic quantum level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts by taking advantage of quantum parallelism. The unique characteristics of quantum theory may also be used to create a quantum associative memory with a capacity exponential in the number of neurons. This paper covers necessary high-level quantum mechanical ideas and introduces a simple quantum associative memory. Further, it provides discussion, empirical results and directions for future work.
 Reference: In Proceedings of the International Joint Conference on Neural Networks, pages 509–513, May 1998.
 BibTeX
 Download the file: ps, pdf
Optimal Control Using a Neural/Evolutionary Hybrid System
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
One of the biggest hurdles to developing neurocontrollers is the difficulty in establishing good training data for the neural network. We propose a hybrid approach to the development of neurocontrollers that employs both evolutionary computation (EC) and neural networks (NN). EC is used to discover appropriate control actions for specific plant states. The survivors of the evolutionary process are used to construct a training set for the NN. The NN learns the training set, is able to generalize to new plant states, and is then used for neurocontrol. Thus the EC/NN approach combines the broad, parallel search of EC with the rapid execution and generalization of NN to produce a viable solution to the control problem. This paper presents the EC/NN hybrid and demonstrates its utility in developing a neurocontroller that demonstrates stability, generalization, and optimality.
 Reference: In Proceedings of the International Joint Conference on Neural Networks, pages 1036–1041, May 1998.
 BibTeX
 Download the file: pdf, ps
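The EC/NN pipeline can be sketched on a toy plant whose optimal action is a* = -2s: evolution discovers good actions for sampled plant states, and the survivors become a training set that a function approximator (a linear fit standing in for the neural network) generalizes into a continuous controller. The plant and fitness function are invented for illustration; they are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy plant: for state s, the optimal control action (unknown to the
# learner) is a* = -2s; fitness scores a candidate action in a state.
def fitness(s, a):
    return -(a + 2 * s) ** 2

# EC phase: for sampled plant states, evolve a good action by
# mutation + selection; survivors become (state, action) training pairs.
train_s, train_a = [], []
for s in np.linspace(-1, 1, 25):
    pop = rng.uniform(-3, 3, size=20)
    for _ in range(30):
        children = pop + 0.1 * rng.normal(size=20)
        both = np.concatenate([pop, children])
        pop = both[np.argsort(fitness(s, both))[-20:]]   # keep the fittest
    train_s.append(s)
    train_a.append(pop[np.argmax(fitness(s, pop))])

# NN phase (linear fit as a stand-in for the network): generalize the
# evolved training set into a continuous state-to-action controller.
coef = np.polyfit(train_s, train_a, 1)
control = np.poly1d(coef)
err = abs(control(0.37) - (-2 * 0.37))   # error at an unseen state
```

The division of labor matches the abstract: EC supplies training data where none exists, and the learned approximator supplies fast execution and generalization to states EC never visited.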
Using Evolutionary Computation to Facilitate Development of Neurocontrol
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
The field of neurocontrol, in which neural networks are used for control of complex systems, has many potential applications. One of the biggest hurdles to developing neurocontrollers is the difficulty in establishing good training data for the neural network. We propose a hybrid approach to the development of neurocontrollers that employs both evolutionary computation (EC) and neural networks (NN). The survivors of this evolutionary process are used to construct a training set for the NN. The NN learns the training set, is able to generalize to new system states, and is then used for neurocontrol. Thus the EC/NN approach combines the broad, parallel search of EC with the rapid execution and generalization of NN to produce a viable solution to the control problem. This paper presents the EC/NN hybrid and demonstrates its utility in developing a neurocontroller for the pole balancing problem.
 Reference: In Proceedings of the International Workshop on Neural Networks and Neurocontrol, August 1997.
 BibTeX
 Download the file: pdf, ps
An Artificial Neuron with Quantum Mechanical Properties
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
Quantum computation uses microscopic quantum level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts. Choosing the best weights for a neural network is a time-consuming problem that makes the harnessing of this quantum parallelism appealing. This paper briefly covers high-level quantum theory and introduces a model for a quantum neuron.
 Reference: In Proceedings of the International Conference on Neural Networks and Genetic Algorithms, pages 482–485, 1997.
 BibTeX
 Download the file: pdf, ps
Concerning a General Framework for the Development of Intelligent Systems
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
There exists ongoing debate between Connectionism and Symbolism as to the nature of and approaches to cognition. Many viewpoints exist, and various issues seen as important have been raised. This paper suggests that a combination of these methodologies will lead to a better overall model. The paper reviews and assimilates the opinions and viewpoints of these diverse fields and provides a cohesive list of issues thought to be critical to the modeling of intelligence. Further, this list results in a framework for the development of a general, unified theory of cognition.
 Reference: In Proceedings of the IASTED International Conference on Artificial Intelligence, Expert Systems and Neural Networks, pages 44–47, 1996.
 BibTeX
 Download the file: pdf, ps
Robust Optimization Using Training Set Evolution
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
Training Set Evolution is an eclectic optimization technique that combines evolutionary computation (EC) with neural networks (NN). The synthesis of EC with NN provides both initial unsupervised random exploration of the solution space as well as supervised generalization on those initial solutions. An assimilation of a large amount of data obtained over many simulations provides encouraging empirical evidence for the robustness of Evolutionary Training Sets as an optimization technique for feedback and control problems.
 Reference: In Proceedings of the International Conference on Neural Networks, pages 524–528, 1996.
 BibTeX
 Download the file: ps, pdf
A General Evolutionary/Neural Hybrid Approach to Learning Optimization Problems
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
A method combining the parallel search capabilities of Evolutionary Computation (EC) with the generalization of Neural Networks (NN) for solving learning optimization problems is presented. Assuming a fitness function for potential solutions can be found, EC can be used to explore the solution space, and the survivors of the evolution can be used as a training set for the NN which then generalizes over the entire space. Because the training set is generated by EC using a fitness function, this hybrid approach allows explicit control of training set quality.
 Reference: In Proceedings of the World Congress on Neural Networks, pages 1091–1095, 1996.
 BibTeX
 Download the file: pdf, ps
On Discretization as a Preprocessing Step For Supervised Learning Models
 Authors: Dan Ventura
 Abstract:
Many machine learning and neurally inspired algorithms are limited, at least in their pure form, to working with nominal data. However, for many real-world problems, some provision must be made to support processing of continuously valued data. BRACE, a paradigm for the discretization of continuously valued attributes, is introduced, and two algorithmic instantiations of this paradigm, VALLEY and SLICE, are presented. These methods are compared empirically with other discretization techniques on several real-world problems, and no algorithm clearly outperforms the others. Also, discretization as a preprocessing step is in many cases found to be inferior to direct handling of continuously valued data. These results suggest that machine learning algorithms should be designed to directly handle continuously valued data rather than relying on preprocessing or ad hoc techniques. To this end, statistical prototypes (SP/MSP) are developed and an empirical comparison with well-known learning algorithms is presented. Encouraging results demonstrate that statistical prototypes have the potential to handle continuously valued data well. However, at this point, they are not suited for handling nominally valued data, which is arguably at least as important as continuously valued data in real-world learning applications. Several areas of ongoing research that aim to provide this ability are presented.
 Reference: Master’s thesis, Brigham Young University, Computer Science Department, April 1995.
 BibTeX
 Download the file: ps
Using Evolutionary Computation to Generate Training Set Data for Neural Networks
 Authors: Dan Ventura and Tim L. Andersen and Tony R. Martinez
 Abstract:
Most neural networks require a set of training examples in order to attempt to approximate a problem function. For many real-world problems, however, such a set of examples is unavailable. Such a problem involving feedback optimization of a computer network routing system has motivated a general method of generating artificial training sets using evolutionary computation. This paper describes the method and demonstrates its utility by presenting promising results from applying it to an artificial problem similar to a real-world network routing optimization problem.
 Reference: In Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms, pages 468–471, 1995.
 BibTeX
 Download the file: pdf, ps
An Empirical Comparison of Discretization Models
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
Many machine learning and neurally inspired algorithms are limited, at least in their pure form, to working with nominal data. However, for many real-world problems, some provision must be made to support processing of continuously valued data. This paper presents empirical results obtained by using six different discretization methods as preprocessors to three different supervised learners on several real-world problems. No discretization technique clearly outperforms the others. Also, discretization as a preprocessing step is in many cases found to be inferior to direct handling of continuously valued data. These results suggest that machine learning algorithms should be designed to directly handle continuously valued data rather than relying on preprocessing or ad hoc techniques.
 Reference: In Proceedings of the 10th International Symposium on Computer and Information Sciences, pages 443–450, 1995.
 BibTeX
 Download the file: ps, pdf
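Two of the simplest discretization methods compared in work like this, equal-width and equal-frequency binning, can be sketched in a few lines; on skewed data, equal-width bins become badly unbalanced while quantile bins stay even. (The paper's six methods are not reproduced here; this is only a generic illustration of the preprocessing step being evaluated.)

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.exponential(size=1000)            # a skewed continuous attribute

# Equal-width: 4 bins of equal range; skew piles most points into one bin.
width_edges = np.linspace(x.min(), x.max(), 5)
width_bins = np.digitize(x, width_edges[1:-1])

# Equal-frequency: quantile edges give each bin roughly the same count.
freq_edges = np.quantile(x, [0.25, 0.5, 0.75])
freq_bins = np.digitize(x, freq_edges)

width_counts = np.bincount(width_bins, minlength=4)
freq_counts = np.bincount(freq_bins, minlength=4)
```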
Using Multiple Statistical Prototypes to Classify Continuously Valued Data
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
Multiple Statistical Prototypes (MSP) is a modification of a standard minimum distance classification scheme that generates multiple prototypes per class using a modified greedy heuristic. Empirical comparison of MSP with other well-known learning algorithms shows MSP to be a robust algorithm that uses a very simple premise to produce good generalization and achieve parsimonious hypothesis representation.
 Reference: In Proceedings of the International Symposium on Neuroinformatics and Neurocomputers, pages 238–245, 1995.
 BibTeX
 Download the file: ps, pdf
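A toy sketch of the idea: start with one statistical prototype (the class mean) per class, and greedily add a prototype where the single-prototype classifier errs. The greedy rule used here (place a prototype at the first misclassified point) is a simplification of MSP's heuristic; the data and names are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)

# Class 0 has two separate clusters, so one prototype per class fails;
# greedily adding a prototype repairs the minimum-distance classifier.
c0 = np.concatenate([rng.normal([-4, 0], 0.3, (50, 2)),
                     rng.normal([4, 4], 0.3, (50, 2))])
c1 = rng.normal([0, 0], 0.3, (100, 2))
X = np.concatenate([c0, c1])
y = np.concatenate([np.zeros(100, int), np.ones(100, int)])

def predict(X, protos, labels):
    """Minimum-distance classification against the prototype set."""
    d = np.linalg.norm(X[:, None] - protos[None], axis=2)
    return labels[d.argmin(axis=1)]

# One statistical prototype (class mean) per class.
protos = np.stack([c0.mean(axis=0), c1.mean(axis=0)])
labels = np.array([0, 1])
acc_one = (predict(X, protos, labels) == y).mean()

# Greedy step: add a prototype at a misclassified point's location.
wrong = predict(X, protos, labels) != y
protos = np.vstack([protos, X[wrong][0]])
labels = np.append(labels, y[wrong][0])
acc_multi = (predict(X, protos, labels) == y).mean()
```

The parsimony claim in the abstract corresponds to stopping the greedy loop as soon as accuracy is acceptable: here three prototypes suffice where one per class could not.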
BRACE: A Paradigm for the Discretization of Continuously Valued Data
 Authors: Dan Ventura and Tony R. Martinez
 Abstract:
Discretization of continuously valued data is a useful and necessary tool because many learning paradigms assume nominal data. A list of objectives for efficient and effective discretization is presented. A paradigm called BRACE (Boundary Ranking And Classification Evaluation) that attempts to meet the objectives is presented along with an algorithm that follows the paradigm. The paradigm meets many of the objectives, with potential for extension to meet the remainder. Empirical results have been promising. For these reasons BRACE has potential as an effective and efficient method for discretization of continuously valued data. A further advantage of BRACE is that it is general enough to be extended to other types of clustering/unsupervised learning.
 Reference: In Proceedings of the Seventh Florida Artificial Intelligence Research Symposium, pages 117–121, 1994.
 BibTeX
 Download the file: pdf, ps