Eric Goodman's Publications (detailed list)
THIS PAGE IS NO LONGER MAINTAINED. Please see our new publications list, which is more up-to-date.
This page contains the titles and abstracts of papers written by Eric Goodman, a member of the BYU Neural Networks and Machine Learning (NNML) Research Group. PostScript files are available for most papers. A more concise list is also available.
Spatiotemporal Pattern Recognition in Liquid State Machines
- Authors: Eric Goodman and Dan Ventura
- Abstract:
The applicability of complex networks of spiking neurons as a general purpose machine learning technique remains open. Building on previous work using macroscopic exploration of the parameter space of an (artificial) neural microcircuit, we investigate the possibility of using a liquid state machine to solve two real-world problems: stockpile surveillance signal alignment and spoken phoneme recognition.
- Reference: In Proceedings of the International Joint Conference on Neural Networks, pages 7579–7584, Vancouver, BC, July 2006.
- BibTeX
- Download the file: pdf
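To make the liquid state machine idea concrete for readers of the abstract above: an LSM feeds input spike trains into a fixed, randomly connected recurrent "liquid" of spiking neurons and trains only a linear readout on the liquid's state. The sketch below is our own illustration, not the authors' code; the network sizes, time constants, weights, and the toy early-vs-late-burst classification task are all assumptions chosen for brevity.

```python
import numpy as np

# Minimal liquid state machine sketch (hypothetical, not the paper's code):
# a fixed random recurrent "liquid" of leaky integrate-and-fire neurons
# drives a trained linear readout; only the readout weights are learned.

rng = np.random.default_rng(0)

N_IN, N_LIQ = 4, 50                 # input channels, liquid neurons
DT, TAU_M, TAU_S = 1.0, 5.0, 10.0   # ms: time step, membrane and synaptic time constants
V_TH = 1.0                          # spike threshold
P_CONN = 0.2                        # recurrent connection probability

W_in = rng.normal(0.0, 1.5, (N_LIQ, N_IN))
W_rec = rng.normal(0.0, 0.2, (N_LIQ, N_LIQ)) * (rng.random((N_LIQ, N_LIQ)) < P_CONN)

def run_liquid(spikes_in):
    """Drive the liquid with an input spike raster (T, N_IN); return the
    low-pass-filtered liquid spike activity (T, N_LIQ) used as features."""
    T = spikes_in.shape[0]
    v = np.zeros(N_LIQ)             # membrane potentials
    s = np.zeros(N_LIQ)             # spikes from the previous step
    trace = np.zeros((T, N_LIQ))
    decay = np.exp(-DT / TAU_S)
    for t in range(T):
        i_syn = W_in @ spikes_in[t] + W_rec @ s
        v += (DT / TAU_M) * (-v + i_syn)
        s = (v >= V_TH).astype(float)
        v[v >= V_TH] = 0.0          # reset neurons that fired
        trace[t] = (trace[t - 1] * decay if t > 0 else 0.0) + s
    return trace

def make_pattern(late, T=60):
    """Toy spatiotemporal input: sparse background noise plus a burst
    placed either early (t = 10..19) or late (t = 40..49)."""
    x = (rng.random((T, N_IN)) < 0.02).astype(float)
    t0 = 40 if late else 10
    x[t0:t0 + 10] = (rng.random((10, N_IN)) < 0.5).astype(float)
    return x

# The liquid state at the final time step is each sample's feature vector.
X = np.array([run_liquid(make_pattern(k % 2 == 1))[-1] for k in range(40)])
y = np.array([k % 2 for k in range(40)], dtype=float)

# Ridge-regression readout: the only trained component of the machine.
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N_LIQ), X.T @ y)
train_acc = ((X @ w > 0.5) == (y > 0.5)).mean()
```

The key design property this illustrates is that the recurrent liquid is never trained: it acts as a fixed nonlinear temporal filter, and all learning happens in a simple linear readout over its state.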
Effectively Using Recurrently Connected Spiking Neural Networks
- Authors: Eric Goodman and Dan Ventura
- Abstract:
Recurrently connected spiking neural networks are difficult to use and understand because of the complex nonlinear dynamics of the system. Through empirical studies of spiking networks, we deduce several principles that are critical to success. Network parameters such as synaptic time delays, time constants, and connection probabilities can be adjusted to significantly impact accuracy. We show how to adjust these parameters to fit the type of problem.
- Reference: In Proceedings of the International Joint Conference on Neural Networks, pages 1542–1547, July 2005.
- BibTeX
- Download the file: pdf
Time Invariance and Liquid State Machines
- Authors: Eric Goodman and Dan Ventura
- Abstract:
Time invariant recognition of spatiotemporal patterns is a common task of signal processing. The liquid state machine (LSM) is a paradigm which robustly handles this type of classification. Using an artificial dataset with target pattern lengths ranging from 0.1 to 1.0 seconds, we train an LSM to find the start of the pattern with a mean absolute error of 0.18 seconds. Also, LSMs can be trained to identify spoken digits, 1-9, with an accuracy of 97.6%, even with scaling by factors ranging from 0.5 to 1.5.
- Reference: In Proceedings of the Joint Conference on Information Sciences, pages 420–423, July 2005.
- BibTeX
- Download the file: pdf